This Twitter algorithm predicts mental illness better than trained professionals
A supervised learning algorithm can predict clinical depression much earlier and more accurately than trained health professionals.
One of the more surprising, and upsetting, uses of social media has been suicides broadcast on Facebook Live. Though the reasons for suicide are complex, the mere threat is often a cry for help, acceptance, or recognition. During the two years I worked as a patient monitor in an emergency room, I discovered that most people who attempt to take their own lives desire, more than anything else, a pair of ears to listen to their problems.
It's hard to gauge a person's reality based on social media habits, however. Those who spout vitriolic rhetoric are often quite approachable and reserved in person. We can't read inflections and temperament from words on a screen, or take into consideration that the person might just be having a bad day.
That said, social media can be a powerful indicator of those at risk of suffering from mental health disorders, a new study published in Scientific Reports suggests. A team led by Andrew Reece, in the Department of Psychology at Harvard, collected Twitter data from 204 individuals. Of those, 105 suffered from depression, alongside a control group of 99 healthy subjects. The team then used a supervised learning algorithm to see if changes in language predicted clinical depression.
The answer is yes. Depressed patients used more words like death, no, and never, while posting fewer positive words—like happy, beach, and photo—in the lead-up to their diagnosis.
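The study's actual model is more sophisticated than a simple word count, but the core idea of classifying users by the words they post can be sketched with a toy Naive Bayes text classifier. Everything below is invented for illustration: the tiny labeled "tweets," the labels, and the function names are assumptions, not the researchers' data or code.

```python
from collections import Counter
import math

# Hypothetical toy corpus: short texts labeled by class.
# (The real study used the full Twitter histories of 204 users.)
TRAIN = [
    ("never feel happy anymore no point", "depressed"),
    ("death of all hope no no never", "depressed"),
    ("great day at the beach photo incoming", "healthy"),
    ("so happy with friends today", "healthy"),
]

def train_naive_bayes(examples):
    """Count how often each word appears in each class."""
    counts = {"depressed": Counter(), "healthy": Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def classify(counts, text):
    """Pick the class with the higher add-one-smoothed log-likelihood."""
    vocab = set(w for c in counts.values() for w in c)
    scores = {}
    for label, c in counts.items():
        total = sum(c.values()) + len(vocab)  # smoothing denominator
        scores[label] = sum(
            math.log((c[w] + 1) / total) for w in text.split()
        )
    return max(scores, key=scores.get)

model = train_naive_bayes(TRAIN)
print(classify(model, "no happy never"))   # → depressed
print(classify(model, "happy beach photo"))  # → healthy
```

Even this crude sketch captures the paper's finding in miniature: words like "death," "no," and "never" shift the score toward the depressed class, while words like "happy," "beach," and "photo" shift it the other way.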
Figure 4. Depression word-shift graph revealing contributions to the difference in Twitter happiness observed between depressed (5.98) and healthy (6.11) participants. In column 3, (−) indicates a relatively negative word, and (+) indicates a relatively positive word, both with respect to the average happiness of all healthy tweets. An up (down) arrow indicates that word was used more (less) by the depressed class. Words on the left (right) contribute to a decrease (increase) in happiness in the depressed class. [Credit: Andrew G. Reece et al.]
A second group of 174 Twitter users was also studied. Of these, 63 suffered from PTSD. Again, changes in language predicted which users would go on to be diagnosed.
These results are not perfect. In both cases, the pool of Twitter users was preselected with a nearly even ratio of healthy to unhealthy subjects, which does not reflect society as a whole. Add to this the fact that many people suffering from depression or PTSD do not use social media at all. These shortcomings make it hard to extrapolate firm population-level numbers from the findings.
That said, Reece and his team are borrowing this predictive model from similar early warning systems already in place for hard-to-detect cancers, disease outbreaks, and regional dietary health issues. Conditions like addiction and suicidal ideation have already been studied through social media. While this trend of using public-facing data to detect potential cognitive disorders is new, it means cries for help might be detected, and treated, much sooner.
Reece and team believe they have found if not a silver bullet for predicting depression and PTSD, at least a shinier one than has so far been developed:
Our findings strongly support the claim that computational methods can effectively screen Twitter data for indicators of depression and PTSD. Our method identified these mental health conditions earlier and more accurately than the performance of trained health professionals, and was more precise than previous computational approaches.
With the current rise in depression and anxiety, especially among teens, a particularly vulnerable group that has now fully grown up on social media, such predictive tools could prove to be a valuable source of therapy and recovery moving forward.
"We hope that our research will eventually help improve mental health care, for example in preventive screening," Stanford researcher Katharina Lix told Digital Trends. “We could imagine clinicians using this technology as a supporting tool during a patient's initial assessment, provided that the patient has agreed to have their social media data used in this way. However, before we get to that point, the technology needs to be validated using a larger sample of people that's representative of the general population. We want to emphasize that any real-world application of this technology must carefully take into account ethical and privacy concerns."
Derek is the author of Whole Motion: Training Your Brain and Body For Optimal Health. Based in Los Angeles, he is working on a new book about spiritual consumerism. Stay in touch on Facebook and Twitter.
"Deepfakes" and "cheap fakes" are becoming strikingly convincing — even ones generated on freely available apps.
- A writer named Magdalene Visaggio recently used FaceApp and Airbrush to generate convincing portraits of early U.S. presidents.
- "Deepfake" technology has improved drastically in recent years, and some countries are already experiencing how it can weaponized for political purposes.
- It's currently unknown whether it'll be possible to develop technology that can quickly and accurately determine whether a given video is real or fake.
The future of deepfakes

In 2018, Gabon's president Ali Bongo had been out of the country for months receiving medical treatment. After Bongo hadn't been seen in public for that long, rumors began swirling about his condition. Some suggested Bongo might even be dead. In response, Bongo's administration released a video that seemed to show the president addressing the nation.

But the video is strange, appearing choppy and blurry in parts. After political opponents declared the video to be a deepfake, Gabon's military attempted an unsuccessful coup. What's striking about the story is that, to this day, experts in the field of deepfakes can't conclusively verify whether the video was real.

The uncertainty and confusion generated by deepfakes pose a "global problem," according to a 2020 report from The Brookings Institution. In 2018, the U.S. Department of Defense released some of the first tools able to successfully detect deepfake videos. The problem, however, is that deepfake technology keeps improving, meaning forensic approaches may forever be one step behind the most sophisticated forms of deepfakes.

As the 2020 report noted, even if the private sector or governments create technology to identify deepfakes, they will:

"...operate more slowly than the generation of these fakes, allowing false representations to dominate the media landscape for days or even weeks. 'A lie can go halfway around the world before the truth can get its shoes on,' warns David Doermann, the director of the Artificial Intelligence Institute at the University of Buffalo. And if defensive methods yield results short of certainty, as many will, technology companies will be hesitant to label the likely misrepresentations as fakes."