
Could New Tools Help Prevent Suicide and Self-Harm?

Social media is uniquely positioned to detect suicidal tendencies. Facebook’s new tools offer improved detection, live chat support from crisis-support organizations via Messenger, and integrated suicide prevention resources to help people in real time.
A sign for an emergency phone is seen on the span of the Golden Gate Bridge. An estimated 1,300 people are believed to have jumped to their death from the bridge since it was opened in 1937. (Photo by Justin Sullivan/Getty Images)

Facebook has announced a set of new tools to help detect suicidal intent in messages, live videos, and posts. It has released these tools in response to a growing international public health problem: one person dies by suicide every 40 seconds, according to the World Health Organization (WHO) report Preventing Suicide: A Global Imperative.


There are three reasons why social media is in a unique position to detect, as well as to help prevent, suicide and other grave self-harm.

    • First, there is an age factor: among young people 15-29 years of age, suicide is the second leading cause of death globally, according to the WHO. This cohort is a core user base of social media.
    • Second, the WHO notes that suicides occur in all regions of the world, which makes the globally networked systems of social media well suited to tracking suicidal messages at a global scale.
    • Third, the WHO points out that most suicides occur in low- and middle-income countries, where resources for the early identification of people in need are limited. This is where Facebook’s early-identification tools could prove especially valuable.
    • In addition, there exists a phenomenon of suicides on social media itself, whereby people have actually broadcast their own suicides to online audiences.

Amanda Todd was a tragic young victim of cyberbullying who, driven by cruelty and loneliness, posted a YouTube video describing her torment shortly before taking her own life in 2012. (Photo by Mladen Antonov / Getty Images)

Incidents of fatal self-harm that are influenced by, or broadcast through, social media are alarming, and cases around the world have caused tremendous concern. Consider the sinister Blue Whale social media ‘game’, which has been linked to more than 130 teenage deaths in Russia.

An academic study of Chinese social media found that when a suicidal message was broadcast, a minority of observers responded with supportive messages while many chose to remain passive viewers. In Canada, the now-infamous case of Amanda Todd, a repeatedly bullied teenager, drew national attention after she posted a YouTube video describing her torment shortly before taking her own life.

Facebook’s new tools include:

    • Integrated suicide prevention tools to help people in real time on Facebook Live
    • Live chat support from crisis support organizations through Messenger
    • Streamlined reporting of suicidal content, assisted by artificial intelligence

These build on existing mechanisms that Facebook has put in place for suicide prevention, developed in collaboration with mental health organizations such as Save.org, the National Suicide Prevention Lifeline, Forefront and Crisis Text Line.

The new tools depend on three important elements: partnerships with key mental health organizations, proactive user engagement, and artificial-intelligence methods for flagging and reporting posts that may indicate suicide risk.
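To give a concrete, if deliberately simplified, sense of what automated text flagging involves, here is a minimal Python sketch. The phrase list, the FlagResult type, and the flag_post function are illustrative assumptions invented for this example; Facebook’s actual classifiers are machine-learned, far more sophisticated, and not public.

```python
# Hypothetical sketch of text-based risk flagging; this is NOT Facebook's system.
# A real pipeline would use classifiers trained on large labeled datasets,
# and every flag would be reviewed by people before any outreach happens.
from dataclasses import dataclass
from typing import List

# Illustrative phrases only; a production system learns patterns from data
# rather than relying on a fixed keyword list.
RISK_PHRASES = (
    "want to die",
    "end my life",
    "no reason to go on",
)


@dataclass
class FlagResult:
    flagged: bool        # whether the post should be routed for human review
    matched: List[str]   # which phrases triggered the flag


def flag_post(text: str) -> FlagResult:
    """Return a flag decision for a single post based on simple phrase matching."""
    lowered = text.lower()
    matched = [phrase for phrase in RISK_PHRASES if phrase in lowered]
    return FlagResult(flagged=bool(matched), matched=matched)


if __name__ == "__main__":
    for post in ("Having a great day at the beach!",
                 "I feel like there is no reason to go on."):
        result = flag_post(post)
        print(f"{post!r} -> flagged={result.flagged}, matched={result.matched}")
```

Even in this toy form, the design choice matters: the code only surfaces a post for review, consistent with the article’s emphasis on partnerships with mental health organizations and human crisis support rather than fully automated decisions.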

According to the World Health Organization, most suicides occur in low- and middle-income countries, where resources for the early identification of people in need are limited. This is where Facebook’s early-identification tools could prove especially valuable. (Photo by Paula Bronstein / Getty Images)

While Facebook’s move should be seen in a positive light, given its power to save the lives of vulnerable people, it is also important to understand the balance that must be struck between safety and privacy.

This is by no means an easy tightrope to walk. On one hand, greater oversight of messages that hint at suicidal intent requires greater intrusion into what users post.

On the other hand, the early flagging of suicidal tendencies, which on social media correspond statistically to actual self-harm, can protect people from grave harm.

So long as a balance is carefully struck between safety and privacy, these new tools for suicide prevention may indeed prove life-saving for the users who find themselves most alone and most vulnerable.

