Can Facebook’s Machine-Learning Algorithms Accurately Predict Suicide?

The social media giant aims to save lives by quickly flagging and responding to worrying posts

When Naika Venant killed herself in January, the Miami-area teen broadcast the event for two hours on Facebook’s popular video live-streaming feature, Facebook Live. A friend of hers saw the video and alerted police, but aid did not arrive in time to save the 14-year-old’s life. Other young people have also recently posted suicidal messages on social media platforms including Twitter, Tumblr and Live.me.

In a bid to save lives, Facebook and other social media giants are now wading into suicide prevention work—creating new alert systems designed to better identify and help at-risk individuals. Last Wednesday Facebook unveiled a new suite of tools, including the company’s first pattern recognition algorithms to spot users who may be suicidal or at risk of other self-harm. Facebook says the new effort will help it flag concerning posts and connect users with mental health services. The effort also represents a new application of the company’s machine-learning technology.

Suicide is now the 10th-leading cause of death in this country and the second-leading cause of death among youth, so social media could be an important intervention point, says psychologist Daniel Reidenberg, executive director of Save.org, one of Facebook’s partner mental health organizations. Facebook currently reports more than a billion daily users around the world. In the U.S., 71 percent of teens between 13 and 17 and 62 percent of adults over 18 use the platform, according to two 2015 reports by the Pew Research Center.


To reach its at-risk users, Facebook says it is expanding the services that allow friends to report posts containing signs of suicidal or self-mutilation plans, and it provides a menu of options for both the person at risk and the friend who reported the post. Choices include hotlines to call, prompts to reach out to friends and tips on what to do in moments of crisis. This tool will now be available for Facebook Live streams as well. Similar reporting systems exist on a number of social media platforms, including Twitter, Pinterest and YouTube. Facebook is now also piloting a program that will let people use Messenger, its instant messaging app, to connect directly with counselors from crisis support organizations, including Crisis Text Line and the National Suicide Prevention Lifeline (NSPL).

Facebook also plans to use pattern recognition algorithms to identify people who may be at risk of self-harm, and to provide them with resources to help. The company says its new artificial intelligence program, which will be rolled out on a limited basis at first, will employ machine learning to identify posts that suggest suicidal thoughts—even if no one on Facebook has yet reported them.

William Nevius, a spokesperson at Facebook, says the machine-learning algorithms will use two signals—one from words or phrases that relate to suicide or self-harm in users’ posts and the other from comments added by concerned friends—to determine whether someone is at risk. If the pattern recognition program identifies concerning posts, the “report post” button will appear more prominently to visually encourage users to click it. “The hope is that the artificial intelligence learning will pick up on multiple signals from various points, [put] that together and activate a response, both to the person who might be at risk and to others who [could help],” Reidenberg wrote in an e-mail.

If those cues signal a higher level of urgency, the system will automatically alert Facebook’s Community Operations team—a group of staff members who provide technical support and monitor the site for issues such as bullying or hacked accounts. The team will quickly review the post and determine whether the person needs additional support. If so, the team makes sure a page of resources appears in the user’s news feed. (That page would normally pop up only if a post were reported by a concerned friend.)
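
Facebook has not published the details of its system, but the workflow described above can be illustrated with a minimal sketch. Everything in it (the phrase lists, the weights, the thresholds and the action names) is an assumption made for illustration, not Facebook's actual implementation:

```python
import re

# Hypothetical phrase lists -- placeholders, not Facebook's actual lexicon.
POST_PHRASES = [r"\bkill myself\b", r"\bwant to die\b", r"\bend it all\b"]
COMMENT_PHRASES = [r"\bare you ok\b", r"\bplease don'?t\b",
                   r"\bhere for you\b", r"\bcall me\b"]

def phrase_hits(text, patterns):
    """Count how many of the phrase patterns appear in the text."""
    return sum(bool(re.search(p, text.lower())) for p in patterns)

def risk_score(post, comments):
    """Signal 1: wording of the post itself. Signal 2: concern expressed by
    friends in the comments. The weights are arbitrary placeholders."""
    return (1.0 * phrase_hits(post, POST_PHRASES)
            + 0.5 * sum(phrase_hits(c, COMMENT_PHRASES) for c in comments))

def respond(post, comments):
    """Assumed escalation policy built on the combined score."""
    score = risk_score(post, comments)
    if score >= 2.5:
        # Higher urgency: queue the post for human review, after which a page
        # of crisis resources can be surfaced in the user's news feed.
        return "alert_community_operations"
    if score >= 1.0:
        # Some concern: make the "report post" option more prominent.
        return "promote_report_button"
    return "no_action"

post = "I just want to end it all"
comments = ["Are you OK?? Please call me", "We're here for you"]
print(respond(post, comments))  # alert_community_operations
```

As the next paragraph notes, the real system learns from tens of thousands of reported posts rather than from a hand-written phrase list; the point of the sketch is only the two-signal scoring and the tiered response.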

To help its artificial intelligence learn to flag concerning posts, Facebook mined “tens of thousands of posts that have been reported by friends who are concerned about another friend,” explains John Draper, project director of the NSPL, also a Facebook partner organization.
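
Facebook has not said which learning method it uses; the sketch below is only a generic illustration of how a text classifier can be trained on posts labeled by whether concerned friends reported them, using scikit-learn and a tiny invented dataset:

```python
# Generic illustration of training a text classifier on labeled posts.
# The dataset here is invented; a real system would learn from the tens of
# thousands of reported posts mentioned above, plus far richer signals.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "i want to end it all tonight",
    "i can't do this anymore, goodbye everyone",
    "great game last night, what a comeback",
    "looking forward to the weekend hike",
]
labels = [1, 1, 0, 0]  # 1 = reported as concerning, 0 = not reported

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

print(model.predict(["i just want it all to end"]))  # likely [1]
```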

Although the current algorithms are limited to text, Facebook may eventually use AI to identify worrying photos and videos as well. CEO Mark Zuckerberg announced last month that the company has been “researching systems that can look at photos and videos to flag content our team should review” as a part of efforts to assess reported content, including suicides, bullying and harassment. “This is still very early in development but we have started to have it look at some content, and it already generates about one third of all reports to the team that reviews content for our community,” Zuckerberg wrote. Nevius did not provide information about when these additional tools might be applied.

An Early Signal

Some mental health experts say AI is still limited at identifying suicide risk via language alone. “I think [machine learning] is a step in the right direction,” says Joseph Franklin, a psychologist at Florida State University who studies suicide risk. Franklin and his colleagues recently conducted a meta-analysis of 365 studies from 1965 to 2014. They found that despite decades of research, experts’ ability to detect future suicide attempts has remained no better than chance. “There’s just a tiny predictive signal,” Franklin says. These limitations have spurred him and others to work on developing machine-learning algorithms that help assess risk by analyzing data from electronic health records. “The limitation of health records is that…we can accurately predict [risk] over time, but we don’t know what day they’re going to attempt suicide,” Franklin says. Social media could be very helpful at providing a clearer sense of timing, he adds. But that, too, would still have key limitations: “There’s just not a lot you can tell from text, even using more complicated natural language processing, because people can use the words ‘suicide’ or ‘kill myself’ for many different reasons—and you don’t know if someone is using it in a particular way.”
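
Franklin's point about ambiguous wording is easy to demonstrate with a toy keyword check of the kind sketched earlier; the keyword list and example posts below are invented for illustration:

```python
import re

SUICIDE_KEYWORDS = [r"\bkill myself\b", r"\bsuicide\b"]  # illustrative only

def keyword_flag(text):
    """Flag a post if any keyword pattern appears in it."""
    return any(re.search(p, text.lower()) for p in SUICIDE_KEYWORDS)

posts = [
    "I can't take it anymore. I'm going to kill myself tonight.",  # genuine distress
    "This traffic is going to make me kill myself lol",            # figure of speech
    "Sharing a good article on suicide prevention hotlines",       # informational
]

for p in posts:
    print(keyword_flag(p), "|", p)

# All three posts are flagged: the words match, but the intent behind them
# differs completely, which is why text alone carries only a weak signal.
```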

Some researchers, such as Glen Coppersmith, founder and CEO of the mental health analytics company Qntfy, have discovered useful signals in language alone. In a recent examination of publicly available Twitter data, Coppersmith and his colleagues found the emotional content of posts—including text and emojis—could be indicative of risk. He notes, however, that these are still “little pieces of the puzzle,” adding, “The other side of it, the sort of nonlanguage signal, is timing. Facebook has information about when you’re logging in, when you’re chatting…and what hours are you logging in, [which are] really interesting signals that might be relevant to whether or not you’re at proximal risk for suicide.”
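
The nonlanguage signal Coppersmith mentions, the timing of activity, lends itself to simple features. The sketch below computes the share of a user's activity that falls in the early morning hours; the feature and its cutoff hours are assumptions chosen for illustration, not a validated risk marker:

```python
from datetime import datetime

def late_night_fraction(timestamps, start_hour=0, end_hour=5):
    """Fraction of activity (logins, posts, chats) between midnight and
    5 a.m. local time -- one example of a timing feature."""
    if not timestamps:
        return 0.0
    late = sum(1 for t in timestamps if start_hour <= t.hour < end_hour)
    return late / len(timestamps)

logins = [datetime(2017, 3, 6, 2, 14), datetime(2017, 3, 6, 3, 40),
          datetime(2017, 3, 7, 19, 5)]
print(round(late_night_fraction(logins), 2))  # 0.67
```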

Craig Bryan, a researcher at the University of Utah who investigates suicide risk in veteran populations, has started to examine the importance of timing in the path to suicide. “In our newer research, we’ve been looking at the temporal patterns of sequences as they emerge—where [we find] it’s not just having lots of posts about depression or alcohol use, for instance, [but] it’s the order in which you write them,” he says.
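
The distinction Bryan draws, order rather than frequency alone, can be made concrete with a subsequence check. The topics and the particular ordering below are placeholders; establishing which sequences actually signal risk is what his research aims to do:

```python
def contains_ordered_pattern(post_topics, pattern):
    """True if the topics in `pattern` occur in `post_topics` in that order
    (not necessarily consecutively)."""
    remaining = iter(post_topics)
    return all(topic in remaining for topic in pattern)

# Two users with identical topic counts but different orderings
user_a = ["depression", "alcohol use", "saying goodbye"]
user_b = ["saying goodbye", "alcohol use", "depression"]
pattern = ["depression", "alcohol use", "saying goodbye"]

print(contains_ordered_pattern(user_a, pattern))  # True
print(contains_ordered_pattern(user_b, pattern))  # False
```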

Another important factor to consider, especially with teens, is how often their language changes, says Megan Moreno, a pediatrician specializing in adolescent medicine at Seattle Children’s Hospital. In a 2016 study Moreno and colleagues discovered that on Instagram (a social media platform for sharing photos and video), once a self-injury–related hashtag was banned or flagged as harmful, numerous spin-off versions would emerge. For example, when Instagram blocked #selfharm, replacements with alternate spelling (#selfharmmm and #selfinjuryy) or slang (#blithe and #cat) emerged. “I continue to think that machine learning is always going to be a few steps behind the way adolescents communicate,” Moreno says. “As much as I admire these efforts, I think we can’t rely on them to be the only way to know whether a kid is struggling.”
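
Some of the spin-off hashtags Moreno describes could be caught with simple normalization, such as collapsing repeated letters, but slang replacements cannot, which is exactly her point. A rough sketch with an assumed list of banned stems:

```python
import re

BANNED_STEMS = {"selfharm", "selfinjury"}  # known terms only

def normalize(tag):
    """Lowercase, strip '#' and collapse runs of repeated letters,
    so 'selfharmmm' becomes 'selfharm'."""
    tag = tag.lower().lstrip("#")
    return re.sub(r"(.)\1+", r"\1", tag)

def is_known_variant(tag):
    return normalize(tag) in BANNED_STEMS

for tag in ["#selfharmmm", "#selfinjuryy", "#blithe", "#cat"]:
    print(tag, is_known_variant(tag))

# #selfharmmm and #selfinjuryy are caught; the slang terms #blithe and #cat
# are not -- new vocabulary keeps the algorithm a step behind.
```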

“The bottom line is that it’s definitely an effort that makes a lot of sense, given the fact that a lot of people connect with social media,” says Jessica Ribeiro, a psychologist applying machine learning to suicide prevention research at Florida State. “At the same time, they’re limited by what the science in this area doesn’t know—and unfortunately, we don't know a lot despite decades of research.”