Can social media help mental health practitioners prevent suicides?

Suicide is the tenth leading cause of death among Americans and the third leading cause among those ages 15 to 24.1 As many as 36% of suicide victims leave a suicide note.2 Researchers have analyzed such notes with the aim of identifying specific content and patterns that might aid in creating more effective strategies for preventing suicide.3-5

One study found that the presence of a suicide note is an indicator of serious intent; that is, when an initial attempt fails, those who left a suicide note are at increased risk of subsequent completed suicide.4 Researchers also found that 75% of suicide notes contained the theme “apology/shame,” suggesting that many suicide victims might have welcomed an alternative to suicide to solve their personal predicament. Tragically, however, most suicide notes are not discovered until suicide has been attempted or completed.4

That’s where social media comes in. As platforms for self-expression, social networking sites such as Facebook, Twitter, and Tumblr are sources of real-time information that could aid in suicide prevention.6 With that in mind, we:
   • present 2 cases in which a patient announced her suicidal ideation on Facebook
   • consider the opportunities that social media present for early intervention
   • propose high-tech monitoring methods for high-risk patients.


CASE 1
Major depressive disorder (MDD) and nonadherence
Ms. S, age 24, has a 4-year history of MDD and treatment nonadherence. She had no history of suicide attempt or inpatient treatment, but she had briefly engaged in psychotherapy before discontinuing visits. Physically healthy and employed as a security officer, Ms. S recently broke up with her boyfriend, who had abused her physically—and against whom she had an order of protection.

On the day in question, Ms. S posted several status updates on Facebook expressing hopelessness, which, over the course of the day, escalated to expression of frank suicidal ideation:
   • “I am ugly, no man would ever want to live with me.”
   • “I have made no effect on the world and I’m just a waste of space.”
   • “It’s sad that I want to die but such is life. We all die one day.”
   • “I’m going to kill myself. It was nice knowing you world. Goodbye everyone.”


CASE 2
Substance abuse and previous suicide attempt
Ms. B, age 21, had a remote (approximately age 16) history of a suicide attempt and was actively abusing 3,4-methylenedioxymethamphetamine (MDMA [“Ecstasy,” “Molly”]) and Cannabis. She was not receiving outpatient care. One afternoon, Ms. B walked into the emergency department (ED) and said she had just taken 17 ibuprofen pills with the intent of killing herself.

On initial evaluation, Ms. B was irritable and uncooperative, denying all psychiatric symptoms and refusing to divulge details of her recent behavior. Her mother, who had not accompanied her daughter to the ED, reported that Ms. B had engaged in excessive risk-taking—speeding, driving while intoxicated, having multiple sex partners—for the past 5 years, resulting in several arrests for minor offenses. In the 2 weeks leading up to the suicide attempt, Ms. B had been depressed and had been sleeping and eating poorly.

Two days earlier, her mother added, Ms. B had posted disturbing notes on Facebook: “Life is useless,” she declared in one post; “I’d be better off dead,” in another.


Suicidal content online

Worldwide, Facebook has 1.35 billion active users each month.7 Thus far, a limited number of posts indicating suicidal intent have been reported in the lay press,8 but evidence suggests that the use of social media for this purpose is an emerging trend.9

A search of the literature yielded only 3 case reports.8,10,11 In one case, a delayed response to a suicide note resulted in a failure to prevent the suicide.8 In another, a clinician’s discovery of a patient’s explicitly suicidal Facebook post led to what the team leader described as a more meaningful therapeutic relationship.10 The clinician’s discovery might have been pivotal in preventing the patient from committing suicide.

The authors of these case reports explored the idea of using Facebook for suicide prevention, raising a number of practical and ethical issues. Among them are the potential for immediate intervention by other Facebook users and the extent to which suicidal posts on social media sites induce copycat suicides.8

Issues associated with clinicians’ use of social media to follow or monitor patients include the ethical concepts of beneficence and nonmaleficence, privacy and confidentiality, clinical judgment, and informed consent,8,10 which encompass potential benefit and harm and the difference between actual and perceived privacy violations. Bennett et al11 recommend developing guidelines for the use of social media to enhance medical care and provide appropriate protections to both patients and providers.

Reporting suicidal content. Although the primary purpose of Facebook is to give users the opportunity to share life events and thoughts with friends and family, the company does address the question of suicidal content in its Help Center (Box 1).12 As our 2 cases illustrate, however, intervention can be significantly delayed.

CASE 1 CONTINUED
Call to 911
Fortunately for Ms. S, a friend who read her Facebook posts called 911; even then, however, 16 hours passed between the initial postings and the patient’s arrival at the ED. When emergency medical services brought Ms. S to the Comprehensive Psychiatric Emergency Program, she acknowledged suicidal ideation without an active plan. Other symptoms included depressed mood, a sense of hopelessness, feelings of worthlessness lasting >2 months, low self-esteem, dissatisfaction with body image, and a recent verbal altercation with a friend.

Ms. S was admitted to the inpatient unit for further observation and stabilization.


CASE 2 CONTINUED
No one answered her calls

Ms. B, who did not arrive at the ED until 2 days after her suicidal posts, corroborated the history given by her mother. She also reported that she had attempted to reach out to her friends for support, but no one had answered her phone calls. She felt hurt because of this, Ms. B said, and impulsively ingested the pills.

Ms. B said she regretted the suicide attempt. Nevertheless, in light of her recent attempt and persistent distress, she was admitted to the inpatient unit for observation and stabilization.


Can artificial intelligence help?
There is no effective means of tracking high-risk patients after their first contact with the mental health system, despite the fact that (1) those who attempt suicide are at high risk of subsequent suicide attempts3 and (2) we have the potential to prevent future attempts based on self-expressed online cues. We propose using machine learning algorithms—a branch of artificial intelligence—to capture and process suicide notes on Facebook in real time.

Machine learning can be broadly defined as computational methods using experience to improve performance or make accurate predictions. In this context, “experience” refers to past information, typically in the form of electronic data collected and analyzed to design accurate and efficient predictive algorithms. Machine learning, which incorporates fundamental concepts in computer science, as well as statistics, probability, and optimization, already has been established in a variety of applications, such as detecting e-mail spam, natural language processing, and computational biology.13
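
To make this concrete, the sketch below (in Python, using the open-source scikit-learn library) shows the kind of supervised text classifier this paragraph describes: labeled past posts serve as the “experience” from which the algorithm learns to score new text. The toy posts, labels, and model choice are illustrative assumptions only, not a validated clinical tool.

```python
# Minimal illustrative sketch of supervised text classification.
# The toy data and labels are invented; a real system would need a large,
# carefully curated, and ethically sourced training corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# "Experience": past posts labeled 1 (concerning) or 0 (not concerning)
train_texts = [
    "I'm going to kill myself. Goodbye everyone.",
    "I'd be better off dead.",
    "Had a great day at the beach with friends!",
    "Looking forward to the concert this weekend.",
]
train_labels = [1, 1, 0, 0]

# TF-IDF word/phrase features feeding a logistic regression classifier
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# Predicted probability that a new post is concerning
new_post = ["Life is useless."]
print(model.predict_proba(new_post)[0][1])
```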

Affective computing, also known as emotion-oriented computing, is a branch of artificial intelligence that involves the design of systems and devices that can recognize, interpret, and process human moods and emotions (Box 2).14
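
At its simplest, “recognizing” emotion in text can be illustrated with a small valence lexicon that scores the mood of a post. The hypothetical lexicon and scoring rule below are assumptions for illustration; real affective-computing systems rely on far richer linguistic and contextual features.

```python
# Toy valence lexicon; real affective computing uses validated lexicons
# (or learned models), context, negation handling, and many other signals.
VALENCE = {
    "hopeless": -3, "worthless": -3, "die": -3, "useless": -2,
    "sad": -2, "ugly": -1, "nice": 1, "happy": 2, "great": 3,
}

def mood_score(post: str) -> int:
    """Sum the valence of known words in a post (illustrative only)."""
    words = post.lower().replace(",", " ").replace(".", " ").split()
    return sum(VALENCE.get(w, 0) for w in words)

print(mood_score("It's sad that I want to die but such is life."))  # negative
print(mood_score("Had a great day, I'm happy."))                     # positive
```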


Prediction models developed by Poulin et al15 to estimate the risk of suicide, based on keywords and multiword phrases from unstructured clinical notes in a national sample of U.S. Veterans Administration medical records, achieved an inference accuracy of ≥65%. Pestian et al16 created and annotated a collection of suicide notes—a vital resource for scientists to use for machine learning and data mining. Machine learning algorithms based on such notes and clinical data might be used to capture alarming social media posts by high-risk patients and activate crisis management, with potentially life-saving results.
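
The sketch below is not the published model of Poulin et al; it is a simplified, hypothetical illustration of the general idea of scoring text against weighted keywords and multiword phrases and flagging posts that exceed a threshold. The phrases, weights, and cutoff are invented.

```python
# Hypothetical weighted keyword/phrase risk scoring (not the published model).
RISK_PHRASES = {
    "kill myself": 5.0,
    "better off dead": 4.0,
    "want to die": 4.0,
    "goodbye everyone": 2.0,
    "waste of space": 1.5,
    "hopeless": 1.0,
}
ALERT_THRESHOLD = 4.0  # invented cutoff, for illustration only

def risk_score(text: str) -> float:
    """Sum the weights of risk phrases found in the text (illustrative only)."""
    t = text.lower()
    return sum(w for phrase, w in RISK_PHRASES.items() if phrase in t)

post = "I'm going to kill myself. It was nice knowing you world. Goodbye everyone."
score = risk_score(post)
if score >= ALERT_THRESHOLD:
    print(f"Flag for clinician review (score {score:.1f})")
```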


But limitations remain

It is not easy to identify or analyze people’s emotions based on social media posts; emotions can be implicit, based on specific events or situations. To distinguish among different emotions purely on the basis of keywords is to deal in great subtlety. Framing algorithms to include multiple parameters—the duration of suicidal content and the number of suicidal posts, for example—would help mitigate the risk of false alarms.
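
One hypothetical way to frame such a rule: require both a minimum number of flagged posts and a minimum span of time over which the concerning content persists before alerting a clinician. The post counts and time window below are invented for illustration.

```python
# Hypothetical multi-parameter alert rule: a single worrying post does not
# trigger an alert; repeated flagged posts over a sustained window do.
from datetime import datetime, timedelta

MIN_FLAGGED_POSTS = 3                 # invented parameter
MIN_DURATION = timedelta(hours=6)     # invented parameter

def should_alert(flagged_times: list[datetime]) -> bool:
    """Alert only if enough flagged posts span a long enough period."""
    if len(flagged_times) < MIN_FLAGGED_POSTS:
        return False
    return max(flagged_times) - min(flagged_times) >= MIN_DURATION

times = [
    datetime(2015, 1, 10, 9, 0),
    datetime(2015, 1, 10, 13, 30),
    datetime(2015, 1, 10, 18, 45),
]
print(should_alert(times))  # True: 3 flagged posts over nearly 10 hours
```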

Another problem is that not all Facebook profiles are public. In fact, only 28% of users share all or most of their posts with anyone other than their friends.17 This limitation could be addressed by urging patients identified during an initial clinical encounter with a mental health provider as being at high risk of suicide to “friend” a generic Web page created by the hospital or clinic; a generic page protects patients’ privacy.

As Lehavot et al10 wrote in their report of the patient whose clinician discovered an explicitly suicidal Facebook post, the incident “did not hinder the therapeutic alliance.” Instead, the team leader said, the discovery deepened the therapeutic relationship and helped the patient “better understand his mental illness and need for increased support.”


Bottom Line

Machine learning algorithms offer the possibility of analyzing status updates from patients who express suicidal ideation on social media and alerting clinicians to the need for early intervention. Clinicians can take steps now, however, to use Facebook, in particular, to monitor for and potentially prevent suicide attempts by those at high risk.

Related Resource
• Ahuja AK, Biesaga K, Sudak DM, et al. Suicide on Facebook. J Psychiatr Pract. 2014;20(2):141-146.


Acknowledgement

Zafar Sharif, MD, Associate Clinical Professor of Psychiatry, Columbia University College of Physicians and Surgeons, and Director of Psychiatry, Harlem Hospital Center, New York, New York, and Michael Yogman, MD, Assistant Clinical Professor of Pediatrics, Harvard Medical School, Boston Children’s Hospital, Boston, Massachusetts, provided insight into the topic and useful feedback on the manuscript of this article.

Disclosures
The authors report no financial relationships with any company whose products are mentioned in this article or with manufacturers of competing products.

References


1. Centers for Disease Control and Prevention. Web-based Injury Statistics Query and Reporting System (WISQARS) 2010. http://www.cdc.gov/injury/wisqars/index.html. Updated July 7, 2014. Accessed January 19, 2015.
2. Shioiri T, Nishimura A, Akazawa K, et al. Incidence of note-leaving remains constant despite increasing suicide rates. Psychiatry Clin Neurosci. 2005;59(2):226-228.
3. Barr W, Leitner M, Thomas J. Self-harm or attempted suicide? Do suicide notes help us decide the level of intent in those who survive? Accid Emerg Nurs. 2007;15(3):122-127.
4. Foster T. Suicide note themes and suicide prevention. Int J Psychiatry Med. 2003;33(4):323-331.
5. Bhatia MS, Verma SK, Murty OP. Suicide notes: psychological and clinical profile. Int J Psychiatry Med. 2006;36(2):163-170.
6. Jashinsky J, Burton SH, Hanson CL, et al. Tracking suicide risk factors through Twitter in the US. Crisis. 2014;35(1):51-59.
7. Facebook Newsroom. Company info. http://newsroom.fb.com/company-info. Accessed January 7, 2015.
8. Ruder TD, Hatch GM, Ampanozi G, et al. Suicide announcement on Facebook. Crisis. 2011;32(5):280-282.
9. Luxton DD, June JD, Fairall JM. Social media and suicide: a public health perspective. Am J Public Health. 2012;102(suppl 2):S195-S200.
10. Lehavot K, Ben-Zeev D, Neville RE. Ethical considerations and social media: a case of suicidal postings on Facebook. Journal of Dual Diagnosis. 2012;8(4):341-346.
11. Bennett A, Pourmand A, Shokoohi H, et al. Impacts of social networking sites on patient care in the emergency department. Telemed J E Health. 2014;20(1):94-96.
12. American Foundation for Suicide Prevention. How to report suicidal content/threats on Facebook. https://www.facebook.com/notes/american-foundation-for-suicide-prevention/how-to-report-suicidal-contentthreats-on-facebook/10150090259398144. Published February 15, 2011. Accessed January 22, 2015.
13. Mohri M, Rostamizadeh A, Talwalkar A. Foundations of machine learning (Adaptive Computation and Machine Learning series). Cambridge, MA: MIT Press; 2012:14.
14. Blázquez Gil G, Berlanga de Jesús A, Molina Lopéz JM. Combining machine learning techniques and natural language processing to infer emotions using Spanish Twitter corpus. Communications in Computer and Information Science. 2013;365:149-157.
15. Poulin C, Shiner B, Thompson P, et al. Predicting the risk of suicide by analyzing the text of clinical notes. PLoS One. 2014;9(1):e85733.
16. Pestian JP, Matykiewicz P, Linn-Gust M. What’s in a note: construction of a suicide note corpus. Biomed Inform Insights. 2012;5:1-6.
17. ConsumerReports.org. Facebook & your privacy. http://www.consumerreports.org/cro/magazine/2012/06/facebook-your-privacy/index.html. Published June 2012. Accessed January 22, 2015.

Author and Disclosure Information

Vasanth Kattalai Kailasam, MD
PGY-3
Department of Psychiatry
Columbia University College of Physicians and Surgeons
Harlem Hospital Center
New York, New York

Erin Samuels, MD
Clinical Instructor in Psychiatry
Columbia University College of Physicians and Surgeons
Attending Psychiatrist, Comprehensive Psychiatric
Emergency Program
Harlem Hospital Center
New York, New York
