Facebook Using AI To Identify Suicide Risks Through User Posts

Ravi Singh - Jan 09, 2019


Facebook has been scanning nearly every piece of content posted on its platform to detect suicide risk. Despite the good intentions, the practice still raises privacy concerns.

  • Facebook scans posts for signs of suicidal thoughts
  • Facebook passes the information to law enforcement officials for wellness checks
  • Running the program without users' permission creates privacy risks that could lead to data exposure or worse.

Facebook's project to prevent suicide with AI began in March 2017.

Following a string of suicides live-streamed on Facebook, the effort to use a suicide prevention tool to identify at-risk users aimed to address the problem proactively.

A year later, however, in the wake of Facebook's numerous data privacy scandals, many privacy specialists questioned whether users can trust Facebook to create and store sensitive mental health information about them without their consent.

Facebook creates users' health information, but is not held to healthcare providers' privacy standards

The algorithm reaches almost every post by Facebook users, rating each on a scale from 0 to 1 (1 = the highest chance of imminent harm).
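Facebook has not published the model behind these scores, but the general pattern it describes, scoring each post on a 0-1 scale and escalating anything above a cutoff, can be sketched. The sketch below is purely illustrative: the phrases, weights, `REVIEW_THRESHOLD` value, and `score_post` helper are assumptions, not Facebook's implementation.

```python
# Purely illustrative sketch of a 0-1 risk-scoring pipeline; the phrases,
# weights, and threshold below are assumptions, not Facebook's model.
from dataclasses import dataclass

# Hypothetical weighted phrases; a production system would use a trained
# text classifier rather than a keyword list.
RISK_PHRASES = {"can't go on": 0.4, "goodbye everyone": 0.3, "end it all": 0.5}

REVIEW_THRESHOLD = 0.8  # assumed cutoff for escalating a post to human review


@dataclass
class ScoredPost:
    post_id: str
    score: float        # 0.0 = no detected risk, 1.0 = highest chance of imminent harm
    needs_review: bool


def score_post(post_id: str, text: str) -> ScoredPost:
    """Assign a 0-1 risk score to a post and flag it for review if it crosses the cutoff."""
    lowered = text.lower()
    score = min(1.0, sum(w for phrase, w in RISK_PHRASES.items() if phrase in lowered))
    return ScoredPost(post_id, score, needs_review=score >= REVIEW_THRESHOLD)


if __name__ == "__main__":
    print(score_post("p1", "Had a great day at the beach"))     # low score, no review
    print(score_post("p2", "I can't go on, goodbye everyone"))  # flagged for review
```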

Natasha Duarte, a policy analyst at the Center for Democracy and Technology (CDT), has expressed concern about how this data is created. She argued that the information should be regarded as sensitive health information, and that anyone creating or storing this kind of data about others should treat it as such.

As Duarte points out, Facebook's suicide prevention algorithm is not governed by the US data protection laws that apply to health information. In the United States, the main law protecting personal health information is HIPAA, which establishes national standards for safeguarding medical records and other personal health data. Unfortunately, HIPAA only covers organizations that provide healthcare services, such as insurance companies and hospitals.

Facebook, or any similar company that infers someone's health status from non-medical data sources, does not have to follow those privacy requirements. Facebook itself takes this position and does not classify the information it creates as sensitive health information.

The company has shown a lack of transparency about the privacy protocols surrounding the suicide-related data it creates. A Facebook representative said that suicide probability scores too low to warrant human review are kept for about 30 days before being deleted, but Facebook did not respond with details on how long higher-risk data and records of subsequent interventions are retained.

Facebook also declined to explain in detail why it keeps data on posts that are never escalated.
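A minimal sketch of the retention behavior described above (low scores kept for roughly a month, then deleted) might look like the following. The record fields, the 30-day constant, and the handling of unreviewed high-risk records are assumptions; Facebook has not disclosed those details.

```python
# Illustrative retention sweep for the ~30-day window Facebook described for
# low-risk scores. Field names and the treatment of high-risk records are
# assumptions; Facebook has not disclosed those details.
from datetime import datetime, timedelta, timezone

LOW_RISK_RETENTION = timedelta(days=30)  # stated: low scores kept for about a month
REVIEW_THRESHOLD = 0.8                   # assumed escalation cutoff, as in the sketch above


def purge_low_risk_scores(records: list[dict]) -> list[dict]:
    """Drop low-risk score records that are older than the retention window."""
    now = datetime.now(timezone.utc)
    kept = []
    for rec in records:
        expired = (rec["score"] < REVIEW_THRESHOLD
                   and now - rec["created_at"] > LOW_RISK_RETENTION)
        if not expired:
            kept.append(rec)  # high-risk records: retention period not disclosed
    return kept
```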

Could users' mental health information become Facebook's next security breach?

Facebook's algorithm may take suicide prevention beyond traditional hotlines

According to privacy specialists, storing such personal information is highly risky without careful foresight and protection.

The most obvious risk is that such sensitive information could be exposed in a breach. According to Matthew Erickson, industry outreach director at the DPA, the question is not whether Facebook will get hacked but when.

In September 2018, hackers accessed the private information of nearly 30 million Facebook users in a security breach; for about 400,000 of them, photos and status updates were exposed. Whether data from Facebook's suicide prevention tool was involved in that breach remains unanswered.

The Ashley Madison data breach made clear that the threat of storing sensitive information is real. "Will someone be able to Google your mental health information from Facebook the next time you go for a job interview?" Erickson said.

"Pick a company that hasn't had a data breach anymore," said Dan Reidenberg, mental health and suicide prevention specialist who supported Facebook in launching the suicide prevention tool.

As for the danger of creating and holding such data, Reidenberg argued that the central risk is discrimination against people with mental illness. He noted that discrimination based on psychological illness is prohibited by the Americans with Disabilities Act (ADA), so the worst potential outcomes could at least be challenged in court.

Who has the right to view information Facebook collects on mental health?

If a post is flagged as indicating possible suicide risk, it is sent to Facebook's content moderation team. Facebook has consistently said the team receives specialized training on suicide but has not gone into details.

In 2017, the Wall Street Journal published an investigation of the thousands of moderators Facebook employs, reporting that most were contractors who received little training on handling disturbing content. In response, Facebook said its content moderators are trained by suicide experts to identify content involving potential suicide risk, eating disorders, and self-harm.

According to Facebook, reviewers do not see the user's name on the posts they review. Still, it can be difficult to truly de-identify social media posts, since many contain details that identify the person even without a name attached (Duarte, 2018).

Once a reviewer flags a post as indicating potential suicidal thoughts, it is sent to a more experienced team that includes specialists with backgrounds in law enforcement or suicide and rape crisis hotlines. Those employees then gain access to more information about the user.

Reidenberg has encouraged the company to scan users' profiles for additional signals surrounding a flagged post and to weigh its context. He also argued that checking a user's history and recent activity is currently the most efficient way to identify risk.

Police take action too

At a certain level of severity, emergency responders are called in.

Once a post has been reviewed, there are two possible outreach actions: offering the user access to suicide prevention resources or calling emergency responders.
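The review flow described over the last few paragraphs (algorithmic flag, then a content moderator, then a more specialized team, then one of two outreach actions) could be modeled roughly as below. The tier inputs and decision rules are assumptions for illustration; Facebook has not described its exact criteria.

```python
# Rough model of the tiered review flow: moderator -> specialist team ->
# one of two outreach actions. The inputs and decision rules are assumptions.
from enum import Enum, auto


class Outcome(Enum):
    NO_ACTION = auto()
    OFFER_RESOURCES = auto()            # surface suicide prevention resources to the user
    CALL_EMERGENCY_RESPONDERS = auto()  # escalate to first responders


def review_flagged_post(moderator_confirms_risk: bool,
                        specialist_judges_imminent: bool) -> Outcome:
    """Route an algorithmically flagged post through the two human review tiers."""
    if not moderator_confirms_risk:
        return Outcome.NO_ACTION
    if specialist_judges_imminent:
        return Outcome.CALL_EMERGENCY_RESPONDERS
    return Outcome.OFFER_RESOURCES
```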

In a post, Mark Zuckerberg stated that over the previous year, Facebook had helped first responders reach approximately 3,500 people in need worldwide.

According to Duarte, the user information Facebook provides to police is the program's most serious privacy vulnerability, as it can result in unnecessary contact with law enforcement.

Despite many successful interventions arising from its work with law enforcement, Facebook has received at least one report from a person who was visited by police even though they were not contemplating suicide; that person was still taken to a hospital for a mental health evaluation. In another case, police shared the personal information of an individual flagged as a suicide risk with The New York Times.

Why the EU keeps out Facebook's suicide algorithm

Facebook uses the suicide prevention tool to review posts in many languages, but not posts from the EU.

The algorithm is kept out of the EU because of the bloc's data protection law, the General Data Protection Regulation (GDPR), which requires websites to obtain users' specific consent before collecting sensitive information about them.
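A hedged sketch of the consent gate the GDPR effectively imposes: under the regulation, processing health-related ("special category") data generally requires explicit consent, which Facebook does not collect for this tool. The region check and flag names below are illustrative assumptions, not Facebook's actual logic.

```python
# Illustrative consent gate: under the GDPR, processing health-related
# ("special category") data generally requires explicit consent. The region
# check and flag names are assumptions for the sake of the example.

def may_scan_post(user_region: str, has_explicit_consent: bool) -> bool:
    """Return True only if scanning is permissible under this simplified model."""
    if user_region == "EU":
        # GDPR Article 9: health data needs an explicit legal basis, typically
        # explicit consent -- which Facebook does not collect for this tool.
        return has_explicit_consent
    return True  # elsewhere, scanning happens by default, per the article
```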

In the United States, Facebook views the program as part of its responsibility to users. Reidenberg, moreover, compared the privacy trade-off to one that medical professionals face regularly.

According to Reidenberg, when health professionals judge that someone is at risk, they make a critical professional decision and initiate an active rescue. Technology companies such as Facebook, he argued, face the same call: deciding whether to involve law enforcement in order to save someone.

However, a significant difference between tech companies and emergency professionals still exists.


Privacy professionals broadly agree that a program this far-reaching should require users to opt in, or at least give them a way to opt out. For now, neither option is available.

As Facebook's Emily Cain put it, by using Facebook, users are choosing to have everything they post scanned for potential suicide risk.

The suicide algorithm is recognized as a potential force for good

Many public health and privacy specialists agree with Facebook about the potential of its suicide prevention tool.

One person dies from suicide every 40 seconds, according to the WHO's first global report on suicide prevention. Suicide rates are especially high among vulnerable groups that face discrimination, such as refugees, indigenous peoples, and LGBT people.

According to a Facebook representative, the company has calculated that the risk of exposing private information is worth the trade-off, and it is trying to balance users' safety and privacy.


Recognizing what is at stake, Facebook says it has put various privacy protections in place.

Kyle McGregor of NYU agreed with that calculation, noting that suicidal thoughts in teenagers are a problem that can be addressed, and that it is adults' responsibility to help young people get through difficult times and stay happy and healthy. He insisted that any effort to effectively prevent suicides is worth trying.
