Facebook Puts AI on Suicide Watch

Posted by Sean Williamson

As the debate about the merits and dangers of Artificial Intelligence, or AI, rages on, luminaries including Stephen Hawking have warned that the technology could spell the end of the human race. Facebook, on the other hand, continues to hold the position that AI can do a great deal of good, and it drives initiatives to that end through its Facebook Artificial Intelligence Research unit.

The FAIR unit, as the research body is known, recently conducted experiments to see whether chatbots could develop negotiating skills. The researchers did not instruct the chatbots to converse only in language that people could understand, and so the bots developed a shorthand of their own. The new language was indecipherable to human readers, and it sparked a fresh wave of speculation about what the perils of AI could be.

However, as the FAIR unit explained, this kind of language development is often seen in the AI world and is not a cause for great concern. The experiment was not terminated, as some reports suggested; its parameters were simply changed. The somewhat panicked reporting of the story has been criticised by many experts as irresponsible, and FAIR has some very interesting projects lined up that use AI to help and support people. Among the most promising is a suicide prevention initiative.

Help and Support for Those In Need

Facebook’s AI bots will scan posts of all kinds, including written updates, live-stream videos and the comments people leave on posts, for signs of suicidal intentions or behaviours. Flagged items will then be prioritised and shown to human moderators in order from most to least urgent, as judged by the bots. The evaluation relies on pattern recognition, comparing new posts against content that has been flagged as dangerous in the past.
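As a rough illustration of how such a flag-and-rank pipeline might work, consider the minimal sketch below. It is purely hypothetical: Facebook has not published its model, and the `Post` structure, the keyword list and the scoring function here are assumptions for illustration only. A real system would use a trained classifier over far richer signals than keyword matching.

```python
from dataclasses import dataclass

# Hypothetical phrases previously flagged as concerning (illustration only).
FLAGGED_PATTERNS = ["i can't go on", "no reason to live", "goodbye forever"]


@dataclass
class Post:
    post_id: str
    text: str


def urgency_score(post: Post) -> float:
    """Toy pattern-recognition score: fraction of flagged patterns present."""
    text = post.text.lower()
    hits = sum(1 for pattern in FLAGGED_PATTERNS if pattern in text)
    return hits / len(FLAGGED_PATTERNS)


def prioritise(posts: list[Post]) -> list[Post]:
    """Order posts from most to least urgent before handing them to moderators."""
    return sorted(posts, key=urgency_score, reverse=True)


if __name__ == "__main__":
    queue = prioritise([
        Post("1", "Had a great day at the beach!"),
        Post("2", "I feel like there's no reason to live any more."),
    ])
    for post in queue:
        print(post.post_id, round(urgency_score(post), 2))
```

In this toy version the moderators would simply work through the returned list from the top down, which mirrors the most-to-least-urgent ordering described above.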

Once the information has been passed on to moderators, they will be able to send local first responders to the most critical cases, and to offer support and information, including helpline details, to the individual whose posts were flagged and to that person’s friends. In the same way that online and mobile casino personnel are trained to identify and deal with problem gambling behaviour, these moderators are trained to make the important decisions about how to respond to the indicators of suicidal behaviour that the bots identify.

The “proactive detection” technology has been tested in the United States for the past several months and is now being rolled out across the world. The tools will not be available in the European Union, however; although Facebook has not explicitly said why, the decision is probably due to the complicated data protection laws that EU countries have in place. As the project continues to develop, one of its goals is to provide support in every language that Facebook supports.

Looking to The Future

For some people, the fact that opting out of the suicide-flagging software is not possible, along with the other things Facebook could do with its proactive detection AI, is enough to make them wary of a kind of dystopian AI-controlled future. However, the company does seem to understand the importance of using AI technology responsibly, as chief security officer Alex Stamos articulated in a tweet. He said that “creepy/scary/malicious use of AI will be a risk forever”, which was why setting good norms now in terms of “weighing data use versus utility” was so essential.

In addition to understanding the importance of using AI technology for good and keeping it from getting out of control, Facebook is excited about the potential of its proactive detection tools. Chief Executive Officer Mark Zuckerberg has commented that in the future AI will be able to identify subtler bullying, hate speech and suicidal statements that use more sophisticated and abstract language. The dangers of AI are quite terrifying, but the potential that Facebook is highlighting is just as exciting!