Is It Possible To Rely On AI To Prevent Terrorism? Facebook AI Chief Seems Not To Agree With That

Mandy Seth - May 31, 2019



AI is great, we all agree, but only with human moderation. The Christchurch shooting earlier this year taught us that lesson

AI, the much-hyped artificial intelligence, is genuinely powerful and helps solve many problems in our lives, but removing terrorist content is not yet one of them. CEO Mark Zuckerberg has been asked how Facebook and its platform could promote global peace and combat terrorism, but the reality is that Facebook's AI systems remain far from tackling terrorism effectively. Yann LeCun, Facebook's chief AI scientist and a Turing Award winner, shares that view.

AI is far from helping us prevent terrorism, as the Christchurch attack taught us

Live-video screening was originally designed with viral content in mind. It becomes a far bigger problem when a stream shows terrorists committing crimes in real time; the Christchurch shooting in New Zealand earlier this year was a tragic example. Facebook's AI system failed to moderate the acts of violence and atrocity, and the attack was broadcast live on the platform. Fewer than 200 people watched the live stream itself, but we have since lost count of how many times it has been downloaded and shared across the Internet.

For experts like LeCun, Facebook's failure here is nothing new. They warned long ago that there are things machine learning cannot yet do, such as understanding the nuances of live video.

According to Yann LeCun, Facebook's failure to address terrorism is not surprising

Automation is very good at carrying out requests, as long as those requests have been clearly classified by humans. Facebook's AI has in fact managed to detect and block 99% of al-Qaeda terrorist content, all automatically. Unfortunately, the remaining 1% is a much harder job for AI.

At a conference in Paris, LeCun pointed to the training-data problem. To teach a machine learning system to recognize violence, you need training data, and such data is scarce, thankfully, because real-life shootings are rare. Movie footage could be used instead, but that raises the risk that simulated violent content would be inadvertently blocked as well.

In practice, Facebook's automated systems perform well only with human moderators assisting them: whatever slips through is resolved manually by people. But remember, human moderation has problems of its own.

We all hope that AI and machine learning will someday help us solve many problems, terrorism included. But it seems that only those who work directly with AI know just how far the technology is from living up to such massive expectations.
