The Site Determines If Texts Were Written By A Human Or A Bot

Anita - Mar 12, 2019


A new tool named GLTR has been created to detect whether a piece of text was written by a human or a bot, an advance in the fight against fake news and misinformation.

Last month, OpenAI’s developers announced that they had built a text-generating algorithm named GPT-2, which they said was too dangerous to release into the world because it could be used to flood the web with endless machine-written material.

Recently, however, a team of scientists from Harvard University and the MIT-IBM Watson AI Lab created another algorithm, named GLTR, which can estimate the likelihood that a given passage of text was written by an algorithm such as GPT-2, an interesting escalation in the fight against spam.

News Writing Bot


When OpenAI unveiled GPT-2, the developers demonstrated how it could be used to write fictitious but convincing news articles by sharing one the tool had generated about researchers discovering unicorns.

GLTR turns the same principle around, reading a finished passage and forecasting whether GPT-2 or a human wrote it. Because GPT-2 composes sentences by predicting which word is most likely to come next, GLTR checks whether each word in a sentence is one that a fake-news-writing bot would have chosen.
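The rank-and-bucket idea can be sketched in a few lines of Python. The real GLTR inspects a large neural model such as GPT-2; the toy bigram "model", the thresholds, and all function names below are illustrative assumptions, not GLTR's actual implementation.

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for a language model's training data
# (GLTR uses a real model like GPT-2; this is only an illustration).
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat ate the fish . the dog ate the bone .").split()

# Bigram counts: for each word, how often each next word follows it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def word_rank(prev, word):
    """Rank of `word` among the model's predictions after `prev` (0 = most likely)."""
    ranked = [w for w, _ in following[prev].most_common()]
    return ranked.index(word) if word in ranked else len(ranked)

def highlight(text, green=0, yellow=1):
    """Label each word by how predictable it was, GLTR-style.

    Highly predictable words (low rank) get 'green', suggesting a bot;
    unpredictable words get 'purple', suggesting a human author.
    """
    words = text.split()
    labels = []
    for prev, word in zip(words, words[1:]):
        r = word_rank(prev, word)
        color = "green" if r <= green else "yellow" if r <= yellow else "purple"
        labels.append((word, color))
    return labels

print(highlight("the cat sat on the mat"))
# → [('cat', 'green'), ('sat', 'green'), ('on', 'green'), ('the', 'green'), ('mat', 'purple')]
```

A passage where almost every word lands in the "green" bucket looks suspiciously machine-generated, while human writing tends to scatter across the less predictable buckets.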

The scientists from Harvard, MIT, and IBM working on the project created a website that lets people test GLTR for themselves. The tool estimates how predictable each word would be to a bot like GPT-2 and highlights it in a corresponding color: green marks words the model would likely have chosen, suggesting machine-written text, while yellow, red, and especially purple mark increasingly unpredictable words that were probably written by a human.

GLTR Test

How the method works

However, AI researcher Janelle Shane found that GLTR does not succeed in correctly identifying text produced by generators other than OpenAI’s GPT-2.

After testing the site on output from her own text generator, Shane found that GLTR wrongly judged the resulting words too hard to predict to have come from a bot, and so concluded they must have been written by a human, which suggests we will need more than this one tool in the continuing fight against fake news and misinformation.
