Scientists Want To Teach AI And Robots To Not Misbehave
Anil Singh - Nov 27, 2019
Let’s imagine that someday AI and robots no longer mean humans well. Could we cope with the fear of being hopelessly left behind by all those advancements?
Many believe that artificial intelligence could slip out of human control, given the surprising abilities it has shown across so many fields. With the integration of machine learning and deep learning, an AI can solve a Rubik’s Cube one-handed in record time, simulate our Universe, and even uncover mysteries hidden in the past.
In fact, several recent studies have documented bias at many levels of algorithmic systems, such as criminality predictions that track racial lines or credit limits that differ by gender. Scientists now want to fight back against these potential threats. A team of researchers from the University of Massachusetts Amherst has come up with a framework that could prevent “undesirable behavior” by intelligent machines.
The framework makes it easier for AI researchers to develop new machine learning algorithms while letting them rule out undesirable behaviors through safeguards built in as they design the core algorithms. To be clear, it doesn’t imbue an intelligent machine with any inherent understanding of fairness or morality. Algorithms called “Seldonian” are touted as the heart of the new system. To shoot down the gender bias found in common algorithms, the team also built a new algorithm that predicts students’ academic performance without bias toward either gender.
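To make the idea concrete, here is a minimal, hypothetical sketch of a Seldonian-style algorithm in Python. It is not the UMass Amherst team’s actual code: the 50/50 data split, the least-squares candidate model, the squared-error fairness constraint, and the names `safety_test` and `seldonian_fit` are all illustrative assumptions. What it does show is the framework’s defining move: the algorithm returns a model only if a high-confidence safety test says the designer-specified undesirable behavior (here, a gender gap in prediction error) is unlikely, and otherwise returns no solution at all.

```python
import numpy as np
from scipy import stats

def safety_test(err_a, err_b, eps=0.1, delta=0.05):
    """With confidence 1 - delta, check that the gap between mean
    prediction errors for groups a and b is at most eps, using a
    Student's-t upper bound (one common choice in this setting)."""
    gap = abs(err_a.mean() - err_b.mean())
    se = np.sqrt(err_a.var(ddof=1) / len(err_a)
                 + err_b.var(ddof=1) / len(err_b))
    df = min(len(err_a), len(err_b)) - 1
    return gap + stats.t.ppf(1 - delta, df) * se <= eps

def seldonian_fit(X, y, group, eps=0.1, delta=0.05):
    # 1) Partition the data: the first half selects a candidate model,
    #    the held-out half is reserved for the safety test.
    half = len(y) // 2
    Xc, yc = X[:half], y[:half]
    Xs, ys, gs = X[half:], y[half:], group[half:]

    # 2) Candidate selection: ordinary least squares on the first partition.
    theta, *_ = np.linalg.lstsq(Xc, yc, rcond=None)

    # 3) Safety test: compare squared errors per group on held-out data.
    err = (Xs @ theta - ys) ** 2
    if safety_test(err[gs == 0], err[gs == 1], eps, delta):
        return theta
    return None  # "No Solution Found": refuse rather than risk misbehaving

# Toy usage with synthetic data (purely illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
group = rng.integers(0, 2, size=400)
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=400)
print(seldonian_fit(X, y, group))
```

Refusing to return a solution when the test fails is the key design choice: instead of leaving fairness checks to whoever deploys the model, the framework shifts responsibility for avoiding misbehavior onto the algorithm’s designer.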
With the new method, the researchers hope to give AI scientists a hand in heading off AI misbehavior in the future.