Top Five AI Trends To Be Concerned About

Dhir Acharya - Dec 10, 2018


AI is improving our lives in many ways, but it also poses risks.

In a new report, leading researchers warn about the rapid, largely unregulated growth of artificial intelligence. The danger is not that a Skynet-style robot will appear and kill us all, but that governments and tech firms are using machine learning to watch us more closely and intrude further into our lives.

The report, released on Thursday by the AI Now Institute, a research institute affiliated with New York University whose members include high-profile AI researchers from Microsoft and Google, examines the state of artificial intelligence in 2018 and the disturbing trends emerging in the field.

According to the report, artificial intelligence, from automated systems to machine learning, is advancing far faster than the regulations meant to govern it. Power is concentrating in the hands of tech firms and oppressive governments, leaving us less able to resist their capacity for surveillance, bias, and broad dysfunction.

To counter the most potentially destructive trends, the researchers offer politicians ten recommendations; the first is that AI technologies need government oversight, auditing, and monitoring.

The full report is worth reading yourself, but here are the five most concerning trends in artificial intelligence.

  1. Developers of AI systems bear less and less accountability to the people directly affected by their technology. Whether or not you want your data exploited by AI, the report finds, the avenues of recourse available when machine learning systems harm you are shrinking.
  2. AI is watching us ever more closely, and in increasingly creepy ways. Facial recognition is disturbing enough, but affect recognition, which claims to read emotions and character from faces, is even less scrupulous; the report likens it to a modern version of phrenology.
  3. Governments are handing decisions to automated decision systems (ADS) on the pretext of saving costs, but in practice they are often making things worse. Used in online application processes, these systems screen candidates for eligibility, yet when misused they can amplify bias and erroneously reject applicants on baseless grounds, as the sketch after this list illustrates.
  4. Companies are testing AI on the public with little oversight. The report points to Silicon Valley's "move fast and break things" mindset, under which companies like Facebook test AI systems in public, or release them widely to users, without sufficient supervision.
  5. Companies are failing to fix their problematic or biased AI platforms. Google drew attention for announcing it would address the ethics of machine learning, but it is naive for engineers to assume they can fix engineering problems with more engineering. The report argues that what is really needed is a far deeper understanding of the social and historical context of the data on which AI systems are trained.
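
To make the bias concern in item 3 concrete, here is a minimal, entirely hypothetical sketch in Python. It is not taken from the report or from any real screening system; the zip-code rule, thresholds, and applicant data are invented to show how a proxy feature can cause equally qualified applicants to be treated differently.

```python
# Minimal sketch (hypothetical, not from the report): how an automated
# decision system (ADS) that screens applications can encode bias through
# a proxy feature. All names, thresholds, and data are invented.

from dataclasses import dataclass


@dataclass
class Applicant:
    name: str
    years_experience: int
    zip_code: str  # proxy feature that can correlate with protected groups


# Hypothetical rule: the system was tuned on historical hires, most of whom
# came from a handful of zip codes, so "preferred" zip codes boost the score.
PREFERRED_ZIP_CODES = {"10001", "94103"}


def screening_score(a: Applicant) -> float:
    score = min(a.years_experience, 10) / 10  # 0..1 based on experience
    if a.zip_code in PREFERRED_ZIP_CODES:
        score += 0.3  # proxy-driven boost unrelated to merit
    return score


def is_rejected(a: Applicant, threshold: float = 0.7) -> bool:
    return screening_score(a) < threshold


if __name__ == "__main__":
    applicants = [
        Applicant("A", years_experience=6, zip_code="10001"),
        Applicant("B", years_experience=6, zip_code="60628"),  # equally qualified
    ]
    for a in applicants:
        verdict = "rejected" if is_rejected(a) else "passed"
        print(f"{a.name}: {verdict} (score={screening_score(a):.2f})")
    # Applicant A passes (0.6 + 0.3 = 0.9) while the equally qualified
    # applicant B is rejected (0.6): the zip-code proxy, not merit, decides.
```

The point of the sketch is that nothing in the rule mentions a protected attribute directly; the bias enters through a correlated proxy that the system rewards, which is exactly the kind of baseless rejection the report warns about.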