Google Brain Built A Translator For AI To Explain Itself

Harin - Jan 17, 2019


A Google Brain scientist built a tool that can help AI (artificial intelligence) systems explain how they reach their conclusions.

A Google Brain scientist has built a tool that helps AI systems explain how they reach their conclusions, a task that has long been considered tricky, especially for machine learning algorithms.


The tool is named TCAV, short for Testing with Concept Activation Vectors. It can be plugged into machine learning algorithms to show how much weight they gave to different kinds of data and factors before delivering their results.
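In rough terms, TCAV learns a "concept activation vector" (CAV): the direction in a network layer's activation space that separates examples of a human-friendly concept (stripes, gender, and so on) from random counterexamples. It then measures how sensitive a class prediction is to nudges along that direction. The sketch below illustrates the idea; the function names, data shapes, and use of scikit-learn are illustrative assumptions, not Google's actual code (the authors' open-source implementation is at github.com/tensorflow/tcav).

```python
# A minimal, illustrative sketch of the TCAV idea using NumPy and
# scikit-learn. Names and shapes are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

def compute_cav(concept_acts, random_acts):
    """Learn a Concept Activation Vector (CAV) at one network layer.

    concept_acts: layer activations for examples of a human concept
                  (e.g. images of stripes), shape (n, d).
    random_acts:  layer activations for random counterexamples, shape (m, d).
    The weight vector of a linear classifier separating the two sets
    points in the "direction" of the concept in activation space.
    """
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)),
                        np.zeros(len(random_acts))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    cav = clf.coef_[0]
    return cav / np.linalg.norm(cav)

def tcav_score(class_gradients, cav):
    """Fraction of examples whose prediction is sensitive to the concept.

    class_gradients: gradients of the class logit with respect to the
                     layer activations, one row per example, shape (k, d),
                     computed with whatever framework trained the model.
    A score near 1.0 means moving activations toward the concept almost
    always raises the class score; near 0.0 means it almost never does.
    """
    directional_derivatives = class_gradients @ cav
    return float(np.mean(directional_derivatives > 0))
```

In the running example from the TCAV paper, a score near 1.0 for the concept "stripes" and the class "zebra" would mean the model almost always leans on stripes when predicting zebra, which is exactly the kind of dependence, benign or biased, the tool is meant to surface.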

Tools like TCAV are in high demand now that AI is under scrutiny for issues of gender and racial bias.

With TCAV, the user of a facial recognition algorithm could determine what role race played when the system evaluated job applications or matched people against a database of criminal suspects. Instead of blindly trusting the results machines deliver to be fair and objective, people could question, reject, and even fix those conclusions.

Been Kim, a Google Brain scientist, told Quanta that she is not demanding a tool that can explain every step of an AI's decision-making process. It is enough to have something that can flag potential issues and help us humans understand where things may have gone wrong.

Been Kim, a Google Brain scientist

Kim compared the idea to reading the safety labels before using a chainsaw to cut down a tree.
