Google Brain Built A Translator For AI To Explain Itself
Harin
A Google Brain scientist built a tool that can help AI (artificial intelligence) systems explain how they reach their conclusions.
A Google Brain scientist has built a tool that can help AI (artificial intelligence) systems explain how they reach their conclusions, a task that has long been considered tricky, especially for machine learning algorithms.
The tool is called TCAV, short for Testing with Concept Activation Vectors. It can be plugged into machine learning models to reveal how heavily they weighed different concepts and factors before delivering their results.
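To make that mechanism concrete, here is a minimal sketch of the core idea behind concept activation vectors as described in the TCAV paper: a linear classifier is trained to separate the model's activations for concept examples from its activations for random examples, the normal to that boundary becomes the CAV, and the TCAV score is the fraction of inputs whose class score increases in the concept direction. This is illustrative Python/NumPy only, not the official tcav library API; the activation and gradient arrays are assumed to come from whatever model is being probed.

```python
# Illustrative sketch of the TCAV idea (Kim et al., 2018) -- not the official library.
import numpy as np
from sklearn.linear_model import SGDClassifier

def learn_cav(concept_acts: np.ndarray, random_acts: np.ndarray) -> np.ndarray:
    """Learn a Concept Activation Vector: the unit normal of a linear boundary
    separating concept-example activations from random-example activations."""
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    clf = SGDClassifier(alpha=0.01, max_iter=1000, tol=1e-3).fit(X, y)
    cav = clf.coef_.flatten()
    return cav / np.linalg.norm(cav)

def tcav_score(class_gradients: np.ndarray, cav: np.ndarray) -> float:
    """Fraction of examples whose class score rises when the activation moves
    in the concept direction (positive directional derivative along the CAV)."""
    directional_derivs = class_gradients @ cav
    return float(np.mean(directional_derivs > 0))
```

In this sketch, a score near 1.0 would suggest the concept consistently pushes the model toward a given prediction, while a score near 0.0 suggests it pushes against it, which is the kind of signal the article describes for auditing bias.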
Tools like TCAV are in high demand, as AI systems have come under scrutiny over issues of gender and racial bias.
With TCAV, the user of a facial recognition algorithm could determine what role race played when the system evaluated job applications or matched people against a database of criminals. Instead of blindly trusting the results the machines deliver to be fair and objective, people could question, reject, and even fix those conclusions.
Been Kim, a Google Brain scientist, told Quanta that she is not asking for a tool that explains an AI's decision-making process in full. It is enough to have something that can flag potential issues and help humans understand what may have gone wrong and where.
Kim compared the idea to reading the safety labels before using a chainsaw to cut down a tree.
She said: