Google Researchers Are Forced To Hide The Real Nature Of AI
Dhir Acharya
Google's parent company, Alphabet, has been asking its scientists to make AI technology look more positive in their research papers.
According to a Wednesday report from Reuters, Alphabet has been pressing its scientists to strike a more positive tone about AI technology in their research papers.
Reportedly, a new procedure requires researchers to consult with Google's policy, legal, or PR teams for a "sensitive topics review" before pursuing areas such as face analysis and the categorization of gender, race, or political affiliation.
An internal page describing the policy, cited in the report, reads:
"Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues."
Internal correspondence reviewed by Reuters also shows that the company told authors to strike a positive tone.
Earlier this month, Sundar Pichai, Google's Chief Executive Officer, apologized for improperly handling the departure of AI researcher Timnit Gebru and said the company would investigate the incident. Gebru left Google on December 4; she said she was forced out of the firm over an email she had sent to co-workers.
In the email, she criticized the Diversity, Equity, and Inclusion operation at Google, as reported by Platformer, which published the email in full. She wrote that she had been asked to retract a research paper she had been working on following feedback she received.
“You're not supposed to even know who contributed to this document, who wrote this feedback, what process was followed or anything. You write a detailed document discussing whatever pieces of feedback you can find, asking for questions and clarifications, and it is completely ignored.”
She added:
“Silencing marginalized voices like this is the opposite of the NAUWU principles which we discussed. And doing this in the context of 'responsible AI' adds so much salt to the wounds.”
NAUWU, or "nothing about us without us," is the idea that policies should not be made without input from the people they affect.
Google did not immediately respond to a request for comment.