New AI Hides Your Emotions From Other AI Systems

Kumari Shrivastav


Smart speaker devices can infringe on our privacy by detecting our emotions, so researchers have developed a system to protect users.

When people communicate, they convey a range of nonverbal feeling, from exasperated groans to terrified shrieks. From arousal to disgust, agony to ecstasy, many subtle emotions can be inferred from vocal cues and inflections. Even a simple utterance can carry layers of meaning, and that personal data is valuable to any company that collects it.

AI voice recognition can now read your emotions from your speech

When a person speaks to an Internet-connected voice assistant, that speech is likely recorded. In response, researchers from Imperial College London have created a layer between users and the cloud their personal data is uploaded to: an AI that helps users hide their emotions by automatically translating emotional speech into unemotional speech. Their research paper, “Emotionless: Privacy-Preserving Speech Analysis for Voice Assistant,” has been published on the arXiv preprint server.
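At a high level, the system acts as a local intermediary: audio is neutralized on the user's side before anything reaches the cloud. The Python sketch below illustrates only that layering; the names `strip_emotion`, `PrivacyLayer`, and `cloud.transcribe` are hypothetical placeholders invented for this article, not the paper's actual API.

```python
import numpy as np

def strip_emotion(waveform: np.ndarray, sample_rate: int) -> np.ndarray:
    """Placeholder for the paper's emotion-normalization model.

    A real implementation would extract the emotional features from the
    signal, flatten them, and resynthesize neutral-sounding speech.
    """
    return waveform  # identity stand-in for illustration

class PrivacyLayer:
    """Hypothetical intermediary between the microphone and the cloud."""

    def __init__(self, cloud_client):
        self.cloud = cloud_client  # e.g., a voice-assistant API wrapper

    def handle_utterance(self, waveform: np.ndarray, sample_rate: int):
        neutral = strip_emotion(waveform, sample_rate)
        # Only the neutralized audio ever leaves the device, so the
        # provider hears the words but not the emotional delivery.
        return self.cloud.transcribe(neutral, sample_rate)
```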

Amazon and other companies are investing heavily in AI emotion detection. Our voices can reveal personal information such as age, gender, physical condition, stress level, and confidence, and smart speaker providers have not missed this point.

According to Ranya Aloufi, who led the research, an accurate emotion reader that pins down a person's emotional state and personal preferences could infringe on their privacy significantly.

Your speech reveals more personal data than you think

In their experiments, the researchers report that their method reduces emotion detection by 96%, though at the cost of speech recognition accuracy: the word error rate rises to 35%. Their masking method gathers the speech, analyzes it, and extracts the emotional features from the underlying signal. An AI model then identifies these indicators and flattens the emotional signals in the speech. Finally, a voice synthesizer regenerates the normalized speech from the model's output, and this is the version the cloud stores.
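The paper's pipeline relies on learned models, but the flatten-and-resynthesize idea can be illustrated with classical signal processing. The hedged sketch below uses the WORLD vocoder (via the `pyworld` package, a substitute technique chosen for this article) to decompose speech, replace the pitch contour with its average, and resynthesize a flatter-sounding version. The file names are placeholders, and this is a simplification of, not a stand-in for, the researchers' method.

```python
import numpy as np
import pyworld as pw    # WORLD vocoder bindings: pip install pyworld
import soundfile as sf  # audio I/O: pip install soundfile

# Load a mono recording (file name is a placeholder).
x, fs = sf.read("utterance.wav")
x = np.ascontiguousarray(x, dtype=np.float64)

# Decompose the signal into pitch (f0), spectral envelope, and aperiodicity.
f0, t = pw.dio(x, fs)             # coarse pitch track
f0 = pw.stonemask(x, f0, t, fs)   # refined pitch track
sp = pw.cheaptrick(x, f0, t, fs)  # spectral envelope
ap = pw.d4c(x, f0, t, fs)         # aperiodicity

# "Flatten" the prosody: replace the pitch of voiced frames with its mean,
# removing much of the rise and fall that conveys emotion.
voiced = f0 > 0
if voiced.any():
    f0 = np.where(voiced, f0[voiced].mean(), 0.0)

# Resynthesize the neutralized speech and save it.
y = pw.synthesize(f0, sp, ap, fs)
sf.write("utterance_neutral.wav", y, fs)
```

Pitch flattening alone leaves other emotional cues intact, such as energy, tempo, and voice quality, which is one reason the researchers use learned models rather than a fixed transform.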

Understanding different emotions is part of what makes a machine seem human-like, and it has long been a goal of futurists and companies. In recent years, AI speech engineers have treated emotionality as a critical target. Speech Emotion Recognition (SER), a field that first appeared in the late 1990s, is older than any of the famous speech recognition systems behind Google Home, Siri, or Alexa.

Amazon is developing accurate emotion detection for its smart home devices

In 2018, Huawei said it was developing emotion-detecting AI to deploy in its voice assistant software, plans that the company's vice president of software engineering described to CNBC.

Back in May, Alexa speech engineers revealed research in which adversarial networks were used to detect emotion in speech directed at the digital assistant.
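Amazon's exact architecture isn't reproduced here, but one common adversarial recipe for speech emotion recognition trains an encoder whose features must fool an auxiliary adversary, here a speaker classifier behind a gradient-reversal layer, pushing the features toward emotion-relevant, nuisance-invariant representations. The PyTorch sketch below is a minimal, hypothetical illustration of that general technique, not Amazon's model.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips gradients on the backward pass,
    so the encoder learns to *defeat* the adversary."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

class AdversarialEmotionNet(nn.Module):
    def __init__(self, n_features=40, n_emotions=4, n_speakers=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 128), nn.ReLU())
        self.emotion_head = nn.Linear(128, n_emotions)  # main task
        self.speaker_head = nn.Linear(128, n_speakers)  # adversary

    def forward(self, feats):
        z = self.encoder(feats)
        emotion_logits = self.emotion_head(z)
        speaker_logits = self.speaker_head(GradReverse.apply(z))
        return emotion_logits, speaker_logits

# Both heads are trained with cross-entropy; the gradient reversal makes
# the encoder minimize the emotion loss while maximizing the speaker loss.
```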

Those firms also hope this highly sensitive data can help them sell more precisely targeted ads.

In a 2017 patent on detecting emotion in speech, Amazon used sickness as an example.

This practice significantly compromises your privacy

After picking up such cues, a virtual assistant such as Alexa could combine them with your purchase and browsing history to serve extremely specific advertisements for medications and other products. Aloufi said there is little consumers can do to keep their personal data from being used this way unless smart home devices such as Google Home and Alexa offer stronger privacy protections.
