Microsoft is Ditching AI-Powered Emotion Recognition Tech

Microsoft is phasing out public access to its AI-powered emotion recognition system, a technology that claims to identify human emotions and infer identity attributes. Having your emotions categorized by an algorithm is unsettling on its face, and it is far from clear that the technology even works accurately. Many experts, including the tech giant itself, consider it ethically dubious.

Privacy advocates celebrated as Microsoft announced it would retire emotion recognition from its Azure Face facial recognition services. The company will also stop offering capabilities that infer identity attributes such as age and gender.

Ethics policies

The decision is part of a broader restructuring of Microsoft's ethics policies. Natasha Crampton, Microsoft's Chief Responsible AI Officer, said it responds to experts who point to a "lack of consensus" on the very definition of human emotion.

The company also weighed experts' concerns about the "overgeneralization" involved in how these AI systems classify emotions.

Sarah Bird, Azure AI Principal Group Product Manager, said, “We collaborated with internal and external researchers to understand the limitations and potential benefits of this technology and navigate the tradeoffs.”

She also said in a separate statement, “API access to capabilities that predict sensitive attributes also opens up a wide range of ways they can be misused—including subjecting people to stereotyping, discrimination, or unfair denial of services.”

Bird said the company is doing this to “mitigate risks.” 

New Azure customers can no longer access the emotion recognition system. Existing customers can continue using the service until 2023, when it will be discontinued.

Although the company is discontinuing general API access, Bird said Microsoft will keep experimenting with ways to improve the technology, in limited capacities such as tools for people with disabilities.

She said these capabilities can be helpful when used in controlled accessibility scenarios. The move also aligns with the company's newly revised 27-page Responsible AI Standard, a policy that has undergone several revisions over the past year.

A crude technology

Many tech experts have called Microsoft's emotion recognition tech crude. Albert Fox Cahn, Executive Director of the Surveillance Technology Oversight Project, said it's a "no-brainer" for Microsoft to ditch its emotion recognition system.

Cahn said, “The truth is that the technology is crude at best, only able to decipher a small subset of users at most.” 

“But even if the technology were improved, it would still penalize anyone who’s neurodivergent. Like most behavioral AI, diversity is penalized, and those who think differently are treated as a danger,” he added.

Jay Stanley, ACLU Senior Policy Analyst, said this technology shouldn't be relied upon, let alone deployed, and pointed out its shortcomings. He also noted that Microsoft is a respected tech company shrewd enough to recognize the implications of this technology.

Microsoft's retreat from emotion recognition comes two years after it joined IBM and Amazon in restricting the sale of facial recognition technology to police. Mistrust and misuse of these systems continue to plague big tech companies, including Microsoft.

Although privacy groups welcomed the decision, Cahn hopes the tech giant will take further action on its other technologies that raise similar concerns.

And for other news, read more here at Owner’s Mag!
