For years, activists and academics have criticized facial analysis software that claims to identify a person’s age, sex, and emotional state. They have called it biased, unreliable, and even invasive. Now, however, it is Microsoft that has decided to remove these features from its AI facial recognition service. The choice is part of the company’s drive to impose stricter controls on its Artificial Intelligence products. So let’s find out more.
Facial recognition: Microsoft eliminates the detection of age, gender and emotions
After two years of work, a Microsoft team developed the “Responsible AI Standard”, a document that establishes the requirements an AI system must meet to ensure it does not have a detrimental impact on society. This means that products must provide “valid solutions for the problems they are designed to solve” and “a similar quality of service for identified demographic groups, including marginalized groups”. Before being released, these Microsoft technologies are therefore subject to review by the team led by Natasha Crampton, the company’s chief responsible AI officer. In short, the technology giant has built a rigorous control system, one that has called into question some of the systems launched in the past.
In particular, Microsoft was primarily concerned with its emotion detection, which labels a subject’s expression as anger, contempt, disgust, fear, happiness, neutrality, sadness, or surprise. “There is a tremendous amount of cultural, geographic and individual variation in the way we express ourselves,” comments Crampton. This variation has led to major reliability issues with this specific feature of Microsoft’s facial recognition AI, which rests on the shaky assumption that “facial expression is a reliable indicator of your internal emotional state”. The company therefore decided to remove emotion detection, along with age and gender detection, the latter mainly because the system was binary and thus incompatible with the company’s values.
But that is not all. Microsoft will also add new controls to its facial recognition function. From now on, software developers who want to use the system must request access to it and explain how they intend to implement it. The same goes for users, who will have to explain how they want to use AI-based systems that can be potentially harmful, such as the “personalized neural voice”. “We are taking concrete steps to live up to our AI principles,” Crampton says.
What happens when AI becomes dangerous for users
It is hardly a secret that hi-tech companies have had trouble managing AI in the past. In 2016, Microsoft launched the chatbot “Tay”, designed to learn “conversational understanding” from the users it interacted with. Before long the bot began posting racist and offensive tweets, forcing the company to withdraw it permanently. Then, in 2020, researchers found that speech synthesis tools developed by Microsoft, Apple, Google, IBM, and Amazon worked less well for Black speakers. Microsoft’s system was the best of the bunch, but it still misidentified 15 percent of words for white speakers, compared with 27 percent for Black speakers.
In short, it is undeniable that AI can be harmful to society, and this has been the central theme of the sector’s debate for many years. The Dutch tax authorities mistakenly took childcare subsidies away from needy families after a flawed algorithm penalized dual nationals. More recently, several Black men were wrongfully arrested over faulty facial recognition matches. These are just some of the situations that led Microsoft to review the controls on its Artificial Intelligence systems, in the hope that a suitable way can soon be found to use them without their being invasive or dangerous.