Microsoft plans to eliminate facial analytics tools in push for ‘responsible AI’

    For years, activists and academics have expressed concern that facial analysis software that claims to identify a person’s age, gender and emotional state may be biased, unreliable or invasive — and should not be sold.

    Microsoft acknowledged some of those criticisms, saying on Tuesday it planned to remove those features from its artificial intelligence service for detecting, analyzing and recognizing faces. They will no longer be available to new users this week and will be phased out for existing users within the year.

    The changes are part of Microsoft’s move to tighten controls on its artificial intelligence products. After a two-year review, a team at Microsoft has developed a “Responsible AI Standard,” a 27-page document that outlines the requirements for AI systems to ensure they will not have a harmful impact on society.

    The requirements include ensuring that systems provide “valid solutions to the problems they are designed to solve” and “comparable quality of service for identified demographics, including marginalized groups.”

    Before they are released, technologies that would be used to make important decisions about a person’s access to employment, education, health care, financial services or life opportunities are subject to a review by a team led by Natasha Crampton, Microsoft’s chief responsible AI officer.

    There were heightened concerns at Microsoft about the emotion recognition tool, which labeled a person’s expression as anger, contempt, disgust, fear, happiness, neutral, sadness or surprise.

    “There is a tremendous amount of cultural, geographic and individual variation in the way we express ourselves,” said Ms. Crampton. That led to reliability concerns, along with the larger questions of whether “facial expression is a reliable indicator of your internal emotional state,” she said.

    The age and gender analysis tools being eliminated — along with other tools to detect facial features such as hair and smile — could be useful for interpreting visual images for the blind or partially sighted, for example, but the company decided it was problematic to make the profiling tools generally available to the public, Ms. Crampton said.

    In particular, she added, the system’s so-called gender classification was binary, “and that’s not consistent with our values.”

    Microsoft will also be putting new checks on its facial recognition feature, which can be used to perform identity checks or search for a specific person. For example, Uber uses the software in its app to verify that a driver’s face matches the ID registered for that driver’s account. Software developers who want to use Microsoft’s facial recognition tool must request access and explain how they want to use it.

    Users must also apply for access and explain how they will use other AI systems that are open to abuse, such as Custom Neural Voice. The service can generate a human voice print from a sample of someone’s speech, so authors can, for example, create synthetic versions of their voice to read their audiobooks in languages they don’t speak.

    Because of the potential for abuse of the tool — creating the impression that people have said things they haven’t — speakers must go through a series of steps to confirm that the use of their voice is authorized, and the recordings include watermarks that Microsoft can detect.

    “We are taking concrete steps to live up to our AI principles,” said Ms. Crampton, who spent 11 years as a lawyer at Microsoft and joined the AI ethics group in 2018. “It will be a huge journey.”

    Microsoft, like other tech companies, has had problems with its artificially intelligent products. In 2016, it released a chatbot on Twitter called Tay, which was designed to learn “conversational understanding” from the users it interacted with. The bot soon started spouting racist and abusive tweets, and Microsoft had to remove it.

    In 2020, researchers found that speech-to-text tools developed by Microsoft, Apple, Google, IBM and Amazon did not work as well for Black people. Microsoft’s system was the best of the bunch, misidentifying 15 percent of words for white people, compared with 27 percent for Black people.

    The company had collected a variety of speech data to train its AI system but hadn’t understood just how diverse language could be. So it hired a sociolinguistics expert from the University of Washington to explain the language varieties Microsoft needed to know about. That went beyond demographics and regional variety to include how people speak in formal and informal settings.

    “If you think race is a determinant of how someone speaks, that’s actually a little misleading,” Ms. Crampton said. “What we learned in consultation with the expert is that, in fact, a huge range of factors influence language variation.”

    Ms. Crampton said the effort to resolve that speech-to-text disparity had helped inform the guidance set out in the company’s new standards.

    “This is a critical norm-setting period for AI,” she said, pointing to Europe’s proposed regulations that would set rules and limits on the use of artificial intelligence. “We hope our standard can contribute to the clear, necessary discussion that needs to be had about the standards technology companies should be held to.”

    There has been a lively debate for years in the technology community about the potential harms of AI, fueled by mistakes and errors that have real impacts on people’s lives, such as algorithms that determine whether or not people get benefits. In the Netherlands, the tax authorities wrongly took child care benefits away from needy families when a flawed algorithm penalized people with dual nationality.

    Automated face recognition and analysis software is particularly controversial. Last year, Facebook shut down its decade-old system for identifying people in photos. The company’s vice president of artificial intelligence cited the “many concerns about the place of facial recognition technology in society.”

    Several Black men have been wrongfully arrested after flawed facial recognition matches. And in 2020, at the same time as the Black Lives Matter protests after the police killing of George Floyd in Minneapolis, Amazon and Microsoft imposed moratoriums on the use of their facial recognition products by police in the United States, saying clearer laws on their use were needed.

    Since then, Washington and Massachusetts have passed legislation requiring, among other things, judicial oversight of police use of facial recognition tools.

    Ms. Crampton said Microsoft had considered making its software available to law enforcement in states with laws on the books, but had decided against doing so for now. She said that could change as the legal landscape changed.

    Arvind Narayanan, a Princeton professor of computer science and prominent AI expert, said companies might be moving away from facial analysis technologies because they were “more visceral, as opposed to various other kinds of AI that might be dubious but that we don’t necessarily feel in our bones.”

    Companies may also realize that, at least for now, some of these systems are not that commercially valuable, he said. Microsoft could not say how many users it had for the facial analysis features it is removing. Mr. Narayanan predicted that companies would be less likely to forgo other invasive technologies, such as targeted advertising, which profiles people to pick the best ads to show them, because they are a “cash cow.”