
Elon Musk fired Twitter’s ‘ethical AI’ team

    As more and more issues with AI have surfaced, including biases around race, gender, and age, many tech companies have installed “ethical AI” teams ostensibly dedicated to identifying and mitigating such problems.

    Twitter’s META (Machine Learning Ethics, Transparency, and Accountability) unit was more forthcoming than most, publishing details of problems with the company’s AI systems and allowing outside researchers to examine its algorithms for new issues.

    Last year, after Twitter users noticed that a photo-cropping algorithm appeared to favor white faces when choosing how to crop images, Twitter made the unusual decision to let its META unit publish details of the bias it discovered. The group also launched one of the first-ever “bias bounty” competitions, which allowed outside researchers to test the algorithm for other problems. Last October, the team led by META director Rumman Chowdhury also published details of unintended political bias on Twitter, showing how right-leaning news sources were in fact promoted more than left-leaning ones.

    Many outside researchers saw the layoffs as a blow not only to Twitter but also to efforts to improve AI. “What a tragedy,” Kate Starbird, an associate professor at the University of Washington who studies online disinformation, wrote on Twitter.

    “The META team was one of the few good case studies of a technology company running an AI ethics group that interacted with the public and academia with substantial credibility,” says Ali Alkhatib, director of the Center for Applied Data Ethics at the University of San Francisco.

    Alkhatib says Chowdhury is incredibly well regarded within the AI ethics community and that her team did genuinely valuable work holding Big Tech accountable. “There aren’t many ethics teams worth taking seriously,” he says. “This was one of the ones whose work I taught in classes.”

    Mark Riedl, a professor who studies AI at Georgia Tech, says the algorithms used by Twitter and other social media giants have a huge impact on people’s lives and should be studied. “Whether META had any impact within Twitter is hard to tell from the outside, but the promise was there,” he says.

    Riedl adds that allowing outsiders to investigate Twitter’s algorithms was an important step toward greater transparency and understanding of issues surrounding AI. “They became a watchdog that could help the rest of us understand how AI was affecting us,” he says. “The META researchers had excellent credentials and a long history of studying AI for social wellbeing.”

    As for Musk’s idea of open-sourcing the Twitter algorithm, the reality would be far more complicated. Many different algorithms affect how information is surfaced, and understanding them without the real-time data they are fed in the form of tweets, views, and likes is challenging.

    The idea that there is one algorithm with an explicit political leaning would oversimplify a system that can contain more insidious biases and problems. Exposing these is exactly the kind of work Twitter’s META group did. “Not many groups thoroughly study the biases and flaws of their own algorithms,” says Alkhatib of the University of San Francisco. “META did that.” And now it doesn’t exist.