Twitter’s moderation system is in tatters

    “Me and other people who have tried to reach out have come to a dead end,” says Benavidez. “And when we’ve reached out to those who are supposedly still on Twitter, we just don’t get a response.”

    Even when researchers can reach Twitter, responses are slow — sometimes taking more than a day. Jesse Littlewood, vice president of campaigns at the nonprofit Common Cause, says he’s noticed that when his organization reports tweets that clearly violate Twitter’s policies, those posts are now less likely to be removed.

    The volume of content that users and watchdogs may want to report to Twitter is likely to increase. Many of the staff and contractors laid off in recent weeks worked on teams such as trust and safety, policy, and civic integrity, all of which worked to keep disinformation and hate speech off the platform.

    Melissa Ingle was a senior data scientist on Twitter’s civic integrity team until she was fired on Nov. 12 along with 4,400 other contractors. She wrote and reviewed algorithms used to detect and remove political disinformation on Twitter, most recently around the elections in the US and Brazil. Of the 30 people on her team, only 10 remain, and many of the human content moderators, who review and flag tweets that violate Twitter’s policies, have also been fired. “Machine learning needs constant input and constant care,” she says. “We have to constantly update what we’re looking for because the political discourse is constantly changing.”

    While Ingle’s work didn’t involve interacting with outside activists or researchers, she says members of Twitter’s policy team did. Sometimes information from outside groups helped determine the terms or content that Ingle and her team trained algorithms to identify. With so many staff and contractors laid off, she now worries there won’t be enough people to keep the software accurate.

    “With the algorithms no longer being updated and the human moderators gone, there just aren’t enough people to manage the ship,” says Ingle. “My concern is that these filters are getting more and more porous, and more and more things are getting through as the algorithms become less accurate over time. And there’s no human being to catch things falling through the cracks.”

    Within a day of Musk taking ownership of Twitter, Ingle says, internal data showed that the number of abusive tweets reported by users increased by 50 percent. That initial spike ebbed a bit, she says, but reports of abusive content remained about 40 percent above the typical pre-acquisition volume.

    Rebekah Tromble, director of the Institute for Data, Democracy & Politics at George Washington University, also expects Twitter’s defenses against banned content to wither. “Twitter has always struggled with this, but a number of talented teams have made real progress on these issues in recent months. Those teams are now wiped out.”

    Such concerns are echoed by a former content moderator who was a contractor for Twitter until 2020. The contractor, who spoke on condition of anonymity to avoid repercussions from his current employer, says every former colleague he was in contact with who did similar work has been fired. He expects the platform to become a much less pleasant place. “It will be terrible,” he says. “I actively searched out the worst parts of Twitter, the most racist, most horrible, most depraved parts of the platform. That will only be amplified.”