Data collected by CyberWell found that while only 2 percent of anti-Semitic content on social media platforms was violent in 2022, 90 percent of that violent content came from Twitter. And Cohen Montemayor notes that even the company’s standard moderation systems would likely have struggled under the pressure of so much hateful content. “If you’re experiencing peaks [of online hate speech] and you haven’t changed the content moderation infrastructure, that means you’re leaving more hate speech on the platform,” she says.
Civil society organizations that used to have a direct line with Twitter’s moderation and policy teams are struggling to voice their concerns, says Isedua Oribhabor, Business and Human Rights Lead at Access Now. “We’ve seen the platform fail in those respects to actually moderate properly and deliver the services the way it used to for its users,” she says.
Daniel Hickey, a visiting researcher at USC’s Information Sciences Institute and co-author of the paper, says Twitter’s lack of transparency makes it difficult to assess whether there was simply more hate speech on the platform, or whether the company made substantive changes to its policies after the Musk acquisition. “It’s pretty hard to untangle often because Twitter isn’t going to be completely transparent about this stuff,” he says.
That lack of transparency is likely to get worse. Twitter announced in February that it would no longer provide free access to its API, the tool that allows academics and researchers to download and use the platform’s data. “For researchers who want to get a more comprehensive picture of how hate speech is changing the longer Elon Musk leads the company, it is certainly much more difficult now,” Hickey says.
In the months since Musk took over Twitter, major public news outlets like National Public Radio, the Canadian Broadcasting Corporation, and other public media outlets have abandoned the platform after being labeled “state-sponsored,” a designation previously used only for Russian, Chinese, and Iranian state media. Yesterday, Musk reportedly threatened to reassign NPR’s Twitter account.
Meanwhile, state-sponsored media seem to be thriving on Twitter. An April report from the Atlantic Council’s Digital Forensic Research Lab found that these accounts gained tens of thousands of new followers after Twitter stopped suppressing them.
In December, previously banned accounts were allowed back on the platform, including right-wing academic Jordan Peterson and prominent misogynist Andrew Tate, who was later arrested in Romania on human trafficking charges. Liz Crokin, a proponent of the QAnon and Pizzagate conspiracy theories, was also reinstated under Musk’s leadership. On March 16, Crokin falsely claimed in a tweet that talk show host Jimmy Kimmel had depicted a pedophile symbol in a skit on his show.
Recent changes to Twitter’s verification system, Twitter Blue, which lets users pay for blue checkmarks and greater exposure on the platform, have also added to the chaos. In November, a tweet from a fake account posing as pharmaceutical giant Eli Lilly announced that insulin would be free. The tweet caused the company’s stock to drop nearly 5 percent. But Ahmed says the implications of pay-to-play verification are much greater.
“Our analysis showed that Twitter Blue was being used as a weapon, especially by people spreading disinformation,” said CCDH’s Ahmed. “Scientists, journalists, they find themselves in an incredibly hostile environment where their information doesn’t achieve the reach enjoyed by bad actors spreading disinformation and hate.”
Despite Twitter’s protests, Ahmed says, the study confirms what many civil society organizations have been saying for months. “Twitter’s strategy in response to all this data from different organizations showing things were getting worse was to step up and say, ‘No, we have data showing the opposite.’”