Twitter staff cuts spawned a deluge of spam porn that drowned out Chinese protest news

    Widespread protests erupted across China this weekend in what amounted to “the largest demonstration of opposition to the ruling Communist Party in decades,” AP News reported. Many protesters attempted to capture events live to spread awareness and encourage solidarity on Twitter. The demonstrations were so vigorous that Chinese authorities appeared to be giving in to the protesters’ demands by easing the tight lockdown restrictions that led to the protests.

    This could have been a moment that showed how Twitter under Elon Musk is still a relevant source of news, still a place where free speech demonstrably reaches the masses, and thus still the only place to follow escalating protests like this. Instead, The Washington Post reported that a deluge of “useless tweets” effectively buried live footage of protests. This prevented users from easily following the protest news, while Twitter seemingly did nothing to stop what researchers described as an apparent Chinese influence operation.

    For hours, these accounts tweeted the names of Chinese cities where protests took place in posts that mainly promoted pornography and adult escort services. The tactic worked, preventing users searching for those place names in Chinese from easily finding updates about the protests. Investigators told The Post that the tweets came from a series of Chinese-language accounts that hadn’t been used for months, if not years. The tweets began appearing early Sunday, shortly after protesters began calling on Communist Party leaders to resign.

    Investigators took note of the suspected Chinese influence operation early on Sunday. Some contacted Twitter directly. Eventually, a third-party researcher managed to reach a current Twitter employee, who confirmed that Twitter was working to resolve the issue. However, experts told The Post that Twitter’s response only seemed to mitigate the problem, not solve it completely. Stanford Internet Observatory Director Alex Stamos told The Post his team has continued to investigate the scope and impact of the operation.

    Stamos did not immediately respond to Ars’ request for comment. Twitter reportedly no longer has a communications team.

    A former Twitter employee told The Post that what Stamos’ team observed is a common tactic used by authoritarian regimes to restrict access to news. Normally, Twitter’s anti-propaganda team would have manually deleted the accounts, the former employee said. But like many teams hit by Twitter’s layoffs, firings, and resignations, that team has been heavily downsized.

    “All Chinese influence operations and analysts at Twitter have all resigned,” the former Twitter employee told The Post.

    Scrutiny grows as content removal is automated

    In cutting back its content moderation teams, Musk appears to be relying mostly on automated takedowns to spot violations that employees previously checked manually. The problem extends beyond China. Also this weekend, French regulators said they have become doubtful that Twitter can effectively stem the spread of misinformation, and the New Zealand government had to step in and contact Twitter directly after Twitter failed to remove banned footage of the terror attack in Christchurch, New Zealand.

    A spokesperson for New Zealand Prime Minister Jacinda Ardern told The Guardian that “Twitter’s automated reporting feature did not pick up the content as harmful.” Apparently, the entire Twitter team that New Zealand had previously worked with to block such extremist content had been fired.

    Now Ardern’s office says “only time will tell” whether Twitter is truly committed to removing harmful content, and other governments worldwide seem to agree. Today, Arcom, France’s communications regulatory agency, told Reuters that “Twitter has shown a lack of transparency in its fight against misinformation.”

    According to European Union data reviewed by AP, Twitter had already slowed down over the past year in removing hate speech and misinformation, even before Musk took over. But it is Musk who will have to respond to governments trying to ensure that Twitter’s content moderation will actually work to prevent extremism and disinformation campaigns from spreading online and causing real harm.

    By mid-2023, Musk will feel more pressure to respond to concerns from countries in the EU, where stricter rules protecting online safety will soon be introduced. If he doesn’t, he risks fines of up to 6 percent of Twitter’s global revenue, the AP reported.

    For now, however, according to the AP, Musk is actually doing the opposite of what online security experts want. As Musk grants “amnesty” to suspended Twitter accounts, experts told the AP they predict misinformation and hate speech will only increase on the platform.

    These experts included members of Twitter’s Trust and Safety Council, who confirmed that the group has not met since Musk took over and said it remains unclear whether a meeting scheduled for mid-December will take place. So far, when deciding whether to bring back suspended accounts, Musk seems to favor Twitter polls over expert opinion. One council member, Danielle Citron, a cyber civil rights expert at the University of Virginia, told the AP that “the whole point of the permanent suspension is that these people were so bad they were bad for the company.”

    Ars couldn’t immediately reach Citron for comment, but she told the AP that, like rumors that Twitter could break at any moment, Musk’s amnesty for suspended accounts is another “disaster waiting to happen.”