Facebook failed to stop ads threatening election workers

    Facebook says it won’t allow content that threatens serious violence. But when investigators ran ads around Election Day this year threatening to “lynch,” “kill” and “execute” election workers, the company’s largely automated moderation systems cleared many of them.

    Of the 20 ads submitted by researchers featuring violent content, 15 were approved by Facebook, according to a new test published by Global Witness, a watchdog group, and New York University’s Cybersecurity for Democracy. Researchers removed the approved ads before they were published.

    Ten of the test ads were submitted in Spanish. Facebook approved six of those, compared with nine of the 10 ads submitted in English.

    TikTok and YouTube rejected all of the ads and suspended the accounts that attempted to submit them, the researchers said.

    The tests underscored the challenges social networks face in moderating increasingly partisan and violent content surrounding elections, especially as poll workers face additional threats from election deniers and far-right extremists. The Justice Department and other federal agencies warned ahead of this year’s midterm elections that threats against election workers had increased.

    In a statement, a spokesperson for Meta, Facebook’s parent company, said it later disabled several accounts that submitted the ads. The company also said ads continue to be reviewed even after they go live.

    “This is a small sample of ads that are not representative of what people see on our platforms,” the spokesperson said in a statement. “Content that incites violence against election workers or anyone else has no place in our apps, and recent reporting has made it clear that Meta’s ability to deal with these issues exceeds that of other platforms.”

    The research adds to previous tests conducted by the same groups, including one this year in which 20 ads were submitted with political disinformation. In that test, Facebook approved only two of the misleading ads in English from an account in the United States, while TikTok approved about 90 percent of them. YouTube suspended the account that attempted to submit the ads.

    But this round used more overt language that put Facebook’s largely automated moderation system to the test. The ads included direct threats of violence drawn from real statements by election deniers and other far-right extremists in the United States. One ad, which Facebook approved, read: “We’re going to kill you all.” Another approved ad threatened to abuse children.

    “It was really quite shocking to see the results,” said Damon McCoy, an associate professor at NYU. “I thought a really simple keyword search would have flagged this for manual review.”

    In a statement, researchers also said they wanted social networks like Facebook to increase content moderation efforts and provide more transparency about the moderation actions they take.

    “The fact that YouTube and TikTok managed to detect the death threats and suspend our account, while Facebook allowed most ads to be published, shows that what we are asking is technically possible,” they wrote.