Racist and violent ideas jump from the web’s fringes to mainstream sites

    On March 30, the young man accused of the mass shooting at a Tops supermarket in Buffalo browsed a wealth of racist and anti-Semitic websites. On BitChute, a video-sharing site known for hosting right-wing extremism, he listened to a lecture on the decline of the American middle class by a Finnish extremist. On YouTube he found a gruesome video shot from a car driving through Black neighborhoods of Detroit.

    Over the course of the week that followed, he showed off his writing online, hanging out in anonymous forums on Reddit and 4chan and reading articles about race on HuffPost and Medium. He watched local television news reports about heinous crimes. He toggled between “documentaries” on extremist websites and weapons tutorials on YouTube.

    The young man, who was indicted by a grand jury last week, has been portrayed by authorities and some media outlets as an outcast who acted alone when he killed 10 Black people at the supermarket and wounded three others. Yet he inhabited numerous online communities where he and others consumed and shared racist and violent content.

    As the number of mass shootings escalates, experts say many of the disturbing ideas fueling the atrocities are no longer relegated to a handful of hard-to-find corners of the web. More and more outlets, fringe and mainstream alike, are hosting bigoted content, often in the name of free speech. And the inability, or unwillingness, of online services to contain violent content threatens to expose more people to it.

    Many of the images and passages in the young man’s extensive writings, including a diary and a 180-page “manifesto,” had been circulating online for years, often infiltrating some of the world’s most popular sites, such as Reddit and Twitter. His path to radicalization, illustrated in these documents, reveals the limits of efforts by companies like Twitter and Google to moderate posts, images and videos that promote extremism and violence. Enough of that content remains that it can act as a pipeline, leading users to more extreme websites just a click or two away.

    “It’s pretty prolific on the Internet,” said Eric K. Ward, a senior fellow at the Southern Poverty Law Center who is also executive director of the Western States Center, a nonprofit research organization. “It doesn’t just fall into your lap; you have to go look for it. But once you start looking for it, the problem is that it begins raining down on a person in abundance.”

    The Buffalo attack has refocused attention on the role that social media and other websites continue to play in violent extremism, with criticism from both the public and government officials.

    “The fact that this act of barbarity, this execution of innocent people, can be streamed live on social media platforms and not be taken down within a second says to me that there is a responsibility,” said New York Governor Kathy Hochul after the Buffalo shooting. Four days later, the state’s attorney general, Letitia James, announced that she had opened an investigation into the role of the platforms.

    Facebook pointed to its rules and policies banning hateful content. In a statement, a spokeswoman said the platform detects more than 96 percent of content tied to hate organizations before it is reported. Twitter declined to comment. Some of the posts on Facebook, Twitter and Reddit that The New York Times identified through reverse image searches have been removed, and some accounts sharing the images were suspended.

    The man charged with the murders, Payton Gendron, 18, detailed his plans for the attack on Discord, a chat app that emerged from the video game world in 2015, and streamed the shooting live on Amazon-owned Twitch. Twitch removed his video within two minutes, but many of the sources of misinformation he cited remain online even now.

    His paper trail offers a chilling glimpse of how he planned a deadly attack online, collecting tips on weapons and tactics and drawing inspiration from like-minded racists and past attacks, which he largely mimicked in his own. Taken together, the content presented a distorted, racist view of reality, one the shooter saw as an alternative to prevailing views.

    “How do you avoid a shooter like me, you ask?” he wrote on Discord in April, more than a month before the shooting. “The only way is to prevent them from learning the truth.”

    His writings map in detail the websites that motivated him. Much of the material he compiled consisted of links or images chosen to match his racist views, a reflection of the online life he led.

    By his own admission, the young man’s radicalization began not long after the start of the Covid-19 pandemic, when, like millions of other Americans, he was largely confined to his home. He described getting his news mainly from Reddit before moving to 4chan, the online message board. He followed boards about guns and the outdoors before finding another devoted to politics, eventually settling in a forum where a poisonous hodgepodge of racist and extremist misinformation flowed freely.

    While he frequented fringe sites like 4chan, he also spent a great deal of time on mainstream ones, by his own account, most notably YouTube, where he found graphic police body-camera footage and videos of gun tips and tricks. As the day of the attack neared, the gunman watched more YouTube videos of mass shootings and of police officers involved in gunfights.

    YouTube said it had reviewed all the videos that appeared in the diary. Three videos were removed for linking to websites that violate YouTube’s firearms policy, which “prohibits content intended to teach viewers how to make firearms, make accessories that convert a firearm to automatic fire, or live streaming content that shows someone wielding a firearm,” said Jack Malon, a YouTube spokesperson.

    Central to the shooting, like others before it, was the false belief that an international Jewish conspiracy aims to replace white voters with immigrants who will gradually take political power in America.

    The conspiracy theory, known as the “great replacement,” has roots that go back at least to the tsarist-era Russian anti-Semitic hoax “The Protocols of the Elders of Zion,” which purported to reveal a Jewish plot to dominate Christian Europe.

    The idea recently resurfaced in the work of two French writers, Jean Raspail and Renaud Camus, who, four decades apart, imagined waves of immigrants taking power in France. It was Mr. Camus, a socialist turned far-right populist, who popularized the term “le grand remplacement” in a 2011 book of that name.

    Mr. Gendron, according to the documents he posted, did not appear to have read either of them; instead, he attributed the idea of the “great replacement” to the online writings of the gunman who murdered 51 Muslims at two mosques in Christchurch, New Zealand, in 2019.

    Following that attack, New Zealand’s prime minister, Jacinda Ardern, led an international pact called the Christchurch Call, in which governments and major tech companies committed to eliminating terrorist and extremist content online. The agreement carried no legal penalties; the Trump administration declined to sign, citing free-speech concerns.

    Mr. Gendron’s online experience shows that the writings and video clips tied to the Christchurch shooting remain available as inspiration for other racially motivated acts of violence. He referred repeatedly to both.

    The Anti-Defamation League warned last year that the “great replacement” had moved from fringe white-supremacist circles into the mainstream, pointing to chants by protesters at the 2017 “Unite the Right” rally in Charlottesville, Virginia, which erupted into violence, and to Tucker Carlson’s comments on Fox News.

    “Most of us don’t know the original story,” said Mr. Ward of the Southern Poverty Law Center. “What we know is the narrative, and the narrative of great replacement theory has been given so much credibility by elected officials and public personalities that the origin story no longer has to be told. People simply begin to absorb it as if it were conventional wisdom. And that’s the scary thing.”

    Despite all the efforts some major social media platforms have made to moderate online content, the algorithms they use — often designed to show users messages they’ll read, view, and click — can accelerate the spread of disinformation and other malicious content.

    Media Matters for America, a liberal-leaning nonprofit, said last month that its researchers had found at least 50 ads on Facebook over the past two years promoting aspects of the “great replacement” and related themes. Many of the ads came from candidates for political office, even though the company, now known as Meta, announced in 2019 that it would ban white nationalist and white separatist content from Facebook and Instagram.

    The organization’s researchers also found that 907 posts on the same topics on right-wing sites attracted more than 1.5 million engagements, far more than posts intended to debunk them.

    Although Mr. Gendron’s video of the shooting was removed from Twitch, it resurfaced on 4chan even while he was still at the crime scene. The video has since spread to other fringe platforms like Gab and, eventually, to mainstream platforms like Twitter, Reddit and Facebook.

    The advent of social media has enabled nefarious ideas and conspiracy theories that once simmered in relative isolation to spread through society and bring together people animated by hate, said Angelo Carusone, the president of Media Matters for America.

    “They’re not isolated anymore,” he said. “They are connected.”