Disinformation researchers are concerned about the consequences of the judge’s order

    A federal judge’s decision this week to restrict government communications with social media platforms could have wide-ranging side effects, according to researchers and groups fighting hate speech, online abuse and disinformation: It could further hinder efforts to curb harmful content.

    Alice E. Marwick, a researcher at the University of North Carolina at Chapel Hill, was one of several disinformation experts who said Wednesday that the ruling could hamper work designed to keep false claims about vaccines and voter fraud from spreading.

    The order, she said, followed other efforts, largely from Republicans, that are “part of an organized campaign pushing back against the idea of disinformation as a whole.”

    Judge Terry A. Doughty issued a preliminary injunction on Tuesday saying that the Department of Health and Human Services and the Federal Bureau of Investigation, along with other parts of the government, must stop communicating with social media companies for “the purpose of urging, encouraging, pressuring, or inducing in any manner the removal, deletion, suppression, or reduction of content containing protected free speech.”

    The ruling stemmed from a lawsuit by the attorneys general of Louisiana and Missouri, who accused Facebook, Twitter and other social media sites of censoring right-wing content, sometimes in collusion with the government. They and other Republicans hailed the judge’s move, in the U.S. District Court for the Western District of Louisiana, as a victory for the First Amendment.

    However, several researchers said the government’s work with social media companies wasn’t a problem as long as it didn’t force them to take down content. Instead, they said, the government has historically informed companies about potentially dangerous messages, such as lies about electoral fraud or misleading information about Covid-19. Most misinformation or disinformation that violates the platforms’ policies is flagged by researchers, nonprofits, or people and software at the platforms themselves.

    “That’s the really important distinction here: The government should be able to inform social media companies about things they believe are harmful to the public,” said Miriam Metzger, a communications professor at the University of California, Santa Barbara, and an affiliate of the Center for Information Technology and Society.

    A bigger concern, researchers said, is a potential chilling effect. The judge’s decision blocked certain government agencies from communicating with some research organizations, such as the Stanford Internet Observatory and the Election Integrity Partnership, about removing social media content. Some of those groups have already been targeted by a Republican-led legal campaign against universities and think tanks.

    Researchers said such restrictions could deter younger scholars from pursuing disinformation research and intimidate the donors who fund crucial grants.

    Bond Benton, an associate professor of communications at Montclair State University who studies disinformation, described the ruling as “a bit of a potential Trojan horse.” On paper, he said, it is limited to the government’s relationship with social media platforms, but it carried the message that misinformation qualifies as speech and its removal as the suppression of speech.

    “Before, platforms could just say we don’t want to host it: ‘No shirt, no shoes, no service,’” said Dr. Benton. “This ruling will probably make platforms a little more careful about that now.”

    In recent years, platforms have relied more on automated tools and algorithms to spot malicious content, limiting the effectiveness of complaints from people outside the companies. Academics and anti-disinformation organizations often complained that platforms didn’t respond to their concerns, said Viktorya Vilk, the director of digital security and free speech at PEN America, a nonprofit that supports free speech.

    “Platforms are very good at ignoring civil society organizations and our requests for help or requests for information or escalation of individual cases,” she said. “They’re less comfortable ignoring the government.”

    Several disinformation researchers feared the ruling could give cover to social media platforms, some of which have already scaled back their efforts to curb misinformation, to be even less vigilant ahead of the 2024 election. They said it was unclear how relatively new government initiatives that had fielded researchers’ concerns and suggestions, such as the White House Task Force to Address Online Harassment and Abuse, would fare.

    For Imran Ahmed, the CEO of the Center for Countering Digital Hate, Tuesday’s decision underscored other issues: the United States’ “particularly fangless” approach to dangerous content compared with places like Australia and the European Union, and the need for rules governing social media platforms’ liability to be updated. Tuesday’s ruling noted that the center had made a presentation to the surgeon general’s office about its 2021 report on online anti-vaccine activists, “The Disinformation Dozen.”

    “You can’t show a nipple at the Super Bowl, but Facebook can still broadcast Nazi propaganda, empower stalkers and bullies, undermine public health and enable extremism in the United States,” said Mr. Ahmed. “This court ruling exacerbates the sense of impunity under which social media companies operate, despite the fact that they are the primary vector for hate and disinformation in society.”