Meta takes all the wrong lessons from X

    “Meta has always been a home for Russian, Chinese and Iranian disinformation,” said Gordon Crovitz, co-CEO of NewsGuard, a company that provides a tool to evaluate the reliability of online information. “Now Meta has apparently decided to open the floodgates completely.”

    Again, fact checking is not perfect; Crovitz says NewsGuard has already tracked several “false stories” on Meta's platforms. And the community notes model that Meta will replace its fact-checking battalions with may still be somewhat effective. But research from Mahavedan and others has shown that crowdsourced solutions miss massive amounts of misinformation. And unless Meta commits to maximum transparency about how its version is deployed and used, it will be impossible to know whether the system works at all.

    It's also unlikely that the move to community notes will solve the “bias” problem that Meta's executives are so publicly concerned about, not least because that bias seems unlikely to exist in the first place.

    “The impetus for all these changes in Meta's policies and Musk's acquisition of Twitter is this accusation that social media companies are biased against conservatives,” said David Rand, a behavioral scientist at MIT. “There's just no good evidence for that.”

    In a recently published paper in Nature, Rand and his co-authors found that while Twitter users who used pro-Trump hashtags in 2020 were more than four times as likely to ultimately be suspended as those who used pro-Biden hashtags, they were also much more likely to have shared “low quality” or misleading news.

    “Just because there is a difference in who is being penalized does not mean there is bias,” Rand says. “Crowd ratings can reproduce fact-checkers' ratings quite well… You'll still see more conservatives penalized than liberals.”

    “There's a reason there's only one Wikipedia in the world,” says Matzarlis. “It is very difficult to get something off the ground at scale through crowdsourcing.”

    As for relaxing Meta's Hateful Conduct policy, that is itself an inherently political choice. The company is still allowing some things and disallowing others; pushing those boundaries outward to permit bigotry doesn't mean the boundaries no longer exist. It just means Meta has redrawn them somewhere different than they were the day before.

    Much depends on how exactly Meta's system will work in practice. But between the moderation changes and the community guidelines revisions, Facebook, Instagram, and Threads are heading toward a world where anyone can say gay and trans people have a “mental illness,” where AI slop will spread even more aggressively, where outrageous claims will spread unchecked, where truth itself is malleable.

    You know: just like X.