
Elon Musk’s Twitter makes Meta look smart

    It was the first day of April 2022, and I was sitting in the conference room of a law firm in midtown Manhattan during a meeting of Meta’s Oversight Board, the independent body that scrutinizes the company’s content decisions. And for a few minutes, desperation seemed to set in.

    The topic at hand was Meta’s controversial Cross Check program, which gave special treatment to posts from certain power users – celebrities, journalists, government officials, and the like. For years, this program operated in secret, and Meta even misled the board about its scope. When details of the program were leaked to The Wall Street Journal, it became clear that millions of people received that special treatment, meaning their posts were less likely to be deleted when flagged by algorithms or reported by other users for breaking rules against things like hate speech. The idea was to prevent errors in cases where mistakes would have more impact – or cause Meta more embarrassment – because of the speaker’s prominence. Internal documents revealed that Meta researchers had doubts about the program’s effectiveness. Only after that exposure did Meta ask the board to review the program and recommend what the company should do with it.

    The meeting I witnessed was part of that reckoning. And the tone of the discussion led me to wonder if the board would suggest that Meta shut down the program altogether, in the name of fairness. “The policy must be for all people!” exclaimed one board member.

    That didn’t happen. This week, the social media world took a break from watching the content moderation trainwreck Elon Musk is conducting on Twitter, as the Oversight Board finally delivered its Cross Check report, delayed by Meta’s foot-dragging in providing information. (The company never gave the board a list indicating who received the special permission that kept a post from being taken down, at least until someone gave it a closer look.) The conclusions were damning. Meta claimed the purpose of the program was to improve the quality of its content decisions, but the board concluded it existed more to protect the company’s business interests. Meta never set up processes to monitor the program and assess whether it was fulfilling its mission. The lack of transparency to the outside world was distressing. Finally, Meta all too often failed to deliver the quick personalized review that was supposed to save those posts from hasty deletion. There were simply too many of those cases for Meta’s team to handle. Posts often stayed up for days before receiving secondary attention.

    The best-known example, cited in the original WSJ report, was a post from Brazilian football star Neymar, who in September 2019 shared a sexual image without the consent of the person depicted. Because of the special treatment he received as part of the Cross Check elite, the image – a flagrant policy violation – garnered more than 56 million views before it was finally taken down. A program intended to reduce the impact of bad content decisions instead amplified the impact of terrible content.

    Still, the board did not advise Meta to shut down Cross Check. Instead, it called for an overhaul. That was not an endorsement of the program in any way, but an acknowledgment of the fiendish difficulty of content moderation. The subtext of the Oversight Board’s report was the hopelessness of believing it is even possible to get things right. Meta, like other platforms that give users a voice, has long emphasized growth over caution and hosts massive amounts of content that would require huge expenditures to police properly. Meta spends many millions on moderation, but still makes millions of mistakes. Seriously reducing those errors would cost more than the company is willing to spend. The idea of Cross Check is to minimize the error rate on posts from the most important or prominent people. When a celebrity or statesman uses the platform to address millions, Meta doesn’t want to screw it up.