
Supreme Court sidesteps ruling on scope of Section 230

    The Supreme Court on Thursday handed twin victories to tech platforms, declining in two cases to hold them liable for content posted by their users.

    In a case involving Google, the court rejected, at least for now, attempts to limit the scope of the law that shields the platforms from liability for user content, Section 230 of the Communications Decency Act.

    In a separate case involving Twitter, the court unanimously ruled that another law that allows lawsuits for complicity in terrorism did not apply to the ordinary activities of social media companies.

    The rulings have not definitively resolved the question of what responsibility platforms should have for the content posted and recommended on their sites, an issue that has become increasingly pressing as social media has become ubiquitous in modern life. But the court’s decision to leave the scope of Section 230, which dates back to 1996, untouched for now was welcomed by the tech industry, which has long portrayed the law as integral to the development of the Internet.

    “Companies, scholars, content creators and civil society organizations that joined us in this case will be reassured by this outcome,” Halimah DeLaine Prado, general counsel for Google, said in a statement.

    The Twitter case involved Nawras Alassaf, who was killed in a 2017 terrorist attack at the Reina nightclub in Istanbul, for which the Islamic State claimed responsibility. His family sued Twitter, Google and Facebook, saying they allowed ISIS to use their platforms to recruit and train terrorists.

    Justice Clarence Thomas, writing for the court, said the “plaintiffs’ allegations are insufficient to establish that these defendants aided and abetted ISIS in carrying out the attack in question.”

    He wrote that the defendants transmitted staggering amounts of content. “It appears that for every minute of the day, about 500 hours of video are uploaded to YouTube, 510,000 comments are posted on Facebook and 347,000 tweets are sent on Twitter,” Justice Thomas wrote.

    And he acknowledged that the platforms use algorithms to direct users to content that interests them.

    “So, for example,” Justice Thomas wrote, “someone who watches cooking shows on YouTube is more likely to see cooking videos and ads for cookbooks, while someone who likes to watch professorial lectures might see peer debates and ads for TED Talks.”

    “But,” he added, “not all content on defendants’ platforms is so benign.” Specifically, “ISIS uploaded videos that raised money for weapons of terror and showed brutal executions of both soldiers and civilians.”

    The fact that the platforms failed to remove such content, Justice Thomas wrote, was not enough to establish liability for aiding and abetting, which he said required plausible allegations that they “gave such knowing and substantial assistance to ISIS that they culpably participated in the Reina attack.”

    The plaintiffs had not cleared that bar, Justice Thomas wrote. “The plaintiffs’ allegations fall far short of plausibly alleging that the defendants aided and abetted the Reina attack,” he wrote.

    The platforms’ algorithms did not change the analysis, he wrote.

    “The algorithms appear agnostic as to the nature of the content, matching any content (including ISIS content) with any user more likely to view that content,” Justice Thomas wrote. “So the fact that these algorithms linked some ISIS content to some users does not convert defendants’ passive assistance into active complicity.”

    A ruling to the contrary, he added, would expose the platforms to possible liability for “any terrorist act by ISIS anywhere in the world.”

    The court’s decision in the case, Twitter v. Taamneh, No. 21-1496, allowed the justices not to rule on the scope of Section 230, a law intended to nurture what was then a nascent creation called the Internet.

    Section 230 was enacted in response to a decision holding an online bulletin board liable for what a user had posted because the service had performed some content moderation. The provision reads: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by any other information content provider.”

    Section 230 helped facilitate the rise of massive social networks like Facebook and Twitter by ensuring that the sites did not assume new legal liability with every tweet, status update, and comment. Limiting the scope of the law could expose the platforms to lawsuits alleging that they led people to posts and videos that promoted extremism, incited violence, damaged reputations and caused emotional distress.

    The case against Google was brought by the family of Nohemi Gonzalez, a 23-year-old student who was killed in a restaurant in Paris in November 2015 during terrorist attacks there that also targeted the Bataclan concert hall. The family’s lawyers argued that YouTube, a subsidiary of Google, had used algorithms to push Islamic State videos to interested viewers.

    In a brief, unsigned opinion in the case, Gonzalez v. Google, No. 21-1333, the court said it would not address the application of Section 230 “to a complaint that appears to state little, if any, plausible claim for relief.” The court instead sent the case back to the appeals court “in light of our decision in Twitter.”

    It’s unclear what the ruling will mean for legislative efforts to eliminate or change the legal shield.

    A growing, bipartisan group of lawmakers, academics and activists has become skeptical of Section 230, saying it has shielded giant tech companies from the consequences of the disinformation, discrimination and violent content on their platforms.

    In recent years, they’ve put forward a new argument: that the platforms lose their protection when their algorithms recommend content, target ads, or introduce new connections to their users. These recommendation engines are ubiquitous, powering features such as YouTube’s autoplay and Instagram’s suggestions of accounts to follow. Judges have largely rejected this reasoning.

    Members of Congress have also called for legislative changes. But political realities have largely prevented those proposals from gaining traction. Republicans, angry at tech companies removing posts from conservative politicians and publishers, want the platforms to remove less content. Democrats want the platforms to remove more, such as false information about Covid-19.

    Section 230 critics had mixed reactions to the court’s decision, or lack thereof, in the Gonzalez case.

    Senator Marsha Blackburn, a Tennessee Republican who has criticized big tech platforms, said on Twitter that Congress had to step in to reform the law because the companies “turned a blind eye” to harmful online activity.

    Hany Farid, a computer science professor at the University of California, Berkeley, who signed a letter supporting the Gonzalez family’s case, said he was encouraged that the court had not offered a full defense of the Section 230 liability shield.

    He added that he thought “the door is still open for a better case with better facts” to challenge the tech platforms’ immunity.

    Tech companies and their allies have warned that any changes to Section 230 would cause the online platforms to remove much more content to avoid potential legal liability.

    Jess Miers, legal counsel for Chamber of Progress, a lobbying group representing technology companies including Google and Meta, the parent company of Facebook and Instagram, said in a statement that the arguments in the case made it clear that “changing the interpretation of Section 230 would create more problems than it would solve.”

    David McCabe contributed reporting.