A Lawsuit Against Perplexity Calls Out Fake News Hallucinations

    Perplexity did not respond to requests for comment.

    In an emailed statement to WIRED, News Corp CEO Robert Thomson compared Perplexity unfavorably to OpenAI. “We applaud principled companies like OpenAI, who understand that integrity and creativity are essential if we are to realize the potential of artificial intelligence,” the statement said. “Perplexity is not the only AI company misusing intellectual property and it is not the only AI company we will pursue with vigor and rigor. We have made clear that we would rather woo than sue, but, for the sake of our journalists, our writers and our company, we must challenge the content kleptocracy.”

    However, OpenAI is facing its own accusations of trademark dilution. In New York Times v. OpenAI, the Times alleges that ChatGPT and Bing Chat attribute fabricated quotes to the Times, and it accuses OpenAI and Microsoft of damaging its reputation through trademark dilution. In one example cited in the lawsuit, the Times claims that Bing Chat asserted the Times had called red wine (in moderation) a “heart-healthy” food, when in fact it had not; the Times says its reporting has debunked claims about the health benefits of moderate alcohol consumption.

    “Copying news articles to operate substitutive commercial generative AI products is unlawful, as we have made clear in our letters to Perplexity and our lawsuits against Microsoft and OpenAI,” said Charlie Stadtlander, NYT director of external communications. “We applaud this lawsuit from Dow Jones and the New York Post, which is an important step in ensuring publishers' content is protected from this type of misappropriation.”

    If publishers prevail in arguing that hallucinations can violate trademark law, AI companies could face “enormous problems,” said Matthew Sag, a professor of law and artificial intelligence at Emory University.

    “It is absolutely impossible to guarantee that a language model will not hallucinate,” says Sag. In his view, the way language models operate, by predicting words that sound right in response to prompts, is always a kind of hallucination; sometimes it just happens to sound more plausible than others.

    “We only call it a hallucination if it doesn't match our reality, but the process is exactly the same whether we like the output or not.”