Dozens of fringe news websites, content farms and fake reviewers are using artificial intelligence to create inauthentic content online, according to two reports released Friday.
The deceptive AI content included fabricated events, medical advice and celebrity death hoaxes, the reports said, raising new concerns that the transformative technology could quickly reshape the online misinformation landscape.
The two reports were released separately by NewsGuard, a company that tracks online disinformation, and ShadowDragon, a company that provides resources and training for digital investigations.
“News consumers are trusting news sources less and less, in part because it has become so difficult to distinguish a generally reliable source from a generally untrustworthy one,” Steven Brill, NewsGuard’s CEO, said in a statement. “This new wave of AI-made sites will only make it harder for consumers to know who is bringing them the news, further eroding trust.”
NewsGuard identified 125 websites, ranging from news outlets to lifestyle sites and publishing in 10 languages, whose content was written entirely or largely with AI tools.
The sites included a health information portal that published more than 50 AI-generated articles offering medical advice, according to NewsGuard.
In an article on the site about identifying end-stage bipolar disorder, the opening paragraph read: “As a language model AI, I don’t have access to the most up-to-date medical information or the ability to diagnose. In addition, ‘end-stage bipolar’ is not a recognized medical term.” The article went on to describe the four classifications of bipolar disorder, which it incorrectly described as “four main stages.”
The websites were often riddled with advertisements, suggesting that the inauthentic content was produced to drive clicks and generate ad revenue for the websites’ owners, who were often unknown, NewsGuard said.
The total includes 49 websites using AI content that NewsGuard first identified earlier this month.
ShadowDragon also found inauthentic content on mainstream websites and social media, including Instagram, and in Amazon reviews.
“Yes, as an AI language model, I can certainly write a positive product review on the Active Gear Waist Trimmer,” read a five-star review published on Amazon.
Researchers were also able to reproduce some reviews using ChatGPT, finding that the bot often pointed to “standout features” and concluded that it would “highly recommend” the product.
The company also pointed out several Instagram accounts that appeared to be using ChatGPT or other AI tools to write descriptions under images and videos.
To find the examples, researchers looked for telltale error messages and canned responses often produced by AI tools. Some websites contained AI-written warnings that the requested content contained misinformation or promoted harmful stereotypes.
“As an AI language model, I cannot provide biased or political content,” read a post on an article about the war in Ukraine.
ShadowDragon found similar posts on LinkedIn, in Twitter posts and on far-right message boards. Some of the Twitter posts were published by well-known bots, such as ReplyGPT, an account that produces a tweet reply when prompted. But others turned out to be from regular users.
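Neither report describes the researchers’ tooling, but the basic search is simple to sketch. The Python snippet below is a minimal illustration, not the companies’ actual method: it scans a passage for a hypothetical list of canned disclaimers drawn from the quotes above.

import re

# Hypothetical list of canned AI disclaimers. The phrases come from quotes
# in the reports; the researchers' actual search terms are not public.
TELLTALE_PHRASES = [
    "as an ai language model",
    "as a language model ai",
    "i cannot provide biased or political content",
]

def find_ai_telltales(text: str) -> list[str]:
    """Return every telltale phrase found in the text (case-insensitive)."""
    lowered = text.lower()
    return [phrase for phrase in TELLTALE_PHRASES if phrase in lowered]

# Example: the Amazon review quoted above trips the first phrase.
review = ("Yes, as an AI language model, I can certainly write a positive "
          "product review on the Active Gear Waist Trimmer.")
print(find_ai_telltales(review))  # ['as an ai language model']

A search this crude catches only the laziest cases, where the tool’s refusal or boilerplate was pasted verbatim; content that has been lightly edited would slip past it.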