In recent years, a conspiracy theory called the “Dead Internet theory” has gained momentum as large language models (LLMs) like ChatGPT increasingly generate text and even social media interactions that can be found online. The theory posits that most social internet activity today is artificial and designed to manipulate people into engagement.
On Monday, software developer Michael Sayman launched a new AI-populated social networking app called SocialAI that feels like it brings that conspiracy theory to life, allowing users to interact exclusively with AI chatbots instead of other humans. It’s available in the iPhone App Store, but so far it’s been met with sharp criticism.
After Sayman announced SocialAI as “A private social network where you receive millions of AI-generated comments with feedback, advice, and reflections on every post you make,” computer security expert Ian Coldwater joked on X, “This sounds like hell.” Software developer and frequent AI commentator Colin Fraser expressed a similar sentiment: “I don’t mean this in a mean way or as a dunk or anything, but this really does sound like hell. Like hell with a capital H.”
Sayman, 28, was previously a product lead at Google and has also bounced between Facebook, Roblox, and Twitter over the years. In an announcement on X, Sayman wrote that he had dreamed of creating the service for years, but the technology wasn’t ready until now. He sees it as a tool that can help people who feel lonely or rejected.
“SocialAI is designed to help people feel heard and provide a space for reflection, support, and feedback that functions as a close-knit community,” Sayman wrote. “It’s a response to all the times I’ve felt isolated, or like I needed a sounding board but didn’t have one. I know this app won’t solve all of life’s problems, but I hope it can be a small tool for others to reflect, grow, and feel seen.”
As The Verge reports in an excellent roundup of sample interactions, SocialAI lets users choose which types of AI followers they want, including categories like “supporters,” “nerds,” and “skeptics.” These AI chatbots then respond to users’ messages with short comments and reactions on almost any topic, including nonsense “lorem ipsum” text.
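Sayman hasn’t published technical details, but the pattern the app describes (a per-persona prompt steering short replies to each user post) is simple to sketch. The example below is purely illustrative: it assumes the OpenAI Python SDK as the backing model, and the persona names and prompts are invented stand-ins for SocialAI’s follower categories, not its actual implementation.

```python
# Illustrative sketch of persona-based AI followers. SocialAI has not
# disclosed its stack; all persona names and prompts here are invented.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical follower archetypes, loosely modeled on SocialAI's categories
PERSONAS = {
    "supporter": "You are an encouraging social media follower. "
                 "Reply to the post in one upbeat sentence.",
    "nerd": "You are a detail-obsessed follower. Reply to the post in "
            "one sentence with a technical nitpick or fun fact.",
    "skeptic": "You are a polite skeptic. Reply to the post in one "
               "sentence questioning its premise.",
}

def fake_replies(post: str) -> dict[str, str]:
    """Generate one short reply per persona for a user's post."""
    replies = {}
    for name, system_prompt in PERSONAS.items():
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": post},
            ],
            max_tokens=60,
        )
        replies[name] = response.choices[0].message.content.strip()
    return replies

if __name__ == "__main__":
    for persona, reply in fake_replies("Just shipped my first app!").items():
        print(f"{persona}: {reply}")
```

The short, persona-constrained system prompts would also explain the “somewhat canned” feel described below: every reply is squeezed into one sentence of a fixed emotional register.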
Sometimes the bots can be too helpful. On Bluesky, a user asked for instructions on how to make nitroglycerin from common household chemicals and received several enthusiastic bot responses with detailed steps, though different bots gave different recipes, none of which were completely accurate.
SocialAI’s bots have limitations, unsurprisingly. Aside from simply making up false information (which in this case may be more of a feature than a bug), they often use a consistent format of short responses that feels somewhat canned. Their simulated emotional range is also limited. Attempts to elicit strongly negative responses from the AI are generally unsuccessful, with the bots avoiding personal attacks even when users max out the trolling and sarcasm settings.