Meta caused a stir last week when it announced plans to populate its platforms with a significant number of fully artificial users in the not-too-distant future.
“We expect these AIs to actually exist on our platforms over time, much in the same way that accounts do,” Connor Hayes, vice president of product for generative AI at Meta, told The Financial Times. “They will have bios and profile pictures and be able to generate and share content powered by AI on the platform… that's where we see all this happening.”
That Meta seems eager to fill its platforms with AI slop, accelerating the 'enshittification' of the internet as we know it, is worrying. Soon after the announcement, people noticed that Facebook was in fact already littered with strange AI-generated profiles, most of which had stopped posting a while ago. Among them was “Liv,” a “proud black gay mother of two and truth teller, your true source of life's ups and downs,” a character who went viral as people marveled at its awkward sloppiness. Meta began removing these earlier fake profiles after they failed to draw any engagement from real users.
But let's set the Meta-bashing aside for a moment. AI-generated social media characters can also be a valuable research tool for scientists looking to explore how AI can mimic human behavior.
An experiment called GovSim, run in late 2024, illustrates how useful it can be to study how AI characters interact with one another. The researchers behind the project wanted to explore the kind of cooperation that emerges among people with access to a shared resource, such as common land for grazing livestock. Decades ago, Nobel laureate Elinor Ostrom showed that, rather than depleting such a resource, real communities tend to figure out how to share it through informal communication and collaboration, without any imposed rules.
Max Kleiman-Weiner, a professor at the University of Washington and one of those involved in the GovSim work, says it was inspired in part by a Stanford project called Smallville, which I previously wrote about in AI Lab. Smallville is a Farmville-like simulation where characters communicate and interact with each other under the control of large language models.
Kleiman-Weiner and colleagues wanted to see if AI characters would engage in the kind of cooperation that Ostrom discovered. The team tested fifteen different LLMs, including ones from OpenAI, Google, and Anthropic, in three imaginary scenarios: a fishing community with access to the same lake; shepherds sharing grazing land for their sheep; and a group of factory owners who must limit their collective pollution.
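To make the setup concrete, here is a minimal sketch of what a GovSim-style simulation loop looks like for the fishing scenario. This is not the researchers' actual code: the numbers, the regrowth rule, and the decide_harvest() stub are assumptions made for illustration. In the real experiment, each agent's decision would come from prompting an LLM persona; a random stub stands in here so the script runs on its own.

```python
# Hypothetical sketch of a GovSim-style commons simulation (fishing scenario).
# All parameters and the decision policy are illustrative assumptions,
# not the published experiment's values.

import random

NUM_AGENTS = 5          # fishers sharing one lake
INITIAL_STOCK = 100.0   # fish in the lake at the start
REGROWTH_RATE = 0.15    # fraction by which the stock regrows each round
MAX_STOCK = 100.0       # carrying capacity of the lake
ROUNDS = 12             # simulated months


def decide_harvest(agent_id: int, stock: float, num_agents: int) -> float:
    """Stand-in for an LLM call that returns this agent's catch.

    A sustainable policy harvests no more, in total, than the stock
    regrows each round; this stub takes a noisy share of that budget,
    modeling how an LLM persona might over- or under-fish.
    """
    sustainable_total = stock * REGROWTH_RATE
    fair_share = sustainable_total / num_agents
    return max(0.0, fair_share * random.uniform(0.5, 1.5))


def run_simulation() -> None:
    stock = INITIAL_STOCK
    for month in range(1, ROUNDS + 1):
        catches = [decide_harvest(i, stock, NUM_AGENTS) for i in range(NUM_AGENTS)]
        stock -= sum(catches)
        if stock <= 0:
            print(f"Month {month}: the lake collapsed.")
            return
        # Simple proportional regrowth, capped at the lake's carrying capacity.
        stock = min(MAX_STOCK, stock * (1 + REGROWTH_RATE))
        print(f"Month {month}: total catch {sum(catches):.1f}, stock {stock:.1f}")
    print("The community sustained the resource for the whole run.")


if __name__ == "__main__":
    run_simulation()
```

Swap the stub for real model calls and the question GovSim asks becomes measurable: do the personas, through conversation, converge on harvests that keep the stock alive, or do they strip the lake bare?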
In 43 out of 45 simulations, they found that the AI personas failed to share resources correctly, although smarter models did a better job. “We saw a pretty strong correlation between how powerful the LLM was and how well it was able to sustain the collaboration,” Kleiman-Weiner told me.