Fable, a popular social media app that describes itself as a haven for “bookworms and binge watchers,” created an AI-powered year-end summary feature recapping the books users read in 2024. It was meant to be playful and fun, but some of the summaries took on a strangely combative tone. Writer Danny Groves' summary, for example, asked if he's “ever in the mood for a straight, cis white male perspective” after calling him a “diversity lover.”
Book influencer Tiana Trammell's summary, meanwhile, ended with the following advice: “Just remember to look for a white author every now and then, okay?”
Trammell was stunned, and she quickly realized she wasn't alone after sharing her experience with Fable's recaps on Threads. “I have received several messages,” she says, from people whose summaries had inappropriately commented on “disability and sexual orientation.”
Since the debut of Spotify Wrapped, annual digest features have become ubiquitous across the web, giving users a look at the number of books and news articles they've read, the songs they've listened to, and the workouts they've completed. Some companies now use AI to generate these recaps outright or to polish how the metrics are presented. Spotify, for example, now offers an AI-generated podcast in which robots analyze your listening history and make guesses about your life based on your tastes. Fable jumped on the trend by using OpenAI's API to generate summaries of its users' reading habits over the past twelve months, but it didn't expect the AI model to spit out commentary that took on the air of an anti-woke pundit.
Fable later apologized on several social media channels, including Threads and Instagram, where it posted a video of an executive issuing the mea culpa. “We are deeply sorry for any pain some of our Reader summaries may have caused this week,” the company wrote in the caption. “We will do better.”
Kimberly Marsh Allee, Fable's head of community, told WIRED that the company is working on a series of changes to improve its AI summaries, including an option to opt out for people who don't want them and clearer disclosures indicating that the summaries are AI-generated. “For now, we've removed the part of the model that playfully grills the reader, and instead the model simply summarizes the user's taste in books,” she says.
For some users, adjusting the AI doesn't feel like an adequate response. Fantasy and romance writer AR Kaufer was stunned when she saw screenshots of some of the recaps on social media. “They should say they are doing away with AI completely. And they need to make a statement, not only about the AI but also an apology to those affected,” Kaufer says. “This 'apology' on Threads comes across as disingenuous, saying the app is 'playful,' as if that somehow excuses the racist/sexist/ableist quotes.” In response to the incident, Kaufer decided to delete her Fable account.
So did Trammell. “The appropriate course of action would be to disable the feature and conduct rigorous internal testing, incorporating newly implemented security measures to best ensure that no further platform users are exposed to harm,” she says.