
The Instagram Founders’ News App Artifact Is Actually an AI Play

The invasion of chatbots has disrupted the plans of countless companies, including some that have been working on that very technology for years (looking at you, Google). But not Artifact, the news discovery app created by Instagram co-founders Kevin Systrom and Mike Krieger. When I spoke to Systrom this week about his startup, a long-awaited successor to the multibillion-user social network that has powered Meta for the past several years, he made a point of emphasizing that Artifact is a product of the recent AI revolution, even if it was conceived before GPT started chatting. In fact, Systrom says he and Krieger started with the idea of harnessing the power of machine learning, and then ended up with a news app after looking for a serious problem that AI could help solve.

That problem is the difficulty of finding high-quality, relevant news articles, the ones each person most wants to see, without having to wade through irrelevant clickbait, misleading partisan spin, and low-calorie distractions to reach those stories. Artifact presents what looks like a standard feed of links to news stories, with headlines and descriptive excerpts. But unlike the links that pop up on Twitter, Facebook, and other social media, what determines the selection and ranking isn’t who suggests the stories but the content of the stories themselves. Ideally, it’s the content each user wants to see, drawn from publications vetted for reliability.

The news app Artifact can now use AI to rewrite headlines that users have flagged as misleading.

Courtesy of Nokto

What makes that possible, Systrom says, is his small team’s dedication to the AI transformation. While Artifact doesn’t converse with users like ChatGPT, at least not yet, the app uses a homegrown large language model that plays a major role in choosing which news stories each individual sees. Under the hood, Artifact processes news articles so that their content can be represented by a long string of numbers.

By comparing those numerical representations of available news stories to the ones a particular user has preferred (judged by their clicks, reading time, or expressed desire to see stories on a given topic), Artifact serves up a collection of stories tailored to each unique individual. “The advent of these large language models allows us to condense content into these numbers, and then allows us to find matches for you much more efficiently than you could have in the past,” says Systrom. “The difference between us and GPT or Bard is that we don’t generate text, we understand it.”
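
To make that matching step concrete, here is a minimal sketch of content-based ranking over numerical representations, with a toy word-count embedding standing in for Artifact’s actual language model; the names VOCAB, embed, cosine, and rank_stories are illustrative inventions, not drawn from Artifact’s code.

```python
import numpy as np

# Tiny stand-in vocabulary; a real model works over learned features.
VOCAB = ["ai", "startup", "election", "recipe", "celebrity", "news"]

def embed(text: str) -> np.ndarray:
    """Toy embedding: represent an article's content as a vector of
    word counts. A production model would produce dense learned
    embeddings, but the matching logic is analogous."""
    words = text.lower().split()
    return np.array([words.count(w) for w in VOCAB], dtype=float)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two vectors: 1.0 means pointing the same way."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def rank_stories(user_vector: np.ndarray, articles: list[str]) -> list[str]:
    """Order candidate stories by how closely each one's vector
    matches the reader's preference vector."""
    return sorted(articles, key=lambda t: cosine(embed(t), user_vector),
                  reverse=True)

# A user profile accumulated from stories they clicked on or read:
user_vector = embed("ai startup news") + embed("election news")

print(rank_stories(user_vector, [
    "celebrity shares a surprising recipe",
    "ai startup announces new model",
    "election results roundup",
]))
```

Cosine similarity is just one common choice for comparing such vectors; at Artifact’s scale, a system would more likely use approximate nearest-neighbor search over stored vectors than a full sort.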

That doesn’t mean Artifact has ignored the recent explosion of AI that does generate text for users. The startup has a business relationship with OpenAI that provides access to the API for GPT-4, OpenAI’s latest and greatest language model, which powers the premium version of ChatGPT. When an Artifact user selects a story, the app offers the option to have the technology condense the article into a few bullet points so that users can grasp the gist of the story before reading further. (Artifact warns that since the summary is AI-generated, “it may contain errors”.)
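
That summarization step can be sketched with the OpenAI Python client; the prompt wording and the summarize function below are assumptions for illustration, not Artifact’s actual implementation.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(article_text: str) -> str:
    """Ask GPT-4 to condense a story into a few bullet points."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Summarize this news article in 3-4 short, "
                        "factual bullet points."},
            {"role": "user", "content": article_text},
        ],
    )
    return response.choices[0].message.content

bullets = summarize("Full text of the selected story goes here...")
# Per the caveat Artifact shows users, label the output as AI-generated:
print(bullets + "\n\nAI-generated summary; it may contain errors.")
```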

Today, Artifact takes another leap aboard the generative-AI rocket ship in an attempt to address an annoying problem: clickbait headlines. The app already gives users a way to flag clickbait stories, and if multiple people tag an article, Artifact stops circulating it. But, Systrom explains, sometimes the problem isn’t the story but the headline. It can overpromise, or mislead, or tease the reader into clicking to find information withheld by the headline. From a publisher’s point of view, winning more clicks is a big plus, but it’s frustrating for users, who may feel manipulated.

Systrom and Krieger have come up with a futuristic way to attack this problem. If a user flags a headline as misleading, Artifact submits the story’s content to GPT-4. The model then analyzes the text and writes its own headline. That more descriptive title is the one the user sees in their feed. “Ninety-nine times out of 100, that title is both factual and clearer than the original one the user is questioning,” says Systrom. At first, that headline is shared only with the user who complained. But if multiple users report a clickbait title, all of Artifact’s users see the AI-generated headline instead of the publisher’s. Eventually, says Systrom, the system will figure out how to identify and replace offending headlines without user input. (GPT-4 could do that on its own now, but Systrom doesn’t yet trust it enough to hand the process over to the algorithm.)
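
Pieced together from Systrom’s description, that flag-and-rewrite loop might look roughly like the sketch below; the flag threshold, prompt text, and function names are all hypothetical, since Artifact hasn’t published its implementation.

```python
from collections import defaultdict
from openai import OpenAI

client = OpenAI()
FLAG_THRESHOLD = 3  # hypothetical cutoff; the article doesn't give a number

flag_counts: defaultdict[str, int] = defaultdict(int)
rewrites: dict[str, str] = {}          # story_id -> AI-written headline
global_overrides: dict[str, str] = {}  # headlines shown to every user

def rewrite_headline(story_text: str) -> str:
    """Have GPT-4 write a plainly descriptive headline from the story body."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Write one factual, non-clickbait headline "
                        "for this article."},
            {"role": "user", "content": story_text},
        ],
    )
    return response.choices[0].message.content.strip()

def flag_headline(story_id: str, story_text: str) -> str:
    """Record one reader's clickbait flag and return the headline they
    should now see. The flagging reader gets the rewrite immediately;
    once enough readers flag the same story, everyone gets it."""
    flag_counts[story_id] += 1
    if story_id not in rewrites:
        rewrites[story_id] = rewrite_headline(story_text)
    if flag_counts[story_id] >= FLAG_THRESHOLD:
        global_overrides[story_id] = rewrites[story_id]
    return rewrites[story_id]
```

The final step Systrom describes, replacing offending headlines with no user flags at all, would amount to running rewrite_headline proactively on every story, which he says he doesn’t yet trust GPT-4 to do unsupervised.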