CNET published AI-generated stories. Then the staff pushed back

    In November, venerable tech outlet CNET began publishing articles generated by artificial intelligence on topics such as personal finance — articles that turned out to be riddled with errors. Now the human members of its editorial staff have unionized and are calling on their bosses to provide better working conditions and more transparency and accountability around the use of AI.

    “In this time of instability, our diverse content teams need job protection, fair compensation, editorial independence and a voice in the decision-making process, especially as automated technology threatens our jobs and reputations,” reads the mission statement of the CNET Media Workers Union, which represents more than 100 workers, including writers, editors, video producers, and other content creators.

    While organizing began before CNET management started rolling out AI, the workers could become one of the first unions to force an employer to put up guardrails around the use of content produced by generative AI services like ChatGPT. Any deal struck with CNET’s parent company, Red Ventures, could set a precedent for how companies approach the technology. Multiple digital media outlets have recently cut staff, and some, like BuzzFeed and Sports Illustrated, have simultaneously embraced AI-generated content. Red Ventures did not immediately respond to a request for comment.

    In Hollywood, AI-generated writing has sparked a workers’ revolt. Striking screenwriters want studios to agree to ban AI authorship and never ask writers to adapt AI-generated scripts. The Alliance of Motion Picture and Television Producers rejected that proposal and instead offered to hold annual meetings to discuss technological advances. Both the screenwriters and CNET’s staff are represented by the Writers Guild of America.

    While CNET bills itself as “your guide to a brighter future,” the 30-year-old publication clumsily stumbled late last year into the new world of generative AI, technology that can create text or images. In January, the science and technology website Futurism revealed that in November CNET had quietly begun publishing AI-written explainers such as “What is Zelle and how does it work?” The stories ran under the byline “CNET Money Staff,” and readers had to hover over it to learn that the articles were written “using automation technology.”

    A deluge of embarrassing revelations followed. The Verge reported that more than half of the AI-generated stories contained factual errors, leading CNET to issue corrections, some of them lengthy, to 41 of the 77 bot-written articles. The tool used by editors was also found to have plagiarized work from competing news outlets, a known tendency of generative AI systems.

    Then-editor-in-chief Connie Guglielmo later wrote that a plagiarism detection tool had been misused or had failed, and that the site was developing additional checks. A former staffer demanded that her byline be removed from the site, fearing that AI would be used to update her stories in an attempt to drive more traffic from Google’s search results.

    In response to the backlash over CNET’s AI project, Guglielmo published an article stating that the outlet had been testing an “internally designed AI engine” and that “AI engines, like humans, make mistakes.” Nevertheless, she promised changes to the site’s disclosure and citation policies and vowed to continue the AI authorship experiment. In March, she stepped down as editor-in-chief and now leads the outlet’s AI editing strategy.