Anthropic did not immediately respond to Ars' request for comment on how its guardrails currently work to prevent the alleged jailbreaks, but publishers appear satisfied with those guardrails, having accepted the deal.
Whether AI training on song lyrics is infringing remains unsettled
Now that the question of whether Anthropic has sufficient guardrails to block allegedly harmful outputs has been resolved, Lee wrote, the court can focus on arguments regarding publishers' “request in their Motion for Preliminary Injunction that Anthropic refrain from using unauthorized copies of Publishers' lyrics to train future AI models.”
In its motion opposing the preliminary injunction, Anthropic argued that the relief should be denied.
“Whether generative AI companies can permissibly use copyrighted content to train LLMs without licenses,” Anthropic's filing said, “is currently being litigated in roughly two dozen copyright infringement cases around the country, none of which has sought to resolve the issue in the truncated posture of a preliminary injunction motion. It speaks volumes that no other plaintiff – including the parent company record label of one of the plaintiffs in this very case – has sought preliminary injunctive relief from this conduct.”
In a statement, Anthropic's spokesperson told Ars that “Claude is not designed to be used for copyright infringement, and we have developed numerous processes to prevent such infringement.”
“Our decision to enter into this stipulation is consistent with those priorities,” Anthropic said. “We continue to look forward to demonstrating that, consistent with existing copyright law, using potentially copyrighted material in the training of generative AI models is a quintessential fair use.”
It will likely take months for the lawsuit to be fully resolved, as the question of whether AI training constitutes fair use of copyrighted works is complex and remains hotly contested in court. For Anthropic, the stakes are high: a loss could mean more than $75 million in fines, as well as an injunction forcing the company to disclose and destroy all copyrighted works in its training data.