Anthropic's AI models may be able to help analyze classified documents, but the company draws the line at domestic surveillance. That restriction has reportedly angered the Trump administration.
On Tuesday, Semafor reported that Anthropic is facing growing hostility from the Trump administration over the AI company's restrictions on law enforcement uses of its Claude models. Two senior White House officials told the outlet that federal contractors working with agencies such as the FBI and the Secret Service have run into obstacles when attempting to use Claude for surveillance tasks.
The friction stems from Anthropic's usage policies, which prohibit domestic surveillance applications. The officials, who spoke with Semafor anonymously, said they worry that Anthropic enforces its policies selectively based on politics and uses vague terminology that allows a broad interpretation of its rules.
The restrictions affect private contractors working with law enforcement agencies that need AI models for their work. In some cases, Anthropic's Claude models are the only AI systems cleared for top-secret security situations through Amazon Web Services' GovCloud, according to the officials.
Anthropic offers a specific service for national security customers and struck a deal with the federal government to provide its services to agencies for a nominal $1 fee. The company also works with the Department of Defense, although its policies still prohibit the use of its models for weapons development.
In August, OpenAI announced a competing agreement to provide ChatGPT Enterprise access to more than 2 million federal executive branch employees for $1 per agency for one year. The deal came one day after the General Services Administration signed a blanket agreement allowing OpenAI, Google, and Anthropic to supply tools to federal employees.