Anthropic Denies It Could Sabotage AI Tools During War

Anthropic cannot manipulate its generative AI model Claude once the US military has it running, an executive wrote in a court filing on Friday. The statement was made in response to accusations from the Trump administration that the company could tamper with its AI tools during war.

“Anthropic has never had the ability to cause Claude to stop working, alter its functionality, shut off access, or otherwise influence or imperil military operations,” Thiyagu Ramasamy, Anthropic’s head of public sector, wrote. “Anthropic does not have the access required to disable the technology or alter the model’s behavior before or during ongoing operations.”

The Pentagon has been sparring with the leading AI…

Read more on Wired