Anthropic adds Claude 4 security measures to limit risk of users developing weapons
May 23, 2025
Anthropic on Thursday said it activated tighter artificial intelligence controls for Claude Opus 4, its latest AI model.
The new AI Safety Level 3 (ASL-3) controls are to “limit the risk of Claude being misused specifically for the development or acquisition of chemical, biological, radiological, and nuclear (CBRN) weapons,” the company wrote in a blog post.
The company, which is backed by Amazon, announced Claude Opus 4 and Claude Sonnet 4 on Thursday, touting the advanced ability of the models to “analyze thousands of data sources, execute long-running tasks, write human-quality content, and perform complex actions,” per a release.
The company said Sonnet 4 did not need the tighter controls.
Jared Kaplan, Anthropic’s chief science officer, noted that the advanced nature of the new Claude models has its challenges.
“The more complex the task is, the more risk there is that the model is going to kind of go off the rails … and we’re really focused on addressing that so that people can really delegate a lot of work at once to our models,” he said.
The company released an updated safety policy in March addressing the risk that AI models could help users develop chemical and biological weapons.