Anthropic has new rules for a more dangerous AI landscape

Aug 16, 2025 12:05 AM

Anthropic has updated the usage policy for its Claude AI chatbot in response to growing concerns about safety. In addition to introducing stricter cybersecurity rules, Anthropic now specifies some of the most dangerous weapons that people should not develop using Claude.

Anthropic doesn’t detail the tweaks made to its weapons policy in the post summarizing its changes, but a comparison between the company’s old usage policy and its new one reveals a notable difference. Though Anthropic previously prohibited the use of Claude to “produce, modify, design, market, or distribute weapons, explosives, dangerous materials or other systems designed to cause harm to or loss of human life,” the updated version expands on this by specifically prohibiting the development of high-yield explosives, along with biological, nuclear, chemical, and radiological (CBRN) weapons.

In May, Anthropic implemented “AI Safety Level 3” protection alongside the launch of its new Claude Opus 4 model. The safeguards are designed to make the model more difficult to jailbreak, as well as to help prevent it from assisting with the development of CBRN weapons.

In its post, Anthropic also acknowledges the risks posed by agentic AI tools, including Computer Use, which lets Claude take control of a user’s computer, as well as Claude Code, a tool that embeds Claude directly into a developer’s terminal. “These powerful capabilities introduce new risks, including potential for scaled abuse, malware creation, and cyber attacks,” Anthropic writes.

The AI startup is responding to these potential risks by folding a new “Do Not Compromise Computer or Network Systems” section into its usage policy. This section includes rules against using Claude to discover or exploit vulnerabilities, create or distribute malware, develop tools for denial-of-service attacks, and more.

Additionally, Anthropic is loosening its policy around political content. Instead of banning the creation of all kinds of content related to political campaigns and lobbying, Anthropic will now only prohibit people from using Claude for “use cases that are deceptive or disruptive to democratic processes, or involve voter and campaign targeting.” The company also clarified that its requirements for all its “high-risk” use cases, which come into play when people use Claude to make recommendations to individuals or customers, apply only to consumer-facing scenarios, not business use.
