Anthropic’s Claude Code gets ‘safer’ auto mode
Anthropic has launched an “auto mode” for Claude Code, a new feature that lets the AI make permissions-level decisions on users’ behalf. The company says the feature offers vibe coders a safer middle ground between constant handholding and giving the model dangerous levels of autonomy.
Claude Code can act independently on users’ behalf, a useful but risky capability, since it can also do things users don’t want, like deleting files, sending out sensitive data, or executing malicious code or hidden instructions. Auto mode is designed to prevent this, flagging and blocking potentially risky actions before they run and giving the agent a chance to try again or ask the user to intervene.
Right now, auto mode is only available as a research preview for Team plan users. Anthropic says access will expand to include Enterprise and API users in “the coming days.”
Anthropic warns the tool is experimental and “doesn’t eliminate” risk entirely, recommending developers use it in “isolated environments.”