Anthropic hands Claude Code more control, but keeps it on a leash

Why it matters: The move pushes AI autonomy a step further, promising faster development while raising questions about transparency and control.
- Anthropic's new "auto mode" for Claude Code gives the AI more control, enabling it to decide which actions are safe to take without constant human approval.
- Auto mode incorporates AI safeguards to review each action for risky behavior or prompt injection before execution, automatically proceeding with safe actions and blocking unsafe ones.
- This feature builds on existing autonomous coding tools from companies like GitHub and OpenAI but shifts the decision-making authority for permissions from the user to the AI itself.
- Anthropic has not yet detailed the specific criteria its safety layer uses to distinguish safe from risky actions, a point developers will likely want clarified for wider adoption.
- The update is part of a larger push by Anthropic, following the launch of Claude Code Review and Dispatch for Cowork, to create more autonomous AI agents for developers.

Anthropic is advancing AI autonomy with its new "auto mode" for Claude Code, which lets the AI independently decide which actions are safe to execute — a significant step beyond current "vibe coding" practices. The feature, currently in research preview, aims to balance speed and control by integrating AI safeguards against risky behavior and prompt injection attacks, reflecting a broader industry trend toward more self-sufficient AI tools.