AI gets interesting when it can use tools. That is also where it gets dangerous.
A model that drafts text can waste time or produce nonsense. A model that can use tools can change records, send messages, create tickets, move money, update code, query private systems, schedule meetings, trigger workflows, or annoy customers at scale.
The answer is not "never give agents tools." That is just opting out of the useful part. The answer is to design action boundaries.
Tool access should not be a yes-or-no decision. It should be a set of verbs with limits. Read, search, draft, simulate, create, update, send, delete, approve, refund, deploy, export. Each verb has a different blast radius. Treating them as one permission is how teams accidentally give an agent the keys to the building because it needed to check the lobby.
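One way to make that concrete is to model a grant as a set of verbs with limits rather than a boolean. The names below (`Verb`, `ToolGrant`) are illustrative, not from any real framework:

```python
# Sketch: tool access as per-verb grants, not a yes/no flag.
# Verb set and class names are assumptions for illustration.
from enum import Enum

class Verb(Enum):
    READ = "read"
    SEARCH = "search"
    DRAFT = "draft"
    CREATE = "create"
    UPDATE = "update"
    SEND = "send"
    DELETE = "delete"
    REFUND = "refund"

class ToolGrant:
    """Permission for one tool: which verbs, under what limits."""
    def __init__(self, tool, verbs, max_calls_per_task=50):
        self.tool = tool
        self.verbs = frozenset(verbs)
        self.max_calls_per_task = max_calls_per_task

    def allows(self, verb):
        return verb in self.verbs

# The agent can look around and stage changes in the CRM,
# but it holds no grant for delete or refund.
crm = ToolGrant("crm", {Verb.READ, Verb.SEARCH, Verb.DRAFT, Verb.UPDATE})
assert crm.allows(Verb.UPDATE)
assert not crm.allows(Verb.DELETE)
```

Checking the lobby no longer implies holding the keys to the building: each verb is granted, or not, on its own.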
A practical control plane separates three layers.
First, discovery. The system can inspect, search, retrieve, and summarize. This is where most early AI use should live. The blast radius is mainly data exposure and bad interpretation.
Second, preparation. The system can draft, stage, simulate, propose, and open a pending change. This is where AI becomes useful without immediately taking irreversible action. A sales agent drafts account updates. A finance agent prepares a journal entry. An engineering agent opens a pull request. A support agent proposes a refund.
Third, execution. The system takes the action. It sends the email, changes the field, issues the refund, merges the code, updates the invoice, or closes the ticket.
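The three layers can be sketched as a classification over verbs. This is a toy mapping under the assumption that every tool call declares its verb; the exact verb sets are illustrative:

```python
# Sketch: discovery / preparation / execution as verb classes.
# Verb membership below is an assumption, not a standard taxonomy.
LAYERS = {
    "discovery":   {"inspect", "search", "retrieve", "summarize"},
    "preparation": {"draft", "stage", "simulate", "propose"},
    "execution":   {"send", "update", "refund", "merge", "close"},
}

def layer_of(verb):
    """Return which layer a verb belongs to, or 'unknown'."""
    for layer, verbs in LAYERS.items():
        if verb in verbs:
            return layer
    return "unknown"
```

A gateway that knows the layer of every call can apply blanket rules per layer, then tighten per action from there.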
Execution is not forbidden. It needs tighter rules.
The most useful pattern is graduated autonomy. Low-risk, reversible actions can run automatically. Medium-risk actions can run with sampling, thresholds, or post-action review. High-risk actions need pre-approval, dual control, or a narrower workflow.
For example, an agent can tag support tickets automatically if confidence is high and the tag only affects routing. It can draft customer replies freely. It may send replies automatically for narrow, low-risk categories after quality has been proven. It should not send legal, billing, security, or angry-customer responses without review.
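Graduated autonomy fits in a small routing function. A minimal sketch, assuming each action arrives tagged with a risk tier, a reversibility flag, and a model confidence score; the tier names and the 0.9 threshold are assumptions, not fixed policy:

```python
# Sketch: route an action by risk tier (tiers and threshold are assumed).
def route(risk, reversible, confidence):
    if risk == "low" and reversible and confidence >= 0.9:
        return "auto"          # run automatically
    if risk == "medium":
        return "post_review"   # run, then sample for review
    return "pre_approval"      # hold for a human approver

# Tagging a ticket: low-risk, reversible, high confidence -> runs itself.
assert route("low", True, 0.95) == "auto"
# A refund: high-risk -> waits for approval no matter how confident.
assert route("high", False, 0.99) == "pre_approval"
```

Note that anything unclassified falls through to pre-approval: the safe default is the slow path, not the fast one.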
The boundary is not "customer-facing bad, internal good." Internal actions can have large blast radius too. A bad CRM update can wreck forecasting. A wrong permission change can expose data. A sloppy code migration can create weeks of cleanup. A hallucinated executive summary can send leadership in the wrong direction.
Good tool boundaries are specific to the action, not the tool.
A CRM tool might allow read access to all assigned accounts, updates only to non-financial fields, bulk changes only in dry-run mode, and opportunity-stage changes only with a human approver. A GitHub tool might allow branch creation and pull requests, but block direct pushes to protected branches. A billing tool might allow invoice lookup and draft credit memos, but require approval for issuing credits.
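Those rules read naturally as a declarative policy table keyed by (tool, action) rather than by tool alone. A sketch with made-up keys and field names:

```python
# Sketch: boundaries keyed by action, not tool.
# All keys and constraint names here are illustrative.
POLICY = {
    ("crm", "read_account"):     {"scope": "assigned_only"},
    ("crm", "update_field"):     {"fields": "non_financial", "approval": None},
    ("crm", "bulk_update"):      {"mode": "dry_run_only"},
    ("crm", "change_stage"):     {"approval": "human"},
    ("github", "create_branch"): {},
    ("github", "open_pr"):       {},
    ("github", "push"):          {"blocked_refs": ["protected"]},
    ("billing", "lookup_invoice"): {},
    ("billing", "draft_credit"):   {},
    ("billing", "issue_credit"):   {"approval": "human"},
}

def constraints_for(tool, action):
    """Missing entries mean the action is not granted at all."""
    return POLICY.get((tool, action))
```

Two actions on the same tool can land in entirely different regimes, which is the point: the tool is the surface, the action is the risk.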
Dry-run mode deserves more respect. It is one of the best ways to scale AI safely. The agent performs the whole reasoning path and produces the exact actions it would take, without committing them. A reviewer can see the proposed changes, catch mistakes, and approve as a batch. Over time, the system can learn which classes of dry-run actions are safe enough to automate.
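The dry-run pattern is simply a planner that returns the exact change set without applying it. A sketch, assuming records are plain dicts; a separate executor (not shown) would apply the batch only after approval:

```python
# Sketch: dry run = compute the exact changes, commit nothing.
# Record shape and field names are assumptions for illustration.
def plan_status_updates(records, new_status):
    """Return the change set the agent would apply, without applying it."""
    return [
        {"id": r["id"], "field": "status",
         "old": r["status"], "new": new_status}
        for r in records
        if r["status"] != new_status
    ]

proposed = plan_status_updates(
    [{"id": 1, "status": "open"}, {"id": 2, "status": "closed"}],
    "closed",
)
# One real change proposed; the no-op on record 2 is filtered out.
# A reviewer approves the batch before any executor touches the data.
```

Because the plan is data, it can be diffed, sampled, logged, and eventually used as evidence that a class of changes is safe to automate.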
The control plane should also enforce action budgets. Not just money. Volume. Frequency. Scope. A workflow that can update one record safely may not be safe when updating 10,000. A tool call that is fine once per ticket may be abusive if repeated in loops. A model should not be allowed to keep trying a destructive action until something accepts it.
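Budgets on volume and retries can be enforced in a few lines. A sketch with assumed limits; in practice these would be per-tenant configuration:

```python
# Sketch: budgets on volume and destructive retries (limits are assumed).
class ActionBudget:
    def __init__(self, max_records=100, max_retries=2):
        self.max_records = max_records
        self.max_retries = max_retries
        self._attempts = {}

    def check_volume(self, record_count):
        """Refuse bulk changes beyond the record budget."""
        if record_count > self.max_records:
            raise PermissionError(
                f"bulk change of {record_count} exceeds budget "
                f"of {self.max_records}")

    def check_retry(self, action_key):
        """Refuse repeated attempts at the same destructive action."""
        n = self._attempts.get(action_key, 0) + 1
        self._attempts[action_key] = n
        if n > self.max_retries:
            raise PermissionError(f"retry limit reached for {action_key}")
```

The retry check is what stops a model from hammering a destructive action in a loop until something accepts it: after the budget is spent, the call fails closed.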
Tool use also needs evidence. Every meaningful action should leave a trail: user or agent identity, input context, model, prompt version, tool called, parameters, output, approval status, result, and linked work object. Without that, operators are left debugging folklore.
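The trail described above maps directly onto a record per action. A sketch using a dataclass; the field names follow the list in the text, and the example values are invented:

```python
# Sketch: one structured record per meaningful action.
# Fields mirror the trail described above; example values are made up.
from dataclasses import dataclass, field, asdict

@dataclass
class ActionRecord:
    actor: str            # user or agent identity
    tool: str             # tool called
    action: str
    parameters: dict      # inputs passed to the tool
    model: str
    prompt_version: str
    approval_status: str  # auto / approved / rejected
    result: str
    work_object: str      # linked ticket, PR, invoice, etc.
    context: dict = field(default_factory=dict)

rec = ActionRecord(
    actor="agent:support-1", tool="billing", action="draft_credit",
    parameters={"invoice": "INV-1001", "amount": 20.0},
    model="example-model", prompt_version="v3",
    approval_status="approved", result="ok", work_object="TICKET-9",
)
```

Serialized with `asdict(rec)`, each record becomes a queryable log line: operators debug from evidence instead of folklore.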
The best AI systems will feel fast because most actions will not need manual review. That is possible only if the boundaries are real.
The goal is to make the agent powerful inside a well-designed box. Not a tiny box. A clear one.
Tool access is where AI leaves the chat window and enters operations. Treat it like operations.
This is part 3 of 10 in The AI Control Plane.
