The first serious question in an AI system is not what model it uses. It is who is acting.
That sounds obvious until an agent sends a message, queries a database, updates a CRM field, writes a file, or opens a support ticket. Is that Antoine acting through a tool? Is it the support team? Is it a service account? Is it a workflow owned by operations? If something goes wrong, who had permission, who approved the action, and who can revoke it?
Many early AI systems dodge this by hiding behind the human user. The AI acts with whatever the human can access. That is convenient and dangerous. It works for drafts and summaries. It gets messy when the system starts taking actions across tools.
Humans have identity. Services have identity. AI agents need identity too.
An agent identity does not have to be dramatic. It means the system can say: this specific agent, owned by this team, running this workflow, was allowed to perform these actions, under these conditions, for this user or customer, during this time window.
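As a concrete sketch, that sentence maps onto an ordinary record. Every field name below is an assumption for illustration, not a standard:

```python
from dataclasses import dataclass
from datetime import datetime

# A minimal sketch of an agent identity record. Field names are
# illustrative; the point is that each question above becomes a
# field the system can actually answer.
@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str               # this specific agent
    owner_team: str             # owned by this team
    workflow: str               # running this workflow
    allowed_actions: frozenset  # allowed to perform these actions
    conditions: dict            # under these conditions (e.g. suggest-only)
    on_behalf_of: str           # for this user or customer
    valid_from: datetime        # during this time window...
    valid_until: datetime       # ...and no longer

triage_bot = AgentIdentity(
    agent_id="support-triage-v3",
    owner_team="support-ops",
    workflow="ticket-triage",
    allowed_actions=frozenset({"ticket:classify", "reply:draft"}),
    conditions={"refunds": "suggest_only"},
    on_behalf_of="customer:acme-corp",
    valid_from=datetime(2025, 1, 6, 9, 0),
    valid_until=datetime(2025, 1, 6, 17, 0),
)
```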
Without that, permissions become fog.
A sales research agent may need read access to CRM accounts, call notes, website data, and public filings. It probably does not need permission to change opportunity stage. A support triage agent may need to classify tickets, draft replies, and suggest refunds. It should not issue refunds without a rule or approval. An engineering agent may need repo access and CI logs. It should not deploy to production because someone copied a powerful API key into its environment.
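Written as explicit grants, those three agents might look like the sketch below. The action names are invented; the shape is what matters: reads granted outright, risky writes denied or gated, nothing inherited from a shared key.

```python
# Hypothetical per-agent policies. Reads are granted, risky writes are
# denied or gated behind approval, and no agent inherits a broad token
# from its environment.
AGENT_POLICIES = {
    "sales-research": {
        "allow": {"crm:accounts:read", "crm:notes:read", "web:read", "filings:read"},
        "deny": {"crm:opportunity:write"},
    },
    "support-triage": {
        "allow": {"ticket:classify", "reply:draft", "refund:suggest"},
        "require_approval": {"refund:issue"},
    },
    "eng-assistant": {
        "allow": {"repo:read", "ci:logs:read"},
        "deny": {"prod:deploy"},  # even if a powerful key leaks into its env
    },
}
```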
This is not anti-autonomy. It is how autonomy becomes usable.
The control plane needs a permission model that handles at least five objects: human users, agents, workflows, tools, and data scopes. The relationships between them matter. A user may be allowed to run an agent, but the agent should still have narrower permissions than the user. A workflow may be allowed to use a tool, but only in dry-run mode. A model may be allowed to see a document, but not store it in long-term memory.
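A check over those five objects might compose like the sketch below. The relationship tables and names are assumptions, not a real policy store; the point is that a request passes only if every edge in the chain allows it, and the agent's scope narrows the user's rather than widening it.

```python
# Hypothetical relationship tables. In a real control plane these live
# in a policy store; module-level sets are enough to show the shape.
USER_RUNS = {("antoine", "sales-research")}
USER_GRANTS = {"antoine": {"crm:read", "crm:write"}}
AGENT_GRANTS = {"sales-research": {"crm:read"}}  # narrower than the user
WORKFLOW_TOOL_MODES = {("account-brief", "crm"): {"dry-run"}}
AGENT_MAY_RETAIN = {("sales-research", "doc:contract-123"): False}

def is_allowed(user, agent, workflow, tool, mode, action, data_scope):
    """A request passes only if every edge in the chain allows it."""
    return (
        (user, agent) in USER_RUNS                    # user may run this agent
        and action in AGENT_GRANTS.get(agent, set())  # agent's own, narrow scope
        and action in USER_GRANTS.get(user, set())    # never wider than the user
        and mode in WORKFLOW_TOOL_MODES.get((workflow, tool), set())  # e.g. dry-run only
        and (action != "memory:store"                 # may see a document...
             or AGENT_MAY_RETAIN.get((agent, data_scope), False))  # ...but not keep it
    )
```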
The practical design starts with scopes.
Read is different from write. Draft is different from send. Recommend is different from execute. Query is different from export. Create is different from delete. Temporary context is different from retained memory. Internal notes are different from customer-visible messages.
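Keeping those distinctions is cheap in code: roughly one enum instead of one broad token. A sketch, with invented names:

```python
from enum import Enum

# Each pair splits a distinction the broad token collapses.
class Scope(Enum):
    READ = "read"            # not WRITE
    WRITE = "write"
    DRAFT = "draft"          # not SEND
    SEND = "send"
    RECOMMEND = "recommend"  # not EXECUTE
    EXECUTE = "execute"
    QUERY = "query"          # not EXPORT
    EXPORT = "export"
    CREATE = "create"        # not DELETE
    DELETE = "delete"
    CONTEXT = "context"      # temporary, not retained MEMORY
    MEMORY = "memory"
    INTERNAL = "internal"    # not CUSTOMER_VISIBLE
    CUSTOMER_VISIBLE = "customer_visible"

# A grant then names the narrow half explicitly:
support_triage = {Scope.READ, Scope.DRAFT, Scope.RECOMMEND, Scope.INTERNAL}
```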
Most permission mistakes come from collapsing those distinctions because the first version was easier to build with a broad token.
The second design issue is delegation. If a manager asks an agent to analyze team performance, the agent should not automatically inherit access to every private HR note the manager can see. If an executive asks for customer risk, the workflow should respect customer, region, contract, and confidentiality boundaries. If a user invokes an agent inside Slack, Slack membership should not become the permission model for every connected system.
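The safe default is intersection, not inheritance: the agent acts with the overlap of its own grants and the requester's, never the union. A sketch with hypothetical scope names:

```python
def effective_scopes(requester: set, agent: set) -> set:
    """Delegation as intersection, not inheritance. The agent gets the
    overlap of what the requester may see and what it was itself
    granted, never the requester's full access."""
    return requester & agent

# The manager can see private HR notes; the analysis agent was never
# granted them, so delegation does not smuggle them in.
manager = {"team:metrics", "team:goals", "hr:private-notes"}
analysis_agent = {"team:metrics", "team:goals"}
assert effective_scopes(manager, analysis_agent) == {"team:metrics", "team:goals"}
```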
The third issue is revocation. AI systems create new operational debris: prompts, memories, cached context, generated artifacts, logs, embeddings, tool sessions, and derived summaries. Removing a person's access is not enough if the agent can still use stale context gathered while access existed. Revocation needs to reach the places where context persists.
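In practice that makes revocation a loop over persistence points, not a single credential delete. A sketch, with a stand-in store interface:

```python
# InMemoryStore is a stand-in for whatever actually holds each kind of
# operational debris; the list of stores is the point, not the class.
class InMemoryStore:
    def __init__(self, name: str):
        self.name = name
        self.entries = set()  # (principal, scope) pairs

    def grant(self, principal: str, scope: str) -> None:
        self.entries.add((principal, scope))

    def purge(self, principal: str, scope: str) -> None:
        self.entries.discard((principal, scope))

# One store per place where context can outlive access.
PERSISTENCE_POINTS = [
    InMemoryStore("credentials"),     # the easy part
    InMemoryStore("tool_sessions"),   # live sessions
    InMemoryStore("cached_context"),  # prompts and cached documents
    InMemoryStore("memories"),        # retained long-term memory
    InMemoryStore("embeddings"),      # vectors derived from the documents
    InMemoryStore("artifacts"),       # generated summaries and files
]

def revoke(principal: str, scope: str) -> None:
    """Revocation only counts if it visits every persistence point."""
    for store in PERSISTENCE_POINTS:
        store.purge(principal, scope)
```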
This is where many companies will discover that their AI rollout exposed the weak identity hygiene they already had. Shared accounts. Long-lived keys. Over-broad admin roles. No ownership metadata. No clean offboarding. AI did not create the mess. It made the mess operationally active.
A usable permission model should be boring in the right way (see the manifest sketch after this list):
- every agent has an owner
- every workflow has a purpose
- every tool permission has a scope
- every elevated action has a reason
- every temporary grant expires
- every retained memory has access rules
- every exception is logged
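Written down, that checklist is roughly one manifest per agent. A sketch with invented field names, one per bullet:

```python
from datetime import datetime, timedelta

AGENT_MANIFEST = {
    "agent": "sales-research",
    "owner": "revops@example.com",                    # every agent has an owner
    "purpose": "Build account briefs before calls",   # every workflow has a purpose
    "tool_permissions": {
        "crm": {"scope": "read"},                     # every tool permission has a scope
        "web": {"scope": "read"},
    },
    "elevated_actions": {
        "crm:opportunity:write": {"reason_required": True},  # every elevated action has a reason
    },
    "temporary_grants": [
        {"scope": "filings:read",
         "expires": datetime.now() + timedelta(hours=8)},    # every temporary grant expires
    ],
    "memory": {"retain": "account_facts",
               "readers": ["revops"]},                # every retained memory has access rules
    "exceptions": {"log_to": "audit/agents/sales-research"}, # every exception is logged
}
```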
The goal is not to make every AI action wait for approval. That would kill the point. The goal is to let low-risk actions run freely because the boundaries are clear.
Identity is the start of the control plane because everything else depends on it. Budgets need owners. Logs need actors. Tool calls need scopes. Memory needs access rules. Escalation needs accountability.
If the system cannot say who acted, it cannot really say what happened.
And if it cannot say what happened, the company is trusting the most powerful part of the new workflow to a shrug.
This is part 2 of 10 in The AI Control Plane.
