Identity and permissions have to shape what context an agent can see, infer, remember, and reuse.

Agent access goes beyond tool permissions because the context itself can be sensitive. A user may be allowed to ask for a customer summary but not allowed to see internal churn-risk labels, pricing exceptions, employee notes, legal strategy, or another region's pipeline.
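This kind of field-level sensitivity can be sketched as a redaction pass over a record before it ever reaches the agent. The field names and permission strings below are illustrative, not a real schema:

```python
# Sketch: field-level redaction keyed on a per-user permission set.
# Field and permission names are illustrative placeholders.

SENSITIVE_FIELDS = {
    "churn_risk": "see_risk_labels",
    "pricing_exception": "see_pricing",
    "employee_notes": "see_internal_notes",
}

def redact_record(record, user_permissions):
    """Return a copy of the record without fields the user may not see."""
    return {
        field: value
        for field, value in record.items()
        if SENSITIVE_FIELDS.get(field) is None
        or SENSITIVE_FIELDS[field] in user_permissions
    }

record = {"name": "Acme Corp", "arr": 120_000, "churn_risk": "high"}
print(redact_record(record, {"see_pricing"}))
# {'name': 'Acme Corp', 'arr': 120000}
```

The user can still ask for the summary; the churn-risk label simply never enters the context that answers them.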

The context layer should evaluate who is asking, which agent is acting, what task is being performed, where the output will go, and whether the agent is writing externally or working internally. A fact that belongs in an internal escalation may be unsafe in a customer email.
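Those evaluation dimensions can be made concrete as a request object plus a gate keyed on the output's destination. This is a minimal sketch assuming just two audiences; the facts and tags are invented for illustration:

```python
# Sketch: a context gate that checks where the output will go.
# The dataclass fields mirror the dimensions above; names are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class ContextRequest:
    user: str
    agent: str
    task: str
    destination: str  # "internal" or "external"

# Each fact is tagged with the widest audience it may reach.
FACTS = [
    ("payment 30 days late", "internal"),
    ("renewal date is June 1", "external"),
]

def visible_facts(req):
    """Internal requests see everything; external ones only external-safe facts."""
    if req.destination == "internal":
        return [text for text, _ in FACTS]
    return [text for text, aud in FACTS if aud == "external"]

email = ContextRequest("sam", "drafting-agent", "customer email", "external")
print(visible_facts(email))  # ['renewal date is June 1']
```

The same fact set yields different context depending on destination, which is exactly the escalation-versus-customer-email distinction above.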

Inference needs boundaries too. If an agent can see support complaints, payment delays, and usage drops, it may infer churn risk even if the explicit risk field is hidden. That can be useful inside the right workflow. It can also leak sensitive judgment if reused in the wrong context.
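One way to bound inference is to tag any derived judgment with the workflows it was produced for, so the conclusion cannot silently travel. A rough sketch under that assumption, with invented signal and workflow names:

```python
# Sketch: derived inferences carry an allowlist of workflows.
# Signal and workflow names are illustrative placeholders.

def derive_churn_signal(signals):
    """Combine raw signals into an inference scoped to one workflow."""
    if {"support_complaints", "payment_delay", "usage_drop"} <= signals:
        return {"inference": "likely churn risk",
                "allowed_workflows": {"internal_escalation"}}
    return None

def reuse(inference, workflow):
    """Release the inference only inside a workflow it was scoped to."""
    if workflow in inference["allowed_workflows"]:
        return inference["inference"]
    return None

risk = derive_churn_signal({"support_complaints", "payment_delay", "usage_drop"})
print(reuse(risk, "internal_escalation"))  # likely churn risk
print(reuse(risk, "customer_email"))       # None
```

The agent can still reason about churn inside the escalation workflow, but the inference itself is stopped at the boundary of any other context.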

Memory makes this sharper. A fact shown for one task should not automatically become available later. Temporary access should expire. Redacted context should stay redacted. Internal labels should not be carried into future drafts as casual background knowledge.
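These memory rules (grants expire, redactions persist) can be sketched as a small store with wall-clock TTLs. The API is illustrative, not a real memory framework:

```python
# Sketch: a scoped memory where access grants expire and redactions stick.
# Assumes monotonic-clock TTLs; class and method names are illustrative.
import time

class ScopedMemory:
    def __init__(self):
        self._grants = {}      # fact -> expiry timestamp
        self._redacted = set()

    def grant(self, fact, ttl_seconds):
        if fact not in self._redacted:   # redacted context stays redacted
            self._grants[fact] = time.monotonic() + ttl_seconds

    def redact(self, fact):
        self._redacted.add(fact)
        self._grants.pop(fact, None)

    def recall(self, fact):
        expiry = self._grants.get(fact)
        return expiry is not None and time.monotonic() < expiry

mem = ScopedMemory()
mem.grant("internal churn label", ttl_seconds=0.01)
assert mem.recall("internal churn label")
time.sleep(0.02)
assert not mem.recall("internal churn label")   # temporary access expired
mem.redact("legal strategy note")
mem.grant("legal strategy note", ttl_seconds=60)
assert not mem.recall("legal strategy note")    # redaction wins over the grant
```

The key design choice is that `redact` is checked inside `grant`, so a later workflow cannot quietly re-admit a fact as background knowledge.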

This is why permissioning belongs in the context layer as well as the application layer. The agent needs context filtered before reasoning rather than blocked only at the final tool call.
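The difference between blocking at the tool call and filtering before reasoning can be shown in a few lines: the filter runs between retrieval and the model, so the model never holds the sensitive fact at all. The retrieval stub and tags below are invented for illustration:

```python
# Sketch: filter context before the reasoning step, not after it.
# fetch_context is a stand-in for retrieval; tags are illustrative.

def fetch_context(query):
    return [
        {"text": "customer raised a P1 ticket", "tag": "internal"},
        {"text": "contract renews in Q3", "tag": "shareable"},
    ]

def filter_for(audience, docs):
    allowed = {"internal": {"internal", "shareable"},
               "external": {"shareable"}}[audience]
    return [d for d in docs if d["tag"] in allowed]

def run_agent(query, audience):
    """The model only ever sees pre-filtered context."""
    context = filter_for(audience, fetch_context(query))
    return [d["text"] for d in context]   # stand-in for the reasoning step

print(run_agent("draft renewal email", "external"))
# ['contract renews in Q3']
```

A check only at the final tool call would let the internal fact shape the draft even if it never appears verbatim; filtering before reasoning avoids that.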

A simple test helps: would the person requesting this task be allowed to know the fact, use the fact, and send the fact to the intended audience? If any answer is no, the context boundary has to intervene.
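The three-question test reduces to a single conjunction: every answer must be yes. A minimal sketch, with the predicate names as placeholders for real policy checks:

```python
# Sketch: the know / use / send test as one gate.
# The three booleans stand in for real policy lookups.

def boundary_check(answers):
    """The context boundary must intervene unless every answer is yes."""
    return all(answers.get(q, False) for q in ("know", "use", "send"))

# A user may know and use a churn label internally,
# but not send it to a customer:
assert boundary_check({"know": True, "use": True, "send": True})
assert not boundary_check({"know": True, "use": True, "send": False})
```

Note the default of `False` for a missing answer: an unevaluated question counts as a no, so the boundary fails closed.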

This is part 5 of 10 in The AI Context Layer.