The old unit of work was easy to describe: a person performs a task.

That description was never fully true, but it was close enough for many operating systems. A sales rep researches an account. A support agent answers a ticket. A finance analyst explains variance. A product manager writes a brief. A recruiter screens a candidate. A manager reviews the output.

AI breaks the simplicity of that picture.

The useful unit of work is now a loop: human, AI, context, review, exception, decision, and learning. Sometimes the AI assists the human. Sometimes the human reviews the AI. Sometimes the AI reviews the human. Sometimes the system handles the normal case and humans handle exceptions. Sometimes the most important work is deciding which mode applies.

If the company still manages this as "person performs task," it will misread the work.

Consider a finance analyst writing a monthly variance explanation. In the old model, the analyst pulls data, checks anomalies, writes commentary, sends it to a manager, and updates the packet. With AI, the model can draft commentary, compare current numbers against prior periods, flag unusual changes, and suggest questions. That does not mean the analyst's work was reduced to editing. The work unit changed.

Now the analyst must know which sources are reliable, what context the model lacks, which anomalies matter, which narrative is plausible, and when a variance needs escalation. The manager must know what was generated, what was checked, what assumptions were used, and what changed after review. The output may arrive faster, but the work requires a different control surface.

The same applies almost everywhere.

In customer success, the unit is not "write the QBR deck." It is to prepare an account judgment: usage, value realized, risk, expansion potential, unresolved issues, executive narrative, and next action. AI can assemble pieces, but the work unit only improves if the account owner can make a better judgment.

In support, the unit is not "draft a reply." It is to resolve the issue with the right level of confidence, tone, evidence, and escalation. AI can draft, classify, summarize, and suggest fixes. The work unit fails if nobody designs the review and exception path.

In product, the unit is not "write requirements." It is to turn customer evidence, constraints, tradeoffs, and desired behavior into a decision artifact the team can build from. AI can help produce the artifact. It cannot decide by itself which ambiguity should be resolved before engineering starts.

The new unit of work needs a clearer contract.

What is the input? A ticket, account, incident, opportunity, metric variance, customer request, bug report, contract clause, candidate profile, or product idea.

What is the desired output? A resolved case, recommended decision, approved exception, customer response, updated record, implementation plan, shipped change, or documented risk.

Who owns the outcome? This must be a person or a clearly named role. AI can participate in the work. It cannot be accountable to a customer, regulator, executive, or team.

What does AI do? Draft, retrieve, classify, compare, check, route, generate options, summarize context, monitor signals, propose actions.

What does the human do? Frame, judge, approve, reject, escalate, communicate, negotiate, decide, learn, and own the consequences.

What gets reviewed? Not everything deserves the same review. Some outputs need full review. Some need spot checks. Some need threshold-based review. Some need review only when confidence is low or risk is high.

What becomes an exception? The exception path is where many AI workflows either become safe or become chaotic. A good exception is defined before the work starts: low confidence, missing context, high-value customer, legal risk, angry tone, policy ambiguity, data mismatch, or unusual request.
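The review and exception rules above can be written down before the work starts. Here is a minimal sketch of that idea as a routing function; the thresholds, flag names, and tier labels are illustrative assumptions, not prescriptions.

```python
# Illustrative exception triggers; a real workflow would define its own,
# before the work starts, not after something goes wrong.
EXCEPTION_FLAGS = {
    "missing_context", "high_value_customer", "legal_risk",
    "angry_tone", "policy_ambiguity", "data_mismatch", "unusual_request",
}

def route(confidence: float, flags: set) -> str:
    """Decide how an AI-produced output is handled.

    Returns "exception" (a human takes over), "full_review",
    or "spot_check". Thresholds here are placeholder assumptions.
    """
    if confidence < 0.6 or flags & EXCEPTION_FLAGS:
        return "exception"    # defined escape hatch, not chaos
    if confidence < 0.85:
        return "full_review"  # mid confidence: review everything
    return "spot_check"       # normal case: sampled review

# A confident draft with a legal-risk flag still escalates.
print(route(0.95, {"legal_risk"}))  # exception
print(route(0.95, set()))           # spot_check
```

The point of the sketch is not the numbers. It is that the routing decision is explicit and inspectable, rather than living in each reviewer's head.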

How does the loop learn? Corrections should improve prompts, playbooks, source data, routing rules, examples, or policy. If every correction lives only in a person's head, the system does not get better.
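Taken together, the contract questions above fit in a small data structure. A hypothetical sketch, with field names and example values that are assumptions rather than a standard:

```python
from dataclasses import dataclass, field

@dataclass
class WorkUnit:
    """A unit of work as a contract, not a task (illustrative sketch)."""
    input: str          # what triggers the loop
    output: str         # the desired outcome
    owner: str          # a person or named role; never the AI
    ai_does: list       # draft, retrieve, classify, compare, ...
    human_does: list    # frame, judge, approve, own the consequences
    review_policy: str  # full, spot-check, or threshold-based
    exceptions: list    # conditions defined before the work starts
    corrections: list = field(default_factory=list)  # the learning channel

    def record_correction(self, note: str) -> None:
        """Capture a correction so it can feed prompts, playbooks, or
        routing rules, instead of living only in someone's head."""
        self.corrections.append(note)

variance_review = WorkUnit(
    input="metric variance",
    output="documented explanation with escalation decision",
    owner="finance analyst",
    ai_does=["draft commentary", "compare periods", "flag anomalies"],
    human_does=["judge plausibility", "escalate", "own the narrative"],
    review_policy="full review by manager",
    exceptions=["data mismatch", "variance above threshold"],
)
variance_review.record_correction("Q3 driver was a billing change, not churn")
```

Nothing about the sketch is sophisticated. What matters is that every field has an answer someone wrote down.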

This is the new design unit. It is bigger than a task and smaller than an org chart.

That middle layer is where many AI programs fail. Executives talk about transformation. Tool teams talk about features. Managers are left to figure out how the actual work should change.

The team that wins makes the unit of work explicit. It can point to a workflow and say: here is the input, here is the owner, here is what AI does, here is what humans do, here is what we review, here is what we escalate, here is the quality bar, here is how we learn.

Once that exists, AI becomes part of an operating system.

Without it, AI is just another contributor to the pile.


This is part 3 of 10 in Work Design for the AI Era.