If AI changes a task, it usually changes the role around it.

This is where many adoption plans stay too shallow. They automate a piece of work, train people on the tool, and leave the role definition untouched. The job description, success metrics, interfaces, quality standards, and manager expectations remain the same.

Then people behave inconsistently, which should surprise nobody.

A role is not a bundle of tasks. It is a set of accountabilities, judgments, interfaces, and standards. When AI changes how work is produced, the role has to be redesigned around the new shape of the work.

Take a customer support agent. If AI drafts replies and suggests resolutions, the agent's value shifts. They still need product knowledge and customer empathy, but they also need judgment about when the draft is safe, when the case is unusual, when tone matters more than speed, and when the issue points to a product defect. The role becomes less about typing every answer from scratch and more about resolution quality, exception judgment, and learning capture.

If the metrics stay focused only on handle time, the redesign fails. The agent will optimize for speed even when AI makes false confidence cheap.

Take a sales rep. AI can research accounts, summarize calls, draft follow-ups, and update CRM fields. The rep's role should shift toward account judgment: which signal matters, what the buyer actually cares about, where the deal is at risk, who must be multi-threaded, and what next step changes the deal. If leadership treats AI as a way to make reps send more emails, they have redesigned nothing.

Take a finance analyst. AI can explain variance, draft commentary, and flag anomalies. The analyst's role should shift toward source judgment, causal reasoning, scenario interpretation, and decision support. If the analyst becomes a copy editor for generated commentary, the company has made the job worse.

Role redesign starts with a blunt question: what should this person be better at now that AI handles some production work?

The answer may be judgment. It may be customer trust. It may be exception handling. It may be system improvement. It may be quality ownership. It may be cross-functional translation. Whatever it is, the role needs to say so.

Then update the interfaces.

Who does this role now interact with? If AI reduces one handoff but creates a new review path, the interface changed. If the person now sends exceptions to a specialist instead of escalating everything to a manager, the interface changed. If the person is responsible for improving the playbook based on corrections, the interface changed.

Update the standards.

What counts as good work now? A fast AI-assisted answer may be bad work if it misses account context. A polished generated memo may be bad work if the assumptions were never checked. A clean CRM summary may be bad work if it hides the actual deal risk.

Update the skills.

Prompting is only a small part of the skill change. People need to understand source quality, confidence calibration, review criteria, edge cases, escalation thresholds, and the business consequences of errors. In many roles, taste and judgment matter more because production friction is lower.

Update the manager expectations.

Managers should stop asking only whether people used AI. That is a weak question. Better questions: did the work improve, did the person make better judgments, did exceptions route cleanly, did quality hold, did the workflow learn from corrections?

Role redesign does not mean writing a new HR document every time a tool changes. It means making the new operating expectation explicit enough that people can perform the job well.

A simple role redesign note can cover:

  • what AI now handles
  • what the person still owns
  • which judgments matter more
  • what must be reviewed
  • what gets escalated
  • what quality means
  • how corrections improve the system
  • which metrics no longer fit

If that note is hard to write, the adoption plan is probably underdesigned.

AI should make strong people more leveraged, not turn them into nervous supervisors of machine output. That requires changing the role, not only the task list.


This is part 7 of 10 in Work Design for the AI Era.