The worst place to start an AI program is the easiest demo.

The easiest demo is usually a visible task with a before-and-after contrast: write this email, summarize this meeting, generate this code. The demo works. People nod. The tool feels useful.

Then the organization quietly assumes that useful equals important.

It does not.

Before automating, find the bottleneck. The bottleneck is the part of the system that limits overall throughput. It may be a human reviewer, an approval queue, an overloaded specialist, a brittle integration step, or a customer feedback loop. It is often not the task people complain about most.

This is where AI adoption needs more operational discipline. If you automate away from the bottleneck, you mostly create local efficiency. If you automate toward the bottleneck, you have a chance to change system output.

Start with a simple map. Pick one workflow. Write down the path from trigger to accepted outcome. For each step, record active work time, wait time, owner, review load, and rework. Do not make it fancy. A whiteboard is enough.

Then ask where work accumulates. Not where people are annoyed. Where work actually waits.
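
If the whiteboard turns into a spreadsheet, the same map fits in a few lines of code. The sketch below is purely illustrative: the step names, owners, hours, and rework rates are invented, and all it does is rank steps by where items sit waiting rather than by how loudly anyone complains.

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    owner: str
    active_hours: float  # hands-on work time per item
    wait_hours: float    # time an item sits in queue before this step
    rework_rate: float   # fraction of items sent back after this step

# Hypothetical map of one workflow, trigger to accepted outcome.
workflow = [
    Step("intake",         "ops",     active_hours=0.5, wait_hours=4.0,  rework_rate=0.10),
    Step("draft",          "analyst", active_hours=6.0, wait_hours=8.0,  rework_rate=0.25),
    Step("legal review",   "legal",   active_hours=1.0, wait_hours=72.0, rework_rate=0.15),
    Step("final approval", "vp",      active_hours=0.5, wait_hours=40.0, rework_rate=0.05),
]

# Rank steps by where work actually waits, not by where people are annoyed.
for step in sorted(workflow, key=lambda s: s.wait_hours, reverse=True):
    ratio = step.wait_hours / max(step.active_hours, 0.1)
    print(f"{step.name:<15} owner={step.owner:<8} "
          f"wait={step.wait_hours:>5.1f}h active={step.active_hours:>4.1f}h "
          f"wait:active={ratio:>5.1f} rework={step.rework_rate:.0%}")
```

Even a toy version like this makes the pattern hard to ignore: the step with the worst wait-to-active ratio is rarely the step people talk about.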

The answer can be uncomfortable. A team may discover that the bottleneck is a senior leader who approves too many things. Or a product manager who becomes the only source of clarification. Or a legal queue that gets involved too late. Or a QA process that catches predictable issues after too much work has already been done. Or an operating review where decisions are deferred because the packet lacks evidence.

Once the bottleneck is visible, AI use cases become easier to rank.

If review is the bottleneck, do not begin by generating more artifacts that need review. Use AI to improve pre-review quality, detect common issues, assemble evidence, and narrow the reviewer’s attention to exceptions.

If decisions are the bottleneck, do not generate more analysis. Use AI to prepare decision packets with options, tradeoffs, evidence, and a clear decision ask.

If intake is the bottleneck, do not optimize downstream execution first. Use AI to classify, de-duplicate, and reject bad inputs earlier.

If rework is the bottleneck, do not accelerate the first draft. Use AI to catch ambiguity, compare against standards, and force missing context to the surface before work starts.

If customer learning is the bottleneck, do not celebrate faster internal documents. Use AI to compress research synthesis, spot patterns in feedback, and turn evidence into roadmap or operating decisions.

The point is not that AI must touch only the bottleneck. Teams can still use AI broadly for convenience and craft. But leadership attention and ROI claims should focus on constraints.

A practical prioritization screen helps:

  1. What is the system outcome?
  2. What currently limits that outcome?
  3. Does this AI use case reduce that limit?
  4. What new limit will appear if it works?
  5. How will we measure the change?

The fifth question is the one most pilots skip.

Define measurement before the intervention. If the bottleneck is review, measure queue length, review hours per accepted item, first-pass acceptance, defects that escaped review, and elapsed time between ready-for-review and acceptance. If the bottleneck is decisions, measure decision latency, revisits, packet quality, and the lag between evidence becoming available and the decision being made. If the bottleneck is intake, measure bad inputs rejected, duplicate work prevented, and cycle time between request and accepted scope.
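
For the review case, most of these measures fall out of timestamps a tracker already records. The sketch below is a hypothetical baseline calculation; the field names and numbers are invented, and the point is only that each metric has a concrete definition before the pilot starts.

```python
from datetime import datetime

# Hypothetical review log: one record per accepted item. The field names are
# assumptions; substitute whatever your tracker actually exports.
items = [
    {"ready": datetime(2024, 3, 1, 9),  "accepted": datetime(2024, 3, 4, 17),
     "review_hours": 2.0, "review_passes": 1, "escaped_defects": 0},
    {"ready": datetime(2024, 3, 2, 10), "accepted": datetime(2024, 3, 8, 12),
     "review_hours": 5.5, "review_passes": 3, "escaped_defects": 1},
    {"ready": datetime(2024, 3, 3, 14), "accepted": datetime(2024, 3, 5, 9),
     "review_hours": 1.0, "review_passes": 1, "escaped_defects": 0},
]

accepted = len(items)
review_hours_per_item = sum(i["review_hours"] for i in items) / accepted
first_pass_acceptance = sum(1 for i in items if i["review_passes"] == 1) / accepted
escaped_defects = sum(i["escaped_defects"] for i in items)
avg_elapsed_days = sum(
    (i["accepted"] - i["ready"]).total_seconds() / 86400 for i in items
) / accepted

print(f"review hours per accepted item: {review_hours_per_item:.1f}")
print(f"first-pass acceptance:          {first_pass_acceptance:.0%}")
print(f"defects that escaped review:    {escaped_defects}")
print(f"ready-to-accepted, avg days:    {avg_elapsed_days:.1f}")
```

Run it once before the intervention, once after, and the ROI conversation becomes a comparison rather than an argument.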

Now the AI program has a target.

This discipline also protects teams from blaming AI for the wrong problem. If AI speeds a constrained step and the system still does not improve, the bottleneck may have moved. That is progress, if you notice it. If AI was aimed at a non-constraint and nothing changes, that is not an AI failure. It is a targeting failure.

The easiest demo asks, “Can AI do this?”

The operator’s question is better: “If AI improves this, what system constraint changes?”

If you cannot answer, you are probably automating noise.


This is part 4 of 10 in From Productivity to Throughput.