The promise is compelling: AI compresses the mechanical work of coding, prototyping, writing, and designing. Teams that integrate it well can produce more viable options in the same time. The cost of a first draft goes down. The speed of iteration goes up.
This is substantially true. And it creates a trap that's easy to fall into: the belief that AI has fixed the underlying constraint, when in practice it has relocated it.
Knowing which constraints AI has actually fixed and which it has merely relocated is the difference between a team that AI makes genuinely faster and a team that AI makes faster at building the wrong things.
The Constraint Relocation Problem
Here's what AI doesn't change: the judgment of what's worth building, the quality of the design, the clarity of the spec, the coordination cost between functions, and the management of dependencies.
When implementation gets cheaper, these upstream activities become relatively more expensive. Not in absolute dollars — in proportion to the total cost of shipping.
This is the constraint relocation problem.
A team that was constrained by engineering implementation speed finds that AI has removed that constraint. Now they're constrained by something else: the design review that hasn't happened, the product decision that hasn't been made, the scope that keeps expanding because it feels cheap to add more.
AI doesn't make these problems worse. But it reveals them. When the thing that was holding you back is removed, the next thing in line becomes the bottleneck. If you don't know what that thing is, AI just makes you wait faster.
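To see why, put toy numbers on it. In the minimal sketch below, a pipeline's throughput is the capacity of its slowest stage; the stage names and weekly capacities are hypothetical, chosen only to illustrate the mechanic.

```python
# Toy model of a delivery pipeline: total throughput is capped by the
# slowest stage. Stage names and capacities are hypothetical.

def pipeline_throughput(stages: dict[str, float]) -> tuple[str, float]:
    """Return the bottleneck stage and its capacity (units of work per week)."""
    bottleneck = min(stages, key=stages.get)
    return bottleneck, stages[bottleneck]

before = {"decide": 6, "design": 5, "implement": 3, "review": 4}
after = dict(before, implement=12)  # AI makes implementation 4x faster

print(pipeline_throughput(before))  # ('implement', 3): implementation-bound
print(pipeline_throughput(after))   # ('review', 4):    constraint relocated
```

In this toy model, a 4x speedup on one stage buys a 33% gain overall. The constraint didn't disappear; it moved to the next-slowest stage.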
The Review Bottleneck
One of the most predictable constraint relocations in AI-augmented teams: review becomes the bottleneck.
Human review — of AI-generated code, AI-generated designs, AI-generated specs — is still slow. It's bounded by the attention and expertise of the reviewers. And AI, if anything, increases the demand for thoughtful review, because the volume of output is higher and the proportion that needs scrutiny is also higher.
Example: AI lets a team generate six implementation variants and open three PRs where they used to open one. Engineering review now has to compare approaches, design has to confirm the UX states still make sense, product has to choose which tradeoff matches the customer promise, legal may need to review generated copy, and support needs to know which behavior is launching. Generation sped up. The review surface exploded.
A team that adopts AI for generation without investing in review capacity and review quality will find that review backs up. The implementation is fast. The approval path is slow. The constraint has moved from building to evaluating.
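The arithmetic is unforgiving. A rough sketch, using hypothetical numbers in the spirit of the example above: if variants triple the PR count and each PR now crosses four reviewing functions instead of one, review demand can outrun a fixed review budget several times over.

```python
# Back-of-envelope review load. All numbers are hypothetical, chosen to
# mirror the variant example above.

hours_per_review = 2.0
review_capacity_hours_per_week = 40.0   # total review budget across the team

prs_before = 5                          # features per week, one PR each
reviews_before = prs_before * 1         # engineering review only

prs_after = prs_before * 3              # three variant PRs per feature
functions_after = 4                     # eng, design, product, legal/support
reviews_after = prs_after * functions_after

print(reviews_before * hours_per_review)  # 10.0 hours: fits easily
print(reviews_after * hours_per_review)   # 120.0 hours: 3x capacity; the queue grows
```

Generation got cheaper; evaluation didn't. Unless review capacity or review scope changes, the backlog is a matter of arithmetic, not discipline.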
The Net Effect
AI makes the teams that already had good systems faster. The ones with scope discipline ship more of the right things, sooner. The ones with clear ownership and decision rights get faster implementation without losing that clarity. The ones with good review processes integrate AI without losing quality.
For teams that didn't have those things: AI makes the problems they had more visible and doesn't fix them. Faster implementation on top of unclear direction, unstable scope, and slow decisions is not an improvement — it's a more expensive version of the same problem.
Build the system. Use AI to accelerate the parts that are worth accelerating. Don't mistake the accelerator for the engine.
