Give sales a commission and they'll close deals that shouldn't close. Give engineers velocity targets and they'll ship features that shouldn't ship. Give managers headcount targets and they'll hire people they don't need.

The failure mode isn't that people are greedy or short-sighted. It's that incentives create a local optimum that's invisible from inside the system. The person optimizing for the metric has every reason to believe they're optimizing for the goal. The gap between the two is usually invisible to them, because the incentive keeps telling them: this is what success looks like.

This is the oldest problem in organizational design, and it's still the most consistently underestimated.

Note: This post covers incentives — what a system rewards and why that shapes behavior. Feedback loops (the mechanism through which system outputs become inputs) are covered in post 03. Bottlenecks (what constrains a system's throughput) are covered in post 05. Local optima — what happens when a component optimizes successfully for a metric while the system degrades — are covered in post 08.

How to Trace Incentive Misalignment

Before you assume an incentive is working as intended, trace it:

Who gets rewarded for what? Not stated goals — actual rewards. Look at compensation, promotions, recognition. What behavior is actually being reinforced?

What does the rewarded behavior produce at the system level? Sales closing deals — at what margin, with what retention, at what customer satisfaction? Engineers shipping features — at what quality, with what maintenance cost, with what user outcome? Managers growing headcount — at what productivity, with what team health, with what organizational velocity?

What's being measured versus what's actually being optimized? Most incentive problems are mismatches between what the system says it wants and what it's actually paying for.
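The gap between what's measured and what's optimized can be made concrete with a toy model (a sketch only; the functions, thresholds, and numbers here are hypothetical illustrations, not anything from a real compensation system). An agent picks the shipping rate that maximizes the rewarded metric, while the system-level goal quietly degrades past a sustainable threshold:

```python
# Toy model of incentive misalignment. All numbers are made up for
# illustration: the point is the shape, not the values.

def rewarded_metric(ship_rate: float) -> float:
    """What the incentive pays for: features shipped."""
    return ship_rate

def system_goal(ship_rate: float) -> float:
    """What the org actually needs: retention. Past a sustainable
    rate, each extra feature costs quality and erodes retention."""
    sustainable = 5.0
    overload = max(0.0, ship_rate - sustainable)
    return 0.9 - 0.1 * overload

rates = [float(r) for r in range(1, 11)]

# The agent's local optimum: ship as much as possible.
best_for_metric = max(rates, key=rewarded_metric)

# Retention at that rate is far below what a sustainable rate delivers.
goal_at_metric_optimum = system_goal(best_for_metric)
goal_at_sustainable = system_goal(5.0)
```

The agent lands on the highest shipping rate, because that is exactly what the reward function says success looks like, and the retention cost never appears in anything the agent is measured on.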

The Concrete Failure Mode

A product team ships more features than in any prior quarter and gets celebrated. NPS is in free fall. Retention is declining. The team is hitting the metric. The system is failing.

In the post-mortem, the team says: we were rewarded for shipping features, so we shipped features. We didn't know retention was our responsibility. Nobody said retention was part of the goal. Nobody made the connection between feature velocity and retention. And they're right: the structure never made the connection visible. The incentive was attached to the output that was easiest to count.

The fix isn't to blame the team. The fix is to change what's being counted, and to make the team responsible for the downstream effects of their output: not as a punitive measure, but as a way of closing the loop between action and consequence.