End-of-quarter and end-of-year reviews are usually backward-looking report-outs that tell you what already happened. That's marginally useful for record-keeping and useless for running the business.

The real job of a quarterly review is calibration: Did our assumptions hold? Did we allocate resources correctly in retrospect? Did our operating system surface problems early enough to act on them?

Most organizations separate accountability reporting from the learning review and then skip the learning review entirely, because accountability reporting is easier to produce and feels more serious. This is a mistake that compounds quarter over quarter.

How to Run a Learning Review That Works

A learning review has a different format from an accountability report. It should answer four questions:

What did we expect to happen that didn't? For each significant miss: what were the assumptions? What information did we have or not have at the time? Was the failure in the planning process (wrong assumptions) or in the execution process (right assumptions, bad follow-through)? These are different problems with different fixes.

What happened that we didn't expect? Surprises — good and bad — are usually the most valuable learning material. A feature that landed unexpectedly well. A technical approach that created more problems than it solved. A team dynamic that emerged under pressure. These are patterns that won't show up in any metrics dashboard but that will determine the next quarter's outcomes if they're not named.

Did our operating system surface problems early enough? This is the meta-question about the quarter's execution. Were the weekly reviews producing real signal? Were decisions being made quickly enough? Were blockers escalating appropriately? Or was the team discovering problems in the quarterly review that should have been visible three months earlier?

What should we do differently next quarter based on this? The learning review is only worth running if it produces changed behavior. If the answers to the first three questions don't change any commitments in the next cycle, the learning review was a debrief, not a review.

Annual Reviews and the Accumulation Problem

Annual reviews face a compounding version of the quarterly problem: twelve months of history is too much to process in one session, and the temptation to produce a comprehensive report is almost irresistible.

The better approach is to run quarterly learning reviews throughout the year and use the annual review as an aggregation of those, not a standalone exercise. The annual review should answer: what did we learn this year that changes how we should approach next year? Not a comprehensive report — a focused synthesis.

The question to anchor the annual review: if we could only change three things going into next year based on what we learned this year, what would they be?

That's the output. Everything else is documentation.