Operating reviews — the periodic deep-dives into how the organization is actually running — are where most leadership teams fail to see what they need to see.
The problem is almost always information architecture: the metrics reported up are the metrics that are easy to produce, not the metrics that reveal whether the system is healthy. The operating review becomes a confirmation of the leadership team's existing beliefs, not a genuine diagnostic.
This is structurally almost impossible to avoid without deliberate effort. The information that would reveal the problems is usually uncomfortable, politically sensitive, or just harder to gather than the metrics that are already available.
Why Most Reviews Miss These Questions
Most operating reviews don't get to these questions because they aren't designed to:
They're designed around available data, not necessary data. The review covers what's in the dashboards because that's what's easy to present. The questions above require looking at processes and behaviors, not just metrics. That requires different conversations, not just different charts.
They're too focused on the operational and not enough on the systemic. A lot of operating reviews are essentially project management reviews: are things on track, what's at risk, what's blocked. That's useful at the project level. It doesn't diagnose whether the operating system is healthy. A team can deliver on every project and still have a broken system that will eventually fail to deliver.
They don't create the conditions for honest reporting. The most valuable signal in an operating review is usually bad news: a team that is consistently overcommitted, a decision process that is creating bottlenecks, an information flow that is systematically broken. This news doesn't surface in reviews where the people delivering it have an incentive to make things look better than they are.
Running the Review So the Signal Actually Emerges
The format of the review matters as much as the questions. Specifically:
Don't lead with metrics. If you lead with the dashboard, the review becomes a metrics review. Lead with the questions above and use the metrics as evidence, not as the frame.
Bring only metrics that can change a decision. A good operating review does not need every KPI in the business. It needs the small set of measures that reveal system health or force a tradeoff. If a metric is interesting but would not change a decision, it probably belongs in a written update, not in the review.
A bad metric pack says: revenue, active users, uptime, support tickets, roadmap status — all green, all familiar, all presented the same way every month. A useful diagnostic pack says: enterprise deals are slipping because billing exceptions take fourteen days to approve; support tickets are down overall but severity-one escalations doubled in one segment; roadmap status is green only because three commitments were quietly removed mid-cycle. The second pack is less comfortable and much more useful.
Create permission to report bad news. The review chair should explicitly invite it: "What is something we should know that we're not seeing?" If the organization has a culture where bad news is punished, the review will produce nothing useful regardless of its format.
Follow up on the follow-through from the last review. Operating reviews are themselves a cadence. If the commitments made in the last review weren't kept, say so explicitly. An operating review that doesn't track its own follow-through is management theater.
Keep it diagnostic, not solutions-oriented. The operating review is for understanding the system, not for solving all the problems it surfaces. Resist the urge to turn every diagnosis into a task force. Name the problem, understand it, and decide whether and how to act on it — separately from the review itself.
