Metrics need narratives because metrics do not explain themselves.
A number can tell you churn increased. It cannot tell you whether churn increased because onboarding failed, implementation capacity collapsed, a bad-fit segment grew, a competitor improved, pricing changed, a product promise broke, or success managers stopped escalating risk early enough.
Without narrative, the room fills in the blanks with politics and prior beliefs. Sales blames product. Product blames customer fit. CS blames implementation. Finance blames discounts. Everyone can be partly right and still leave without an operating decision.
Narrative is not storytelling in the cosmetic sense. It is a causal claim. It says: here is what we think happened, here is the evidence, here is what would disconfirm it, and here is the decision implied if the claim is true.
The narrative should be tight enough to argue with. 'Pipeline is soft because macro conditions remain challenging' is not a review narrative. 'Pipeline from security buyers is down 22% because our two highest-intent channels saturated and the new compliance narrative is underperforming in mid-market; enterprise expansion remains healthy' is something the room can test.
Metrics also discipline narratives. A persuasive story that does not survive the data is just lobbying. The best operating reviews pair the two: the metric identifies the variance; the narrative proposes the cause; the room challenges both; the decision follows.
A good review packet separates fact, interpretation, and ask. Fact: renewal risk increased in cohort X. Interpretation: the issue is slow time-to-value after implementation handoff. Ask: approve two temporary implementation slots and change onboarding entry criteria for the next cohort. That is reviewable.
The standard is not perfect certainty. Operators rarely get that. The standard is decision-grade clarity: enough evidence, a named assumption, a clear tradeoff, and a next observation that will tell us if we were wrong.
