Every executive is buried in updates. Status reports, weekly recaps, project dashboards, check-in emails, Slack threads. And almost all of them share the same failure: they describe activity instead of meaning.
What changed. What it means. What decision is needed.
That is the entire job of an update. Three things. Most updates deliver none of them. At executive scale, the update is not a courtesy note. It is part of the operating system: the mechanism that turns dispersed facts into shared judgment, decisions, and follow-through.
---
Why Updates Fail
The standard update failure looks like this: a chronological list of what the team worked on this week, with green checkmarks on completed items and vague references to "ongoing" work. It tells the reader what happened. It tells them nothing about why it matters.
The problem is partly incentive structure. Updates heavy on activity are easy to write — you just list things. Updates that require judgment about what things mean are harder and feel riskier. Saying "we're on track" is easier than explaining why a specific technical decision will likely affect the timeline in ways that won't be visible for three more weeks.
The other problem is that most organizations have trained executives to expect activity reports. "What did you do?" is the default question. It's a weak question — it measures effort, not impact — but it's the one most likely to be asked, so it's the one people answer.
---
What Good Looks Like
A useful update answers three questions and includes enough operating signal to make action possible:
What changed? Not a log of activities: a precise statement of what is different since the last update. New information, new constraints, new risks, new opportunities. This is the factual basis.
What does it mean? This is the judgment call. If the new information changes nothing, say so and why. If it creates a risk, say what kind. If it opens an option that wasn't there before, describe it. This is what separates an update from a log.
What decision is needed? Not "I'll keep you posted." A specific decision, owned by a specific person, with a specific time frame. If no decision is needed, say that explicitly — and say why.
Then add the operating basics:
- Metrics: which number moved, against what baseline?
- Narrative: why did it move?
- Risks: what could break next?
- Decisions needed: who decides, by when?
- Owner/date: who owns the next action, and when will it be reviewed?
This is the difference between status and signal.
---
Before and After
Bad update:
> Enterprise onboarding is progressing. We completed enablement sessions, continued API work, and are coordinating with Support on open issues. We remain focused on launch readiness.
Better update:
> Enterprise launch risk moved from yellow to red. Two of five pilot customers cannot complete onboarding because SSO mapping fails for subsidiaries. Engineering can fix it by delaying analytics work one sprint. Decision needed by Thursday: protect launch date and accept manual onboarding for two customers, or delay launch one week and preserve trust. Owner: Priya. Review: Friday WBR.
The second update is not longer because it is more bureaucratic. It is longer because it says what matters.
---
The Test
Read your last five updates — or ask your team for their last five. For each one, try to answer: what would have to be true for the executive reading this to need to act? If you can't answer that question, neither can the reader.
The better test: after reading an update, does the executive know more about what actually matters? Not what happened — what matters. These are not the same thing.
---
If Your Updates Are Bad, Your Operating System Is Bad
Bad updates are often a leadership design failure, not a frontline writing failure. People send activity logs when the operating cadence asks for activity logs. They hide risk when leaders punish risk. They omit decisions when no one knows where decisions are actually made.
If updates are consistently weak, inspect the system:
- Are the metrics clear enough that teams know what moved?
- Is there a regular operating review where risks and tradeoffs are resolved?
- Do decision rights exist, or is every issue informally escalated?
- Are owners and dates tracked, or does follow-up depend on memory?
- Does bad news get met with curiosity or punishment?
The update format will not save a broken cadence. It will reveal one.
---
Building the Update Culture
The executive sets the standard. If you receive updates that are all activity and no meaning, say so, and show what good looks like. Rewrite one of theirs. Show them the difference between "we had three customer calls this week" and "we spoke with three customers; two are likely to churn because of the API change shipping next sprint; we should discuss whether a retention offer is warranted."
The second version takes more time to write. It also requires the writer to actually think about what is happening, not just report that it happened. That thinking is the work.
A manager's output is the output of their organization, not their personal activity. The update should reflect the organization's state — not the writer's busyness.
A simple operating review format:
- Scorecard: the few metrics that matter.
- Variance: what changed and why.
- Risks: what could break before the next review.
- Decisions: what needs executive judgment now.
- Commitments: owner, date, expected result.
When updates shift from activity logs to situational assessments, the executive layer becomes genuinely useful: a place where judgment is applied to real information, not a ceremonial checkpoint.
