Vanity metrics are the comfortable lies teams tell themselves. "We had 50,000 users this month." "Our NPS is up 8 points." "Page views increased 40% year-over-year." These numbers feel good. They look great in a board deck. They tell you almost nothing.
The test of a useful metric is simple: does knowing this number change what you do?
The Most Common Vanity Metrics
Total registered users — tells you nothing about whether those users are alive, active, or valuable. A product with 500,000 registered users and 2,000 monthly actives has 498,000 people who signed up and then found a reason to leave.
Page views / session duration — these measure attention, not value. A user who spends 10 minutes on a page reading something unhelpful isn't a success. A user who gets what they need in 90 seconds and leaves is.
App downloads — the installation is step one of a hundred. What matters is what happens after.
Followers / likes — these measure reach, not impact. A tweet with 500,000 impressions and zero engagement is a billboard in the desert.
Build the Metric Tree Before the Dashboard
A useful metric usually sits inside a chain:
Outcome → drivers → leading indicators → guardrails → thresholds
Example:
- Outcome: net revenue retention
- Drivers: adoption, support quality, product fit, executive sponsorship
- Leading indicators: onboarding completion, active seats, unresolved critical tickets, QBR attendance
- Guardrails: discounting, support load, implementation failure, refund requests
- Thresholds: what counts as healthy, watch, and intervention
This does two things. First, it prevents random dashboard sprawl. Second, it forces the team to state the causal theory behind the numbers. You may be wrong. Good. Now you have something to test.
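The chain above can be sketched as a small data structure. Everything here is illustrative: the metric names, the threshold values, and the two-tier cutoffs are assumptions chosen to show the shape of a metric tree, not recommended numbers.

```python
# A minimal sketch of a metric tree with thresholds. All metric names
# and cutoff values below are hypothetical placeholders.

METRIC_TREE = {
    "outcome": "net_revenue_retention",
    "drivers": [
        "adoption", "support_quality", "product_fit", "executive_sponsorship",
    ],
    "leading_indicators": {
        # metric -> (healthy floor, watch floor); below watch = intervention
        "onboarding_completion_rate": (0.80, 0.60),
        "active_seat_ratio": (0.70, 0.50),
        "qbr_attendance_rate": (0.75, 0.50),
    },
}

def status(metric: str, value: float) -> str:
    """Classify a leading indicator as healthy, watch, or intervention."""
    healthy, watch = METRIC_TREE["leading_indicators"][metric]
    if value >= healthy:
        return "healthy"
    if value >= watch:
        return "watch"
    return "intervention"

print(status("onboarding_completion_rate", 0.85))  # healthy
print(status("active_seat_ratio", 0.55))           # watch
```

Writing the tree down this way is what forces the causal theory into the open: every indicator in the dict is an explicit claim that it drives the outcome at the top.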
A useful companion to the test from the top: does knowing this number change what you do? If yes, ask the next question: what would you do, at what threshold, and who owns the action? That answer belongs in the dashboard more than the number itself.
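One way to make that answer live next to the number is a small "action card" record. This is a hypothetical structure, not an established pattern; the example metric, threshold wording, and owner are invented for illustration.

```python
# Hypothetical "metric action card": the threshold, the action, and a
# named owner stored alongside the metric itself.
from dataclasses import dataclass

@dataclass
class MetricAction:
    metric: str
    threshold: str   # the condition that triggers action
    action: str      # what the team actually does
    owner: str       # a named role or person, not "the team"

card = MetricAction(
    metric="unresolved_critical_tickets",
    threshold="more than 5 open for over 48 hours",
    action="escalate to the support lead and freeze non-critical work",
    owner="Head of Support",
)

print(f"{card.metric}: if {card.threshold}, {card.owner} -> {card.action}")
```

A dashboard tile rendered from a record like this answers "so what?" before anyone has to ask it.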
