Bad decisions often begin as bad definitions.
The meeting looks normal. The dashboard looks polished. The data team has done its work. Everyone is arguing from numbers. But the argument is already contaminated because the underlying words do not mean the same thing to everyone in the room.
Customer. Active. Churn. Product. Revenue. Margin. Owner. Qualified. Strategic. Done.
These words feel obvious until they have to drive a decision.
Take “customer.” Sales may mean the account they are trying to expand. Finance may mean the legal entity that receives invoices. Product may mean the workspace where users collaborate. Support may mean the organization attached to incoming tickets. Data may mean the stitched customer dimension in the warehouse. Legal may mean the contracting party. AI may retrieve all of these and blend them into one confident answer.
Now ask a simple question: how many customers do we have?
The answer depends on the definition.
If the company does not know which definition applies to which decision, the debate becomes political. People start defending the number that supports their local reality. Sales wants a customer count that reflects commercial relationships. Finance wants auditability. Product wants usage truth. Support wants service obligations. Leadership wants a clean trend line.
The problem is not that one group is right and the others are wrong. The problem is that the company is using one word for several related objects.
Bad definitions create bad decisions because they hide tradeoffs.
Consider “active user.” A login-based definition is easy to measure but often weak. A core-action definition is better but requires product judgment. A billing-based definition may matter for revenue but miss adoption risk. A support-based definition may show engagement only when something is broken.
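To make the divergence concrete, here is a minimal sketch in Python. The event schema, names, and dates are all hypothetical; the point is only that each "active user" definition is a different predicate over the same event log, and the counts need not agree.

```python
from datetime import date

# Hypothetical event log: (user_id, event_type, event_date).
# Schema and names are illustrative, not from any real system.
events = [
    ("u1", "login",        date(2024, 1, 3)),
    ("u1", "login",        date(2024, 1, 20)),
    ("u2", "login",        date(2024, 1, 5)),
    ("u2", "core_action",  date(2024, 1, 5)),   # e.g. created a report
    ("u3", "invoice_paid", date(2024, 1, 10)),  # billed, never logged in
]

window_start = date(2024, 1, 1)

def active_users(events, predicate):
    """Each definition is just a different predicate over the same events."""
    return {uid for uid, etype, d in events
            if d >= window_start and predicate(etype)}

login_based   = active_users(events, lambda t: t == "login")
action_based  = active_users(events, lambda t: t == "core_action")
billing_based = active_users(events, lambda t: t == "invoice_paid")

print(len(login_based), len(action_based), len(billing_based))  # 2 1 1
```

Same data, three defensible answers. Which one drives a decision is a product judgment, not a query detail.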
If the company uses a loose active-user metric to make product investment decisions, it may overfund features people open but do not value. If it uses the same metric to trigger customer success workflows, customer success managers may chase accounts that look inactive but are healthy under a different usage pattern. If AI uses the metric to generate account briefs, it may present noise as insight.
The definition becomes an operating choice.
Or take “churn.” Logo churn, revenue churn, product churn, seat churn, workspace churn, voluntary churn, involuntary churn, downgrade, contraction, non-renewal, cancellation, and payment failure are not interchangeable. Each points to a different operating response.
If involuntary payment failures are counted the same way as product-driven cancellations, product gets blamed for a billing operations problem. If a customer cancels one product line but expands another, the label “churned” may be technically true and strategically misleading. If churn is measured at the parent account level, regional product failures can disappear inside a global relationship.
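A routing sketch makes the point. The record shape and field names below are hypothetical; what matters is that each non-renewal maps to a different operating response instead of one undifferentiated "churned" flag.

```python
from dataclasses import dataclass

# Hypothetical renewal record; field names are illustrative.
@dataclass
class RenewalEvent:
    account: str
    renewed: bool
    payment_failed: bool
    cancelled_lines: int
    expanded_lines: int

def classify(e: RenewalEvent) -> str:
    """Route each outcome to a distinct operating response
    instead of a single 'churned' label."""
    if e.renewed:
        return "retained"
    if e.payment_failed:
        return "involuntary_churn"   # a billing-ops problem, not product
    if e.expanded_lines > 0 and e.cancelled_lines > 0:
        return "mixed_outcome"       # contraction plus expansion, not logo churn
    return "voluntary_churn"         # a product or value problem

print(classify(RenewalEvent("acme", False, True, 1, 0)))  # involuntary_churn
```

The branch order itself is a business decision: whoever decides that payment failure trumps cancellation owns part of the churn definition.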
Definitions shape accountability.
This is why metric debates are often really ontology debates. People think they are arguing about a formula. They are arguing about what exists, what counts, what relationships matter, and who owns the outcome.
The same pattern appears in finance.
What is revenue? Bookings? ARR? GAAP revenue? Invoiced revenue? Collected cash? Committed contract value? Expansion pipeline? Each number is useful for a different decision. Confuse them and the company starts making mistakes: hiring against bookings that will not convert to cash, celebrating ARR that finance cannot recognize, promising margin improvement while implementation costs are hidden in another system.
Finance teams usually understand this better than most operators because accounting forces definitions. But even finance can diverge from the rest of the operating system. If the ERP’s customer hierarchy does not map cleanly to CRM accounts, product workspaces, and support organizations, the company can be financially precise and operationally confused.
Bad definitions are not only a reporting issue. They break workflows.
A workflow automation needs conditions. Who qualifies for enterprise onboarding? Which contracts require legal review? Which refunds need approval? Which customers get incident notifications? Which support tickets count against SLA? Which product events trigger a sales signal? Which accounts enter the renewal-risk motion?
Every automation contains definitions. If the definitions are wrong, the automation institutionalizes the wrong behavior.
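One way to see this: strip any onboarding automation down to its condition and the definition falls out. The sketch below is hypothetical, with an invented threshold and field names, but every clause in it is an operating choice someone should explicitly own.

```python
# Hypothetical automation rule; the threshold and fields are
# illustrative assumptions, not a recommended policy.
ENTERPRISE_SEAT_MINIMUM = 250  # who decided this, and when?

def qualifies_for_enterprise_onboarding(account: dict) -> bool:
    """The automation *is* a definition: each clause below
    encodes a choice about what 'enterprise' means here."""
    return (
        account["seats"] >= ENTERPRISE_SEAT_MINIMUM
        and account["contract_type"] == "annual"
        and not account["is_trial"]
    )

print(qualifies_for_enterprise_onboarding(
    {"seats": 300, "contract_type": "annual", "is_trial": False}))  # True
```

If the seat minimum is wrong, the automation does not merely report the wrong number. It onboards the wrong accounts, at scale, every day.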
This is where spreadsheets become dangerous. They often exist because the official systems cannot express the operating definition people actually need. A finance spreadsheet maps products differently for planning. A RevOps spreadsheet patches account hierarchies. A CS spreadsheet tracks “real health” because the health score is distrusted. A product spreadsheet maps features to packaging because entitlement logic is messy.
The spreadsheet is not the disease. It is evidence that the formal ontology is incomplete.
The fix is not to ban spreadsheets. The fix is to inspect what they know that the official systems do not.
Operators should build a habit: whenever a recurring debate happens around a number, stop and ask which definition is actually being debated.
For each important term, write the operating definition in plain language:
- What does it include?
- What does it exclude?
- Which system calculates or stores it?
- Who owns the definition?
- Who can change it?
- Which decisions depend on it?
- Which workflows use it?
- Which exceptions are allowed?
- What happens when systems disagree?
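The checklist above can live as a lightweight, version-controlled record rather than a dictionary project. A minimal sketch follows; every field value is hypothetical and only illustrates the shape of the answers, not a recommended definition.

```python
from dataclasses import dataclass, field

@dataclass
class OperatingDefinition:
    term: str
    includes: str
    excludes: str
    system_of_record: str
    owner: str                      # who can change the definition
    decisions: list = field(default_factory=list)
    workflows: list = field(default_factory=list)
    exceptions: str = ""
    conflict_rule: str = ""         # what happens when systems disagree

# All values below are invented for illustration.
active_user = OperatingDefinition(
    term="active_user",
    includes="performed a core action in the last 28 days",
    excludes="login-only sessions; internal test accounts",
    system_of_record="product events table in the warehouse",
    owner="VP Product",
    decisions=["feature investment", "CS outreach triggers"],
    workflows=["renewal-risk motion", "account briefs"],
    exceptions="sandbox workspaces",
    conflict_rule="warehouse definition wins over the CRM field",
)

print(active_user.owner)  # VP Product
```

A record like this is boring to read and cheap to maintain, which is exactly the point: the next metric debate starts from a written definition instead of a blank whiteboard.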
This does not need to become a grand dictionary project. Start with the terms that create rework, escalations, dashboard fights, customer confusion, compensation disputes, planning errors, or AI hallucinations.
One warning: do not let the data team become the sole owner of definitions.
Data teams can implement definitions. They can test them, model them, document them, and surface inconsistencies. But the definition of a customer, a product, a renewal, a qualified opportunity, or an active user is a business decision. It belongs to the operating leaders who live with the consequences.
The data team should not be forced to adjudicate the business model from Jira tickets.
Good definitions create decision speed. They reduce meeting fog. They make dashboards boring in the best way. They help AI retrieve the right context. They let workflows run without constant exception handling. They make accountability visible.
Bad definitions do the opposite. They make every number negotiable. They make teams distrust systems. They create local workarounds. They turn AI into a summarizer of contradictions.
Before blaming the dashboard, inspect the nouns.
The company’s decisions will only be as clear as the definitions underneath them.
