The business ontology audit is not a philosophical exercise.
It is a way to find where the company’s systems disagree about reality before those disagreements distort decisions, workflows, metrics, customer experience, finance, or AI behavior.
Use a simple scoring system:
- 0 — unclear or absent
- 1 — partially defined, inconsistently used
- 2 — defined, owned, embedded, and evidenced
The score is less important than the gaps it exposes.
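If the audit lives in a tool rather than on a whiteboard, the rubric is small enough to encode directly. A minimal sketch in Python; the `Maturity` names and the `gaps` helper are illustrative, not a standard:

```python
from enum import IntEnum

class Maturity(IntEnum):
    ABSENT = 0    # unclear or absent
    PARTIAL = 1   # partially defined, inconsistently used
    EMBEDDED = 2  # defined, owned, embedded, and evidenced

def gaps(scores: dict[str, Maturity]) -> list[str]:
    """Items below 2, worst first: the gaps matter more than the score."""
    return sorted((name for name, s in scores.items() if s < Maturity.EMBEDDED),
                  key=lambda name: scores[name])

print(gaps({"customer": Maturity.PARTIAL,
            "invoice": Maturity.EMBEDDED,
            "entitlement": Maturity.ABSENT}))  # ['entitlement', 'customer']
```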
1. Core objects
List the ten to fifteen core objects of the business. Start with candidates such as customer, account, legal entity, user, workspace, product, SKU, contract, invoice, payment, entitlement, usage event, support issue, employee, team, workflow, obligation, and decision. Adjust for your business model.
For each object, ask:
- What does it mean in plain language?
- Which system creates it?
- Which system is authoritative for which decision?
- What is its unique identifier?
- What are its lifecycle states?
- Who owns it?
- Who can create, edit, merge, archive, or override it?
- Which workflows and metrics depend on it?
Score each object. Any object touching money, customer promises, access rights, compliance, executive metrics, or AI action should not remain at 0.
A useful audit sample is five rows, not fifty: customer, product, contract, invoice, and support issue. If those five cannot be traced across the CRM, the ERP or billing system, product systems, support, the warehouse, workflow approvals, and AI context, the company has found the operating layer that needs attention first.
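One way to keep the five-row sample honest is a record per object that holds the answers to the questions above. A sketch with hypothetical field values; adapt the fields to your own systems:

```python
from dataclasses import dataclass

@dataclass
class ObjectAudit:
    """One row of the core-object audit."""
    name: str
    meaning: str                        # plain-language definition
    created_in: str                     # system that creates it
    authoritative_for: dict[str, str]   # decision -> authoritative system
    identifier: str                     # unique identifier
    lifecycle: list[str]
    owner: str
    score: int                          # 0, 1, or 2

customer = ObjectAudit(
    name="customer",
    meaning="a legal entity we have an active commercial relationship with",
    created_in="CRM",
    authoritative_for={"billing": "ERP", "renewal": "CRM", "usage": "product"},
    identifier="crm_account_id",        # hypothetical key; use your own
    lifecycle=["prospect", "active", "churned"],
    owner="RevOps",
    score=1,  # documented but not embedded everywhere
)
```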
2. Definitions
Pick the twenty terms that create the most operating drag. Typical examples: customer, active user, churn, renewal, expansion, ARR, booking, revenue, margin, strategic account, qualified opportunity, product, entitlement, implementation complete, healthy, escalated, owner, approved, done.
For each definition, ask:
- What is included?
- What is excluded?
- Who owns the definition?
- Where is it documented?
- Where is it implemented?
- Which teams use a different version?
- Which decisions depend on it?
A definition that only exists in someone’s head is a 0. A definition documented but not embedded in systems is usually a 1. A definition that is owned, implemented, tested, and used consistently is a 2.
Push for examples. If “active customer” means paid invoice in finance, active workspace in product, open renewal in CRM, and eligible support organization in the help desk, do not average those into one vague definition. Name the decision each version supports and stop using one label as if it meant one thing.
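Named by decision, the four versions stop colliding. A sketch of the same example; the keys and wording are placeholders:

```python
# One label, four decision-scoped definitions. Never average these.
ACTIVE_CUSTOMER = {
    "revenue":    "has a paid invoice (finance)",
    "engagement": "has an active workspace (product)",
    "renewal":    "has an open renewal (CRM)",
    "support":    "is an eligible support organization (help desk)",
}

def definition_for(decision: str) -> str:
    """Resolve the label by the decision it supports, not by averaging."""
    return ACTIVE_CUSTOMER[decision]  # a KeyError means the decision is unnamed
```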
3. Relationships
Map the relationships that make the business work:
- customer to legal entity;
- account to contract;
- contract to product;
- product to entitlement;
- entitlement to user or workspace;
- invoice to contract;
- usage to account;
- support issue to SLA;
- employee to role;
- role to approval authority;
- workflow to decision;
- obligation to owner.
Ask where those relationships live and whether systems agree.
Look for concrete breaks: an invoice line that cannot be tied to a product family, a usage event that cannot be tied to a contract, a support ticket that cannot find the SLA, a workflow approval that cannot find the employee’s current role, or an AI tool that cannot tell whether a document is policy, draft, or customer-specific exception.
The key test: can a workflow or AI agent safely use this relationship without a human interpreter?
If not, score it low and name the operating risk.
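The no-interpreter test can be run mechanically: for each relationship, check that the key resolves. A minimal sketch over hypothetical invoice records:

```python
def broken_links(records: list[dict], key: str, targets: set[str]) -> list[dict]:
    """Return records whose relationship key is missing or dangles."""
    return [r for r in records if r.get(key) not in targets]

invoices = [{"id": "inv-1", "contract_id": "c-42"},
            {"id": "inv-2", "contract_id": None},    # no contract at all
            {"id": "inv-3", "contract_id": "c-99"}]  # contract nobody knows
contracts = {"c-42"}

print(broken_links(invoices, "contract_id", contracts))
# [{'id': 'inv-2', 'contract_id': None}, {'id': 'inv-3', 'contract_id': 'c-99'}]
# Two dangling links out of three rows: score the relationship low.
```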
4. Metrics
Select the executive and operating metrics that carry real decisions. For each, create a metric contract (a code sketch follows the list):
- decision supported;
- plain-language definition;
- formula;
- objects involved;
- source systems;
- inclusion/exclusion rules;
- time window;
- owner;
- refresh cadence;
- known caveats;
- downstream workflows;
- last changed date.
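Written as a type, the contract doubles as a completeness check. A sketch; the field names follow the list above, with inclusion and exclusion rules split into two fields:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class MetricContract:
    """A metric is only trustworthy when every field below is filled in."""
    name: str
    decision_supported: str
    plain_definition: str
    formula: str
    objects_involved: list[str]
    source_systems: list[str]
    inclusion_rules: str
    exclusion_rules: str
    time_window: str
    owner: str
    refresh_cadence: str
    known_caveats: list[str]
    downstream_workflows: list[str]
    last_changed: date

def incomplete_fields(c: MetricContract) -> list[str]:
    """Fields left empty are the metric's open ontology debt."""
    return [f for f, v in vars(c).items() if v in ("", [], None)]
```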
Metrics with unclear definitions should not drive compensation, planning, automation, or AI recommendations.
If ARR, churn, gross margin, active usage, implementation complete, customer health, forecast coverage, or support SLA attainment cannot pass this contract, the issue is not “reporting quality.” The object model is not stable enough for the operating decision.
5. Sources of truth
For each object and definition, identify the source of truth by decision. Avoid vague slogans such as “single source of truth.”
The ERP may own invoice status. Billing may own subscription plan. Product may own usage. CRM may own opportunity stage. Support may own SLA execution. The warehouse may own reconciled reporting. Workflow tools may own approval status. Contract systems may own obligations.
Ask:
- Where is the source of truth?
- What consumes it?
- What happens when another system disagrees?
- Who resolves conflict?
- How are changes communicated?
A company can have many sources of truth. It cannot have ambiguous truth for the same decision.
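Source of truth by decision is a routing table, not a slogan. A sketch of the ownership above; the system names are placeholders for your own stack:

```python
# decision -> the one system allowed to answer it
SOURCE_OF_TRUTH = {
    "invoice status":       "ERP",
    "subscription plan":    "billing",
    "usage":                "product",
    "opportunity stage":    "CRM",
    "sla execution":        "support",
    "reconciled reporting": "warehouse",
    "approval status":      "workflow",
    "obligations":          "contracts",
}

def authority(decision: str) -> str:
    """Many sources of truth are fine; ambiguous truth for one decision is not."""
    return SOURCE_OF_TRUTH[decision]  # a KeyError here is itself a finding
```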
6. Decision rights and action rights
Ontology includes who can act.
For important workflows, ask:
- Who owns the decision?
- Who provides input?
- Who approves?
- Who can override?
- What evidence is required?
- What is the escalation path?
- What can automation do?
- What can AI do?
- What requires human review?
If approval authority lives in policy docs but not in workflow tools, the system is underdesigned. Test the actual paths: discount approval, refund approval, credit memo, purchase order, entitlement override, contract exception, customer escalation, production access, data export, and AI external send.
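Testing the actual paths is easier when authority is data rather than policy prose. A sketch of a discount-approval check; the roles and thresholds are invented for illustration:

```python
# role -> maximum discount that role may approve (illustrative thresholds)
APPROVAL_LIMITS = {"ae": 0.10, "sales_manager": 0.25, "vp_sales": 0.40}

def can_approve_discount(role: str, discount: float) -> bool:
    """An AI or workflow engine should refuse, not guess, outside its authority."""
    return discount <= APPROVAL_LIMITS.get(role, 0.0)

assert can_approve_discount("sales_manager", 0.15)
assert not can_approve_discount("ae", 0.15)  # escalate instead
```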
7. Freshness and trust
For knowledge and data used in decisions or AI workflows, ask:
- Who owns freshness?
- How often is it reviewed?
- How is stale content marked?
- Can AI use it for action?
- What is the retirement process?
- Are drafts and approved sources distinguishable?
Freshness is part of quality. A stale truth can be more dangerous than no truth.
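Freshness can be enforced the same way. A sketch; the 90-day window and the status labels are assumptions, not a standard:

```python
from datetime import date, timedelta

MAX_AGE = timedelta(days=90)  # hypothetical review window

def usable_by_ai(status: str, last_reviewed: date, today: date | None = None) -> bool:
    """Only approved, recently reviewed content should feed AI action."""
    today = today or date.today()
    return status == "approved" and (today - last_reviewed) <= MAX_AGE

print(usable_by_ai("approved", date(2024, 1, 5), today=date(2024, 6, 1)))  # False: stale
print(usable_by_ai("draft", date.today()))                                 # False: draft
```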
8. Spreadsheets and shadow systems
List recurring spreadsheets used to reconcile, plan, approve, or report. Do not shame them. Study them.
For each spreadsheet, ask:
- What official system gap does it fill?
- Which object or relationship does it model?
- Which decisions depend on it?
- Who maintains it?
- Should it become an official source, a workflow, a warehouse model, or be retired?
Spreadsheets are often the best map of ontology debt.
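The last question has exactly four answers, which makes the disposition easy to track per spreadsheet. A sketch:

```python
from enum import Enum

class Disposition(Enum):
    """The four futures for a reconciliation spreadsheet."""
    OFFICIAL_SOURCE = "promote to an official system of record"
    WORKFLOW = "rebuild as a governed workflow"
    WAREHOUSE_MODEL = "rebuild as a warehouse model"
    RETIRE = "retire once the gap it fills is closed"
```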
9. AI readiness
For each AI workflow in production or pilot, ask:
- What objects does it need?
- What definitions does it rely on?
- What sources can it trust?
- What actions can it take?
- What rights constrain it?
- What evals test ontology correctness?
- What logs show source use and failure cause?
Do not scale AI workflows that cannot pass this audit. A renewal agent, finance collections agent, support triage agent, product analytics agent, or workflow approval agent should prove it can identify the right objects, use current sources, respect action rights, and explain failure causes before it receives broader scope.
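Before an agent receives broader scope, the questions above can run as a pre-flight gate. A sketch; the check names mirror the list and the agent record is hypothetical:

```python
def preflight(agent: dict) -> list[str]:
    """Return the reasons an AI workflow is not ready for broader scope."""
    checks = {
        "objects":     bool(agent.get("objects")),
        "definitions": bool(agent.get("definitions")),
        "sources":     bool(agent.get("sources"))
                       and all(s.get("trusted") for s in agent["sources"]),
        "rights":      bool(agent.get("action_rights")),
        "evals":       bool(agent.get("ontology_evals")),
        "logs":        bool(agent.get("logs_sources_and_failures")),
    }
    return [name for name, ok in checks.items() if not ok]

renewal_agent = {"objects": ["contract", "entitlement"],
                 "definitions": ["renewal", "active customer"],
                 "sources": [{"name": "CRM", "trusted": True}],
                 "action_rights": ["draft renewal email"]}
print(preflight(renewal_agent))  # ['evals', 'logs'] -> do not scale yet
```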
10. Prioritized debt register
End with a short register. For each gap, record:
- issue;
- operating risk;
- affected systems;
- affected decisions;
- affected workflows;
- owner;
- next decision;
- target state.
Prioritize gaps that touch money, customer obligations, access rights, executive metrics, automation, or AI actions.
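The register fits in one record type plus a ranking rule. A sketch; the priority heuristic simply encodes the sentence above:

```python
from dataclasses import dataclass

HIGH_RISK = {"money", "customer obligations", "access rights",
             "executive metrics", "automation", "ai actions"}

@dataclass
class DebtItem:
    issue: str
    operating_risk: str
    touches: set[str]  # which high-risk areas the gap affects
    systems: list[str]
    decisions: list[str]
    workflows: list[str]
    owner: str
    next_decision: str
    target_state: str

def rank(register: list[DebtItem]) -> list[DebtItem]:
    """High-risk touches first: a ranked fix list, not a massive report."""
    return sorted(register, key=lambda item: -len(item.touches & HIGH_RISK))
```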
The audit output should not be a massive report. It should be a ranked list of operating model fixes.
The company’s hidden data model already exists.
The audit makes it visible enough to improve.
