A workflow ontology audit asks one question in several forms: does the system know what is actually happening to work?
Do not start with tooling. Start with a single workflow that matters: customer escalations, implementation handoffs, product launches, finance approvals, security reviews, support-to-engineering bugs, data requests, roadmap intake, contract reviews, incident response. Pick one where delay hurts.
Then inspect the work objects.
What enters the system? Name the object types. Request, task, incident, project, approval, exception, dependency, handoff. If everything is called a ticket, ask what differences the word is hiding.
What states can those objects occupy? List the actual states, not only the tool labels. New, triaged, ready, active, waiting, blocked, under review, approved, rejected, done, reopened. For each state, define what it means, who owns it, what action moves it forward, and when it becomes stale.
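Those four questions per state (meaning, owner, forward action, staleness) can be captured as an explicit table instead of tribal knowledge. A minimal Python sketch; the state names, roles, and thresholds are illustrative assumptions, not a prescription:

```python
from dataclasses import dataclass

# Each state records who owns it, what action moves work forward,
# and how many days until an item in this state counts as stale.
@dataclass(frozen=True)
class StateDef:
    owner_role: str
    forward_action: str
    stale_after_days: int

# Illustrative state table; names and thresholds are assumptions.
STATES = {
    "new":          StateDef("queue owner", "triage",            2),
    "triaged":      StateDef("queue owner", "schedule",          5),
    "active":       StateDef("assignee",    "make progress",     3),
    "blocked":      StateDef("unblocker",   "remove blocker",    3),
    "under_review": StateDef("approver",    "approve or reject", 2),
    "done":         StateDef("requester",   "accept result",     7),
}

# Allowed transitions. Any move the team actually makes that is not
# listed here is a modeling gap the audit should surface.
TRANSITIONS = {
    ("new", "triaged"), ("triaged", "active"),
    ("active", "blocked"), ("blocked", "active"),
    ("active", "under_review"),
    ("under_review", "active"),   # rejected, back to work
    ("under_review", "done"),
    ("done", "active"),           # reopened
}

def can_move(src: str, dst: str) -> bool:
    """True only if the transition is explicitly defined."""
    return (src, dst) in TRANSITIONS
```

Writing the table down is the audit step; the moment two people disagree about a row, you have found a real ambiguity.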
Where does status drift from state? Sample recent items. For each one, compare the visible label to operational reality. Active but untouched. Blocked but no unblocker. Done but not accepted. Assigned but ownerless. Pending approval but missing evidence. These examples are the audit's evidence, not anecdotes to explain away.
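Sampling for drift can be mechanical once the mismatch patterns are named. A hypothetical checker, assuming items expose a label, a last-touched date, and a few linked fields (field names are invented for the sketch):

```python
from datetime import date

def drift_reasons(item: dict, today: date) -> list[str]:
    """Return the ways an item's visible label disagrees with reality."""
    reasons = []
    idle_days = (today - item["last_touched"]).days
    if item["label"] == "active" and idle_days > 5:
        reasons.append("active but untouched")
    if item["label"] == "blocked" and not item.get("unblocker"):
        reasons.append("blocked but no unblocker")
    if item["label"] == "done" and not item.get("accepted_by"):
        reasons.append("done but not accepted")
    if item["label"] != "done" and not item.get("owner"):
        reasons.append("assigned but ownerless")
    return reasons
```

Run it over a recent sample and the drift rate becomes a number the audit can report, not a feeling.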
Who owns next-state movement? Every active work object should have one accountable owner. Identify requesters, contributors, approvers, decision owners, and queue owners separately. If the same field is carrying all of those meanings, the workflow is under-modeled.
How do queues behave? Measure what entered, what left, what aged, what breached, what was rejected, and what went stale. Separate untriaged demand from accepted work, waiting work, expedited work, and deferred work. A queue that cannot distinguish those categories is storing work without understanding it.
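The queue measurements can be computed directly from item records. A sketch assuming each item carries an entry date, an exit date (or None), and a category tag; the SLA threshold and category names are assumptions:

```python
from datetime import date

def queue_report(items: list[dict], today: date, sla_days: int = 5) -> dict:
    """Summarize a queue by the categories the audit asks for."""
    open_items = [i for i in items if i["left"] is None]
    return {
        "entered":   len(items),
        "left":      sum(1 for i in items if i["left"] is not None),
        "aged":      sum(1 for i in open_items
                         if (today - i["entered"]).days > sla_days),
        "untriaged": sum(1 for i in open_items if i["category"] == "untriaged"),
        "waiting":   sum(1 for i in open_items if i["category"] == "waiting"),
        "expedited": sum(1 for i in open_items if i["category"] == "expedited"),
    }
```

If the data cannot be shaped into these inputs, that is itself the finding: the queue is storing work without understanding it.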
How are blockers, dependencies, and exceptions recorded? Pull examples. Does the workflow say what prevents progress, who can remove it, what date matters, what rule was violated, and what escalation path exists? If the answer is mostly in Slack, the system is not the system.
How do approvals work? For each gate, name the decision, approver role, evidence required, allowed outcomes, service expectation, and bypass rule. Remove gates that have no threshold. Strengthen gates where risk is real but evidence is weak.
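A gate with a named decision, required evidence, approver role, and explicit bypass rule can be evaluated mechanically. A hypothetical sketch; the gate shape and outcomes are assumptions, not a real approval engine:

```python
def gate_decision(gate: dict, submission: dict) -> str:
    """Evaluate one approval gate: 'approve', 'reject', or 'bypass'."""
    # A bypass rule must be explicit, never implied by silence.
    if gate.get("bypass_if") and gate["bypass_if"](submission):
        return "bypass"
    # Risk is real but evidence is weak: reject until evidence exists.
    missing = [e for e in gate["evidence_required"]
               if e not in submission["evidence"]]
    if missing:
        return "reject"
    if submission["approver_role"] != gate["approver_role"]:
        return "reject"
    return "approve"
```

A gate where `evidence_required` is empty and `bypass_if` always fires has no threshold; that is the gate to remove.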
How do handoffs transfer ownership? A good handoff has context, current state, next action, done condition, receiver acceptance, and a reopen path. If transfer happens through mentions and hope, expect work to disappear.
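The six parts of a good handoff can be made explicit fields, with acceptance as the only thing that transfers ownership. A sketch under that assumption:

```python
from dataclasses import dataclass

@dataclass
class Handoff:
    context: str
    current_state: str
    next_action: str
    done_condition: str
    reopen_path: str
    receiver: str
    accepted: bool = False   # mentions-and-hope never sets this

    def transfer(self) -> str:
        """Return the accountable owner after the handoff attempt."""
        if not self.accepted:
            raise ValueError("receiver has not accepted; ownership did not transfer")
        return self.receiver
```

The design choice worth copying is that an unaccepted handoff is an error, not a silent reassignment, so work cannot disappear between teams.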
What does done mean? Check closed work. Was it accepted by the right person or system? Is the result recorded? Can it be reopened? Do downstream teams agree that it is done? "No one is currently complaining" is not a closure standard.
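The closure questions above can be asked of every closed item in the sample. A hypothetical check, with invented field names:

```python
def closure_gaps(item: dict) -> list[str]:
    """'No one is currently complaining' is not a closure standard; these are."""
    gaps = []
    if not item.get("accepted_by"):
        gaps.append("not accepted by the right person or system")
    if not item.get("result"):
        gaps.append("result not recorded")
    if "reopen_path" not in item:
        gaps.append("no reopen path")
    return gaps
```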
What would an AI agent misunderstand? This is the sharpest audit question. Give a hypothetical agent read access and ask what it could safely infer. Could it tell who owns the next action? Could it identify stale work? Could it route a request? Could it distinguish waiting from blocked? Could it know when to escalate? Wherever the answer is no, humans are probably compensating today.
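The agent questions reduce to: which answers exist as data versus in someone's head? A sketch of that check; field names are illustrative and should map to whatever your tool's schema actually exposes:

```python
def ai_readiness_gaps(item: dict) -> list[str]:
    """Questions a read-only agent could not answer from this item alone."""
    gaps = []
    if not item.get("owner"):
        gaps.append("cannot tell who owns the next action")
    if not item.get("stale_after_days"):
        gaps.append("cannot identify stale work")
    if not item.get("route_to"):
        gaps.append("cannot route the request")
    if item.get("label") in ("waiting", "pending") and "blocker_id" not in item:
        gaps.append("cannot distinguish waiting from blocked")
    if not item.get("escalation_path"):
        gaps.append("cannot know when to escalate")
    return gaps
```

Every gap this returns is a place where a human is compensating today, which makes it a direct input to the fix list below.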
The output of the audit should be concrete:
- A cleaned list of work objects.
- State definitions and transition rules.
- Ownership and approval rules.
- Queue standards.
- Blocker, dependency, and exception definitions.
- Handoff contracts.
- Done criteria.
- State drift examples.
- AI-readiness gaps.
- Three workflow fixes to make first.
Do not try to fix every workflow at once. Pick the workflow where ambiguity creates the most delay, risk, or customer pain. Tighten the ontology there. Prove that better definitions reduce meetings, escalations, rework, and stale work.
The goal is not a perfect model. The goal is a workflow the company can trust.
This is part 10 of 10 in Workflow Ontology.
