A value chain is where strategy stops being abstract.
The phrase can sound grand, but the practice is plain: list the capabilities required to satisfy a user need, then connect the dependencies. What has to exist for the user to get the value? What depends on what? What breaks the promise if it fails?
Let’s use a worked example: AI-assisted customer support for a B2B software company.
The user is not “the company.” Pick one. A support agent needs to resolve complex customer issues faster without giving wrong answers. The customer needs a correct answer with less waiting. The support manager needs throughput without quality collapse.
Start with the agent’s need: resolve issues accurately.
Under that need sit visible capabilities: issue diagnosis, answer generation, policy guidance, customer communication, escalation, and resolution logging. Under those sit less visible dependencies: customer identity, account tier, product usage, open incidents, known bugs, support history, contractual obligations, permission rules, knowledge base, release notes, engineering ownership, model access, retrieval, evaluation, QA sampling, and audit trails.
Already the conversation has changed. The “AI support bot” is not one thing. It is a chain.
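To make the chain concrete, here is a minimal sketch of it as a dependency graph. The capability names and edges are illustrative, drawn loosely from the lists above; a real map will have different nodes and more of them.

```python
# Illustrative value chain: each capability maps to what it depends on.
CHAIN = {
    "resolve issue accurately": ["issue diagnosis", "answer generation", "escalation"],
    "issue diagnosis": ["support history", "known bugs", "product usage"],
    "answer generation": ["knowledge base", "retrieval", "model access", "policy guidance"],
    "policy guidance": ["contractual obligations", "permission rules"],
    "escalation": ["engineering ownership"],
    "retrieval": ["knowledge base"],
}

def dependencies(need, chain=CHAIN, seen=None):
    """Return every capability the given need transitively depends on."""
    seen = set() if seen is None else seen
    for dep in chain.get(need, []):
        if dep not in seen:
            seen.add(dep)
            dependencies(dep, chain, seen)
    return seen

# Everything that has to work for the agent's need to be met:
print(sorted(dependencies("resolve issue accurately")))
```

Walking the graph is the point: the "bot" expands into a dozen-plus components, any one of which can break the promise to the user.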
Now place the components roughly by evolution.
Model access may be productized and moving toward commodity. There are many providers, APIs, and hosted options. That does not make it trivial, but it means model access alone is rarely the durable differentiator.
Your support history may be custom. The data exists, but it is messy, local, and full of tacit meaning. Your policy logic may be even more custom because it encodes promises made by sales, legal constraints, customer tiers, and judgment about when to bend.
Your retrieval pipeline may be product-like in tooling but custom in tuning. Evaluation may be emerging. Human escalation is probably a practice with uneven maturity. Audit logging may be commodity if you use mature platforms, or dangerously custom if every workflow invents its own trace.
This is the value of the map: it prevents category mistakes.
If the company treats model choice as the whole strategy, it will over-negotiate the least differentiating layer. If it treats messy support context as an afterthought, the agent will sound confident and wrong. If it pushes the entire system through a heavyweight enterprise software process, it will learn too slowly. If it ships without governance, it may create trust damage that takes months to repair.
The map shows multiple operating modes in one initiative.
Commodity layers need reliability, cost control, vendor leverage, and standards. Product-like layers need integration quality and fit-for-purpose evaluation. Custom layers need ownership, learning loops, and domain judgment. Genesis areas need small experiments and honest tolerance for failure.
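One way to keep the modes straight is a simple lookup from evolution stage to operating mode. The component placements below are the rough, contestable ones from the worked example, not a general taxonomy.

```python
# What each evolution stage demands operationally (from the text).
MODE_BY_STAGE = {
    "commodity": "reliability, cost control, vendor leverage, standards",
    "product": "integration quality, fit-for-purpose evaluation",
    "custom": "ownership, learning loops, domain judgment",
    "genesis": "small experiments, honest tolerance for failure",
}

# Rough placements from the worked example; yours will differ.
PLACEMENT = {
    "model access": "commodity",
    "retrieval pipeline": "product",
    "policy logic": "custom",
    "evaluation": "genesis",
}

for component, stage in PLACEMENT.items():
    print(f"{component}: manage for {MODE_BY_STAGE[stage]}")
```

The exercise forces a choice: every component gets exactly one stage, and the stage dictates how you run it.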
Now ask the operator questions.
Where should we build? Probably around proprietary context assembly, workflow fit, policy interpretation, and feedback loops if they affect customer experience and learning.
Where should we buy? Model access, generic infrastructure, logging primitives, identity, and parts of retrieval may not deserve custom ownership unless there is a specific constraint.
Where should we slow down? Any place where a wrong answer creates legal exposure, trust damage, or customer harm.
Where should we move fast? Internal summarization, draft responses with human review, routing assistance, knowledge gap detection, and QA sampling may be good early loops.
Where is the moat? Not “we use AI.” The moat, if any, is the company’s ability to connect context, workflow, policy, and learning in a way competitors cannot easily copy.
Anti-patterns become easier to name:
- Tool-first mapping: starting from the vendor feature list instead of the support need.
- Model mysticism: treating the model as the strategy while ignoring context and workflow.
- Data hand-waving: assuming the needed context exists because the company owns the systems.
- Governance theater: creating committees that do not map to actual risk points.
- Automation hunger: removing humans before the map shows which judgments are mature enough to automate.
A value chain makes these failures visible before they become expensive.
The practical move is to map one workflow end to end with people who actually touch it. Not just leaders. Bring the support agent, the systems owner, the policy person, the product manager, the data person, and whoever handles escalations when the official process fails. Ask them what the user needs and what breaks.
You will learn more from that hour than from another AI strategy deck.
Operator artifact: create a dependency table with five columns: capability, owner, upstream dependencies, failure mode, current maturity. If a capability has no named owner or no known failure mode, you have found an operating risk.
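The audit is mechanical once the table exists. Here is a sketch, with made-up rows; it assumes an owner field alongside the table's columns, since the risk check references a named owner.

```python
# Illustrative dependency table rows; real tables come from the workshop.
rows = [
    {"capability": "answer generation", "deps": ["retrieval", "model access"],
     "failure_mode": "confident wrong answer", "maturity": "product",
     "owner": "support-platform"},
    {"capability": "policy guidance", "deps": ["contractual obligations"],
     "failure_mode": None, "maturity": "custom", "owner": "legal-ops"},
    {"capability": "audit logging", "deps": [],
     "failure_mode": "silent trace gaps", "maturity": "commodity",
     "owner": None},
]

def operating_risks(table):
    """Flag any capability with no named owner or no known failure mode."""
    return [row["capability"] for row in table
            if not row["owner"] or not row["failure_mode"]]

print(operating_risks(rows))  # → ['policy guidance', 'audit logging']
```

A blank cell in either column is the finding; the table does not need to be complete to be useful.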
The map does not solve the problem. It shows what kind of problem you are solving.
This is part 4 of 10 in Wardley Mapping for Operators.
