The fastest way to build bad automation is to treat automation as one category.

It is not.

A script that syncs invoices is not the same thing as an AI agent that reviews contracts. A rule that routes support tickets is not the same thing as a model that summarizes a customer complaint. A human approval queue is not a failure of automation. It is often the part that makes automation safe enough to use.

Most teams collapse all of this into one question: "Can we automate this?"

That question is too vague to be useful.

The better question is: "Which parts of this workflow should be deterministic, which parts can be probabilistic, and where do we need human judgment?"

That is the architecture question.

The three kinds of work inside automation

Most useful automation contains three different types of work.

First, deterministic work. This is work where the same input should produce the same output every time. If a payment status is paid, update the invoice. If an employee is terminated, revoke access. If a form field is missing, reject the submission. This work should be handled with code, rules, schemas, APIs, validations, and tests.
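
Rules like these translate directly into plain code with validation and tests. A minimal sketch, with hypothetical field and state names:

```python
# Deterministic rules: same input, same output, every time.
# Field names and states here are illustrative, not a real schema.

REQUIRED_FIELDS = {"customer_id", "amount", "payment_status"}

def validate_submission(form: dict) -> list[str]:
    """Return the missing required fields; a non-empty list means reject."""
    return sorted(REQUIRED_FIELDS - form.keys())

def next_invoice_state(payment_status: str, current: str) -> str:
    """If a payment status is 'paid', an open invoice closes -- always."""
    if payment_status == "paid" and current == "open":
        return "closed"
    return current
```

Because this work is deterministic, every rule can be pinned down with an ordinary unit test, which is exactly the control model it deserves.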

Second, probabilistic work. This is work where the input is messy and the answer is often a judgment call. Classify this email. Extract the obligations from this contract. Draft a response. Summarize this call. Decide whether this customer complaint is billing, product, or legal. AI is useful here because language and ambiguity are part of the job.
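
Even when a model owns the judgment, the rest of the workflow should see a typed result, not raw model text. A sketch with a stand-in for the model call (`call_model` and the field names are assumptions, not any specific vendor API):

```python
from dataclasses import dataclass

@dataclass
class Classification:
    category: str      # e.g. "billing", "product", "legal"
    confidence: float  # model-reported confidence, 0.0 to 1.0

def classify_complaint(text: str, call_model) -> Classification:
    """Wrap the model call so downstream code gets a structured result.

    `call_model` is a placeholder for whatever LLM client you use; it is
    assumed to return a dict like {"category": ..., "confidence": ...}.
    """
    raw = call_model(text)
    return Classification(category=raw["category"],
                          confidence=float(raw["confidence"]))
```

The wrapper is deterministic even though the call inside it is not, which keeps the probabilistic surface area small and testable with a fake model.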

Third, accountable work. This is work where someone has to own the decision because the consequences matter. Approve a refund above a threshold. Send a sensitive customer email. Change payroll. Escalate a legal issue. Ban a user. A model can assist, but the system needs a human gate or a very strong control model.
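
A human gate can be as simple as a threshold check that routes the action into an approval queue instead of executing it. A minimal sketch, with a hypothetical refund limit:

```python
# Accountable work: the system may propose, but consequential actions
# wait for an owner. The threshold and route names are illustrative.

REFUND_AUTO_LIMIT = 50.00  # hypothetical policy threshold

def route_refund(amount: float) -> str:
    """Small refunds execute directly; larger ones need human approval."""
    if amount <= REFUND_AUTO_LIMIT:
        return "auto_approve"
    return "approval_queue"
```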

Good automation architecture starts by separating these types of work instead of pretending they are all the same.

The automation classification matrix

Use this matrix before building anything:

| Work type | Best owner | Use AI? | Typical controls |
|---|---|---|---|
| Exact rule | Code | Usually no | Tests, validation, monitoring |
| Data movement | Code/workflow engine | No, unless data is messy | Idempotency, dedupe, logs |
| Text classification | Model inside workflow | Yes | Confidence thresholds, sampled review |
| Information extraction | Model plus schema validation | Yes | Structured output, fallbacks, human review |
| Drafting | Model plus human or policy gate | Yes | Approval queue, tone/policy checks |
| Recommendation | Model plus accountable owner | Yes | Explanation, alternatives, audit trail |
| Irreversible external action | Human or tightly controlled code | Rarely direct | Approval, least privilege, rollback plan |

This matrix prevents a common mistake: using AI because it is available, not because the work actually needs judgment.
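
One way to keep the matrix from staying on a slide is to encode it as data that design reviews consult. A partial sketch (three of the seven rows, with invented keys):

```python
# The classification matrix as a lookup table: classify the work first,
# then read off who owns it and what controls it needs.
CONTROL_MATRIX = {
    "exact_rule": {
        "owner": "code", "use_ai": False,
        "controls": ["tests", "validation", "monitoring"]},
    "text_classification": {
        "owner": "model_in_workflow", "use_ai": True,
        "controls": ["confidence_thresholds", "sampled_review"]},
    "irreversible_action": {
        "owner": "human_or_controlled_code", "use_ai": False,
        "controls": ["approval", "least_privilege", "rollback_plan"]},
}

def controls_for(work_type: str) -> list[str]:
    """Return the controls a given type of work requires."""
    return CONTROL_MATRIX[work_type]["controls"]
```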

The first design move is classification

Before prompts, tools, or vendor selection, classify the work.

A support queue might contain three different jobs: exact routing for known product IDs, ambiguous reading of angry customer language, and accountable decisions about refunds or legal threats. Those jobs should not share the same control model.

This is the manifesto for the whole series: automation architecture starts when you stop asking one tool to own every kind of decision.

Example: support ticket routing

Bad design: "Use AI to handle support tickets."

Better design:

  • Code receives the ticket and stores a durable event.
  • A model classifies the ticket into billing, product, bug, legal, security, or unknown.
  • The model must return structured JSON with category, confidence, short rationale, and urgency.
  • If confidence is above 0.85 and category is not legal or security, route automatically.
  • If confidence is at or below the threshold, or the category is legal or security, send to review.
  • Every decision is logged with model version, prompt version, confidence, and final route.
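
The routing rules above fit in a few lines, which is part of the point: the deterministic gate around the model is ordinary, testable logic. A sketch, assuming the model has already returned the structured JSON described above:

```python
import json
import logging

HIGH_CONFIDENCE = 0.85
SENSITIVE = {"legal", "security"}

def route_ticket(classification: dict) -> str:
    """Auto-route only high-confidence, non-sensitive categories;
    everything else goes to human review.

    `classification` is the model's structured output, e.g.
    {"category": "billing", "confidence": 0.91,
     "rationale": "...", "urgency": "low"}.
    """
    category = classification["category"]
    confidence = classification["confidence"]
    if confidence > HIGH_CONFIDENCE and category not in SENSITIVE:
        decision = f"auto:{category}"
    else:
        decision = "review_queue"
    # Log every decision with enough context to audit it later.
    logging.info(json.dumps({
        "model_version": classification.get("model_version", "unknown"),
        "category": category,
        "confidence": confidence,
        "route": decision,
    }))
    return decision
```

Note that the model never routes anything directly: it emits a claim, and deterministic code decides what that claim is allowed to trigger.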

The useful question is not whether the ticket was "handled by AI." It is whether each part of the routing workflow had the right owner.

The operator's rule

Do not ask, "Can AI do this?"

Ask:

  • What must be deterministic?
  • Where does ambiguity actually exist?
  • What actions are reversible?
  • Where do we need a human gate?
  • What must happen before launch?
  • Who owns it after launch?

That is the difference between a vague automation idea and a system people can trust.