“Let’s solve it” is not always the right next sentence.
Sometimes the right move is a fast fix. Sometimes it is a deep dive. Sometimes it is research. Sometimes it is an experiment. These modes are different, and confusing them creates waste.
A fast fix treats a known, bounded issue.
A deep dive investigates a recurring or consequential problem.
Research reduces uncertainty before deciding.
An experiment tests a plausible answer in the real world.
Good operators choose the mode before mobilizing the organization.
Fast fix
Use a fast fix when the issue is clear, reversible, low blast radius, time-sensitive, and inside the owner’s authority.
Examples: rollback a broken deploy, correct a confusing email, patch a known bug, remove an unnecessary approval, update a misleading help article, unblock a customer with a standard exception.
The fast fix should have a tight owner and a short loop: action, verification, communication, reopen trigger. Do the thing. Verify the result. Communicate if needed. Move on.
Do not attach a root-cause ceremony to every fast fix. If the issue is isolated and low-cost, the cost of analysis may exceed the cost of the problem.
But add a tripwire: if it recurs, it changes category. “Fix now; if this happens again twice this month, open a deeper investigation.”
Deep dive
Use a deep dive when the problem is repeated, ambiguous, expensive, cross-functional, high-stakes, or resistant to previous fixes.
Examples: repeated incidents in the same subsystem, chronic missed launches, customer escalations from the same segment, declining forecast accuracy, repeated quality escapes, team conflict that process has not solved.
A deep dive should not mean endless analysis. It needs a question, scope, owner, deadline, evidence list, decision forum, and output decision. Without the decision forum, the deep dive becomes research inventory.
Bad deep dives produce impressive documents and no change. Good deep dives explain the constraint and force the next tradeoff.
The output should be something like: “The root issue is not QA coverage; it is that product requirements are changing after engineering estimates. Decision needed: freeze scope by milestone or accept launch-date volatility.”
Research
Use research when the problem is genuinely uncertain and action would mostly be guessing.
Examples: market sizing, customer willingness to pay, unfamiliar technical approach, competitor behavior, regulatory interpretation, user need, vendor risk.
Research should reduce uncertainty relevant to a decision. It should not become a way to postpone the decision.
A good research brief says:
- What decision will this inform?
- What do we already know?
- What assumptions matter most?
- What evidence would change the recommendation?
- How long will we spend?
- What confidence level is enough?
AI makes research faster, but also makes fake confidence cheaper. Source quality, assumptions, and gaps matter more than polish.
Experiment
Use an experiment when you have a plausible solution but need real-world evidence.
Examples: pricing test, onboarding change, AI support workflow, sales qualification rule, product prototype, internal process redesign, new escalation path.
An experiment is not “try stuff.” It is a bounded learning loop.
Define:
- hypothesis;
- cohort;
- duration;
- success metric;
- guardrail metric;
- owner;
- stop condition;
- decision after the test.
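The checklist above can be sketched as a data structure with a launch gate. This is an illustrative sketch, not a prescribed tool; the field names and the `is_launch_ready` helper are assumptions introduced here to show that an experiment with any blank field is “try stuff,” not a test.

```python
from dataclasses import dataclass, fields

@dataclass(frozen=True)
class ExperimentSpec:
    """A bounded learning loop: every field must be filled before launch."""
    hypothesis: str        # what we believe will happen, stated falsifiably
    cohort: str            # who the test runs on
    duration_days: int     # how long before we decide
    success_metric: str    # what improvement counts as a win
    guardrail_metric: str  # what must not degrade while we test
    owner: str             # single person accountable for the loop
    stop_condition: str    # early-exit trigger if the guardrail breaks
    decision_after: str    # what we will do with each possible outcome

def is_launch_ready(spec: ExperimentSpec) -> bool:
    """Reject any spec with an empty or zeroed field."""
    return all(getattr(spec, f.name) not in ("", None, 0) for f in fields(spec))
```

A spec that passes this gate still says nothing about whether the idea is good; it only guarantees the failure, if it comes, will be cheap and legible.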
Sanjit Biswas’s “cheap experiment” idea is useful here. Small failures are valuable if they teach and stay contained. The goal is not to avoid being wrong; it is to be wrong cheaply and learn quickly.
Mode confusion
The most common failure is using one mode while pretending to use another.
A fast fix is dressed up as a deep dive because a senior person is watching. The team spends two weeks on a problem that needed one owner and one rollback.
A deep dive is treated as a fast fix because the organization is impatient. The team patches a recurring issue again and calls it progress.
Research becomes a delay tactic. Everyone knows a decision is needed, but more information feels safer than naming the tradeoff.
An experiment becomes an unmeasured rollout. The team “tests” something with no hypothesis, no guardrail, and no decision point.
Mode clarity prevents these errors.
The mode selection questions
Ask:
- Do we understand the failure mode?
- Is the action reversible?
- What is the blast radius?
- Is time the binding constraint?
- Is the problem repeated?
- Are we missing evidence?
- Can we learn safely by trying?
- Who has authority to choose?
Then choose the lane:
Fast fix: known, reversible, contained, urgent.
Deep dive: repeated, ambiguous, costly, systemic.
Research: unknowns block a decision.
Experiment: plausible solution needs real-world proof.
Escalate: decision exceeds authority or blast radius.
Stop: cost exceeds value.
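The lane choice above is effectively a priority-ordered decision procedure, and can be sketched as one. This is a rough model under assumptions made here, not a formula from the text: the parameter names, the boolean framing, and the default of “deep dive” for ambiguous cases are all choices of this sketch.

```python
def choose_mode(known: bool, reversible: bool, contained: bool, urgent: bool,
                repeated: bool, evidence_missing: bool, safe_to_try: bool,
                within_authority: bool, cost_exceeds_value: bool) -> str:
    """Map the mode-selection questions to a lane.

    Order encodes priority: stop and escalate trump everything else,
    and a fast fix wins only when all four of its conditions hold.
    """
    if cost_exceeds_value:
        return "stop"
    if not within_authority:
        return "escalate"
    if known and reversible and contained and urgent:
        return "fast fix"
    if repeated:
        return "deep dive"
    if evidence_missing:
        return "research"
    if safe_to_try:
        return "experiment"
    return "deep dive"  # default: ambiguous problems earn investigation
```

The point of writing it this way is the ordering: authority and cost are checked before any work is chosen, which is exactly what mode confusion skips.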
The operator’s sentence
Every problem should be able to carry a mode sentence:
“We are treating this as a fast fix because the failure mode is known and rollback is safe.”
“We are treating this as a deep dive because it has recurred three times and the previous fixes did not hold.”
“We are treating this as research because the decision depends on customer willingness to pay, which we do not know.”
“We are treating this as an experiment because the idea is plausible but needs field evidence.”
That sentence aligns the room. It keeps speed from becoming recklessness and rigor from becoming theater.
Make the mode visible to stakeholders
Mode selection should be explicit because stakeholders often expect different things from the same problem. A customer may need immediate containment. An executive may want a durable answer. A team may need permission to investigate. A frontline owner may just need authority to act.
If you do not name the mode, people will judge the work by the wrong standard. A fast fix will look shallow to someone expecting a deep dive. A research pass will look slow to someone expecting execution. An experiment will look indecisive to someone expecting commitment.
Say the mode out loud, then say what it does and does not promise. “This fast fix restores the customer path; it does not answer whether the architecture needs to change.” That distinction prevents a lot of false alignment.
