Requirements before models: scoping an AI automation that ships
Most AI automation efforts stall in scoping, not modeling. A requirements-engineering approach to mapping operator workflows to discrete AI capabilities, with the human checkpoints that let the result actually ship.
The frame
Most AI automation efforts that fail do not fail at the model. They fail at the requirement. A team that has not decomposed the operator workflow before choosing a capability ends up building an impressive demo for a step that nobody actually does, or automating a step the operator was not asking to automate, or both. The boring middle layer between "we should use AI for this" and "the model is responding" is requirements engineering. It is not new work; it is old work, applied to a new substrate.
Start with the operator, not the model
The first artifact is not a system diagram. It is an operator interview. What is the work, who does it today, how often, with what inputs, against what success criteria, and what happens when it goes wrong. The interview is structured but not scripted. The output is a written description of the workflow, in the operator's language, that the operator will read and correct. If the operator does not recognize the description, the rest of the engagement is built on sand.
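One way to keep that written description honest is to give it a fixed shape. The sketch below is a minimal, hypothetical schema for the interview output; the field names are illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class OperatorInterview:
    """Written record of an operator interview. Every answer is captured
    in the operator's own language and read back for correction."""
    workflow_name: str
    operator_role: str           # who does the work today
    frequency: str               # e.g. "daily", "per incoming claim"
    inputs: list[str]            # documents, systems, verbal handoffs
    success_criteria: list[str]  # how the operator knows it went well
    failure_modes: list[str]     # what happens when it goes wrong
    operator_confirmed: bool = False  # set only after the read-back
```

The `operator_confirmed` flag is the point: nothing downstream starts until the operator has read the description and corrected it.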
Decompose the workflow before naming a capability
A workflow is not one task. It is a sequence: gather inputs, validate them, resolve ambiguities, produce an output, route the output, capture a record. Each step has a different shape. Some steps are well-defined transformations. Some require judgment. Some require fetching external data. Some require explicit approval. The decomposition has to happen before any capability is named. A team that names "an LLM" or "an agent" before the decomposition is choosing a tool to swing at a problem they have not described.
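A minimal sketch of that decomposition as data, written before any capability is named. The step names and shapes mirror the skeleton above; the specific values are assumptions a real engagement would fill in per workflow.

```python
from dataclasses import dataclass
from enum import Enum

class StepShape(Enum):
    """The shape of a step, named before any capability or vendor."""
    TRANSFORMATION = "well-defined transformation"
    JUDGMENT = "requires judgment"
    EXTERNAL_FETCH = "requires fetching external data"
    APPROVAL = "requires explicit approval"

@dataclass
class WorkflowStep:
    name: str
    shape: StepShape
    inputs: list[str]
    output: str

# The six-step skeleton from the text; shapes are illustrative.
workflow = [
    WorkflowStep("gather inputs", StepShape.EXTERNAL_FETCH, ["source systems"], "raw bundle"),
    WorkflowStep("validate inputs", StepShape.TRANSFORMATION, ["raw bundle"], "validated bundle"),
    WorkflowStep("resolve ambiguities", StepShape.JUDGMENT, ["validated bundle"], "resolved case"),
    WorkflowStep("produce output", StepShape.TRANSFORMATION, ["resolved case"], "draft output"),
    WorkflowStep("route output", StepShape.APPROVAL, ["draft output"], "routed output"),
    WorkflowStep("capture record", StepShape.TRANSFORMATION, ["routed output"], "audit record"),
]
```

Notice there is no model anywhere in this structure. That is deliberate: the decomposition has to stand on its own before a capability is attached to any step.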
Match capabilities, not vendors
Each step in the decomposed workflow maps to a capability, not a vendor. Retrieval. Classification. Extraction. Summarization. Reasoning over a small context. Reasoning over a large context. Tool use. Each capability has known failure modes, known cost characteristics, and known accuracy bounds. The mapping step asks: which capability does this step need, what is the acceptable error rate, and what does failure look like in the downstream system. Vendor selection is a downstream decision, not the work itself.
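The mapping itself can be written down as a requirements artifact. The capability names below come from the text; the error budgets and failure descriptions are hypothetical placeholders a real engagement would negotiate with the operator.

```python
from dataclasses import dataclass
from enum import Enum

class Capability(Enum):
    RETRIEVAL = "retrieval"
    CLASSIFICATION = "classification"
    EXTRACTION = "extraction"
    SUMMARIZATION = "summarization"
    SMALL_CONTEXT_REASONING = "reasoning over a small context"
    LARGE_CONTEXT_REASONING = "reasoning over a large context"
    TOOL_USE = "tool use"

@dataclass
class CapabilityRequirement:
    step: str
    capability: Capability
    max_error_rate: float    # acceptable error rate for this step
    failure_downstream: str  # what a miss looks like in the next system

# Illustrative rows only; the numbers are assumptions, not recommendations.
requirements = [
    CapabilityRequirement("validate inputs", Capability.CLASSIFICATION, 0.02,
                          "a bad bundle reaches the resolution step"),
    CapabilityRequirement("resolve ambiguities", Capability.SMALL_CONTEXT_REASONING, 0.05,
                          "a wrong resolution is drafted into the output"),
    CapabilityRequirement("produce output", Capability.SUMMARIZATION, 0.05,
                          "the operator edits instead of approves"),
]
```

A vendor or model name appears nowhere in this artifact, which is what keeps vendor selection downstream of the requirements.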
Confidence thresholds and human checkpoints
A capability that is correct ninety-five percent of the time is not the same as a workflow that is correct ninety-five percent of the time. Errors compound across steps: five chained steps at ninety-five percent each leave roughly seventy-seven percent correctness end to end. The requirements step has to specify, for each capability call, what confidence threshold is required, what evidence the system has to surface, and where the human checkpoint sits. Without those decisions, the system either over-automates and produces silent failures, or under-automates and produces a copilot the operator never opens. Both outcomes are common.
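The arithmetic and the checkpoint can both be made concrete. Below is a minimal sketch under stated assumptions: the threshold values, the `gate` function, and the evidence format are all hypothetical, not a prescribed design.

```python
# Per-step accuracy does not survive chaining: five steps at 95%
# each leave roughly 77% end-to-end correctness.
per_step_accuracy = 0.95
workflow_accuracy = per_step_accuracy ** 5  # ~0.774

def gate(result, confidence: float, threshold: float, evidence: list[str]):
    """Hypothetical checkpoint: below the required threshold, the call
    does not proceed silently; it is routed to a human along with the
    evidence the requirements said the system must surface."""
    if confidence >= threshold:
        return ("auto", result)
    return ("human_review", {"result": result, "evidence": evidence})

# A step specified at a 0.90 threshold that comes back at 0.82
# becomes a human checkpoint, not a silent failure.
decision, payload = gate("draft output", confidence=0.82,
                         threshold=0.90, evidence=["source doc, p. 3"])
```

The point of writing the gate into the requirements, rather than the code review, is that the threshold and the evidence obligation are operator decisions, not engineering defaults.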
What this is not
This is not a model selection guide. The model is downstream. This is not a prompt engineering tutorial. The prompt is downstream. This is the step before either of those, and it is the step most projects skip. Skipping it is what produces the gap between an impressive AI demo and an automation that an operator actually uses on a Wednesday morning when the rest of the work is already late.