What it is: The skill of redesigning workflows around AI’s strengths instead of inserting AI into processes built for humans.
Why it matters: McKinsey tested 25 factors that predict AI ROI. Workflow redesign was the single biggest one. Companies that redesign get 2.8x the performance of companies that bolt on, and almost nobody does it.
How to build it: Four questions you can run on any process in 60 seconds, before you write a single line of automation. Details below, with a real case from a recruitment agency I worked with.
A recruiter I worked with at an HR agency was running candidate screenings the way most recruiters do. She’d jump on a Zoom with a candidate, run a 20-minute conversation, and try to take notes at the same time. After the call, she’d stare at a half-coherent set of bullet points and try to reconstruct what the person had said. Then she’d open the standard CV template the agency used for hiring managers and copy the relevant pieces in by hand: work history, achievements, technical skills, the team the candidate had worked with. Format it. Send it on.
On a good day, she’d get four or five candidates through that pipeline.
When AI started getting useful, the agency did the obvious thing: they bought a meeting transcription tool. Now the recruiter could focus on the conversation instead of typing. After the call, she had a clean transcript instead of fragmented notes. Felt like a win.
The output stayed roughly the same. Maybe five candidates a day instead of four. The transcription saved her some attention during the call, but the work after the call hadn’t changed. She still had to read through a wall of text, find the relevant pieces, and copy them into the template by hand.
This is what bolt-on AI looks like. It feels like progress. The numbers say otherwise.
## The pattern that’s killing most AI projects
The bolt-on pattern is everywhere. A company has a workflow that exists for historical reasons: someone designed it for humans, ten years ago, with the tools and constraints of that time. The company decides to “use AI” and the obvious move is to find the most painful step in that workflow and stick AI into it. Same process, same handoffs, same outputs, just with AI doing what a human used to do at one point in the chain.
I see it constantly. A client onboarding process where the team used to fill in a 20-field intake form, now an AI fills it in from a discovery call transcript. A proposal workflow where someone used to write the first draft, now an AI writes it from a brief. A support queue where tickets used to be routed by a human reading them, now they’re routed by an AI doing the same job.
In every case, the company is genuinely trying. They paid for the tool. They trained the team. They measured the result. And the result is not nothing, exactly, but disappointing. A few percent faster. Occasionally less reliable. The team’s enthusiasm fades. Leadership starts to wonder whether AI was overhyped.
It wasn’t overhyped. The process was wrong.
McKinsey ran a study across nearly 2,000 organisations testing 25 different factors for their effect on AI’s contribution to profitability. Workflow redesign came out on top. Not the model. Not the data quality. Not the budget. The single biggest predictor of whether AI actually moved the needle was whether the company had restructured its workflows around what AI could do, and only about one in five companies had bothered. Deloitte called the bolt-on approach “weaponised inefficiency.” That’s a strong phrase, and I think it’s accurate.
## What redesign actually looks like
Back to the recruitment agency.
After we worked through the process together, the redesigned version looked nothing like the bolted-on version. AI joined the screening call automatically. As the call progressed, it transcribed in real time. When the call ended, a second AI step pulled structured information out of the transcript. Not the whole conversation, just the specific things that matter for a CV: companies the candidate had worked at, roles, achievements, the team they’d been part of, their responsibilities.
That structured set of fields then went straight into the agency’s CV template: populated, formatted, ready to send. The recruiter looked at the output, confirmed it matched what she remembered from the call, made small edits where the AI had missed something, and clicked approve. Done.
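To make the shape of that pipeline concrete, here is a minimal sketch of the extract-and-fill step in Python. The CandidateProfile fields, the prompt wording, and the injected call_llm function are illustrative assumptions, not the agency’s actual implementation; any transcription tool and any LLM API can sit behind them.

```python
# Sketch: call transcript -> structured candidate fields -> populated CV draft.
# Field names, prompt, and the call_llm hook are hypothetical examples.
import json
from dataclasses import dataclass, fields
from string import Template
from typing import Callable

@dataclass
class CandidateProfile:
    name: str
    companies: str          # e.g. "Acme Ltd (2019-2024); Beta GmbH (2016-2019)"
    roles: str
    achievements: str
    team: str
    responsibilities: str

EXTRACTION_PROMPT = """From the screening-call transcript below, return ONLY a JSON
object with the keys: {keys}. Use the candidate's own words where possible, and
leave a key as an empty string if the transcript does not cover it.

Transcript:
{transcript}"""

CV_TEMPLATE = Template(
    "CANDIDATE: $name\n"
    "WORK HISTORY: $companies\n"
    "ROLES: $roles\n"
    "KEY ACHIEVEMENTS: $achievements\n"
    "TEAM: $team\n"
    "RESPONSIBILITIES: $responsibilities\n"
)

def build_cv_draft(transcript: str, call_llm: Callable[[str], str]) -> str:
    """Turn a raw call transcript into a CV draft ready for recruiter review."""
    keys = ", ".join(f.name for f in fields(CandidateProfile))
    prompt = EXTRACTION_PROMPT.format(keys=keys, transcript=transcript)
    raw = call_llm(prompt)                    # plug in whichever LLM API you use
    profile = CandidateProfile(**json.loads(raw))
    return CV_TEMPLATE.substitute(vars(profile))
```

The structure is the point, not the particular code: everything up to the final review is mechanical, so the only step that still needs the recruiter is the judgment call at the end.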
Same recruiter. Same calls. Same template. Different process.
The output went from four or five CVs per day to eight to ten. Roughly double, sometimes more. And the recruiter’s job was actually better: she was using her judgment on whether the candidate was real and a good fit, not reconstructing notes from memory and copy-pasting them into a Word document.
Here’s what made it work. The redesign didn’t ask, “which step in our current process can AI do?” It asked, “what are we actually trying to produce, and how would we build that if we started from scratch knowing AI exists?” The answer wasn’t “the same process with AI in it.” The answer was a different process where the human only did the parts that genuinely required human judgment, and AI did everything else.
This is the distinction audiences keep coming back to whenever I talk about this. The bolt-on instinct treats AI as a faster employee. The redesign instinct treats AI as a different category of capability that lets you ask different questions about the work. Ethan Mollick at Wharton has a useful framing for this. He distinguishes between “Centaur” workflows, where humans and AI divide tasks based on what each does best, and “Cyborg” workflows, where the two are tightly integrated. Both require deliberate design. Neither happens by accident.
## Why almost nobody does this
If the data is so clear, why do roughly 80% of companies still bolt on instead of redesigning?
Sunk cost is the first reason. You spent two years getting your current workflow right. Your team knows it. Your tools are configured for it. Your customers expect it. Throwing it out and starting over feels reckless, even when the math says the new version would pay for itself in a quarter.
The second reason is what Harvard’s research group calls “process debt”: the accumulation of fragmented workflows built up over years that nobody fully understands anymore. When you can’t see the process clearly, you can’t imagine an alternative to it. You can only patch what’s in front of you. Almost no 10-50 person company has its workflows properly documented. About 80% of knowledge work runs on institutional memory and individual habits. That’s not a problem until you try to redesign anything.
The third reason is that it’s easier to buy a tool than to change how people work. Process redesign isn’t a procurement decision, it’s an operational and cultural one. It means asking your team to question the way they’ve been doing things, identify steps that exist for no good reason, and accept that some of their work will be done by something else. That’s harder than approving a new SaaS subscription, even when the subscription will produce next to nothing.
And there’s a fourth reason that’s about to become urgent. AI is moving from copilots to autonomous agents: systems that take a goal and execute multi-step workflows on their own. You cannot bolt an autonomous agent onto a process designed for sequential human steps. The agent doesn’t know what to do with the human bits. Companies that don’t redesign now will face a much more painful and expensive redesign in 18 months, when their competitors are running on agentic workflows and they’re still trying to plug AI into processes from 2018.
## Four questions before you build anything
Here’s the method I use with clients. It takes 60 seconds, requires no consultants, and prevents most of the common mistakes.
One. What outcome does this process actually achieve? Not “what steps does it have.” What result does it produce, in concrete terms? For the recruitment agency, the outcome wasn’t “screening calls.” It was “a hiring manager confidently saying yes or no to a candidate based on a structured CV.” Naming the outcome strips away all the steps that exist for historical reasons rather than because they produce the outcome.
Two. What would this look like if no human touched it? This is a thought experiment, not a directive. You’re not going to remove all humans from your process. But imagining the fully automated version reveals which steps exist only because a human used to be the only option. The template-filling step at the agency existed only because the recruiter had no other way to turn messy notes into structured information. Once a different option exists, the step doesn’t have to.
Three. Where does human judgment actually add value? Not “where does a human currently work,” but where does human judgment make the result genuinely better? At the agency, judgment mattered for one thing: deciding whether the candidate was real, adequate, and worth the hiring manager’s time. Not transcription. Not extraction. Not formatting. Just judgment.
Four. Design the new process around those answers. Human judgment where it matters. Everything else handled by something built for that kind of work.
The questions don’t produce a definitive new design. Different businesses will answer them differently, and the same business will answer them differently a year from now as tools improve and your team’s experience grows. The point isn’t the answer. The point is that you asked.
## The conscious choice
Here’s the nuance I want to land on, because I think it’s the part that gets lost when people talk about AI strategy.
Bolt-on isn’t always wrong. Sometimes you genuinely can’t redesign a process: regulated workflows where the steps are mandated by law, manufacturing lines designed around physical equipment that can’t be reconfigured cheaply, deep integrations with legacy systems where touching one piece breaks five others. In all of those cases, patching AI onto an existing step is a legitimate engineering decision. You take the marginal gain and you move on.
What’s not legitimate is patching by default because nobody asked the redesign question. Treating every AI implementation as “where do I plug this in?” instead of “what would this look like if I built it from scratch knowing AI exists?” The four-question method takes 60 seconds. The cost of skipping it is McKinsey’s 2.8x performance gap and the fact that 80% of AI projects fail to deliver meaningful returns.
There’s a useful distinction emerging in the AI tools space: AI-enhanced versus AI-native. Enhanced is when you take an existing product or process and add AI to it. The process gets a bit better. Native is when the process is designed around AI from the start, and the improvements are structural, not marginal. Most of what gets sold as “AI-powered” today is AI-enhanced. The companies pulling away from the pack, the 6% McKinsey calls high performers, are the ones building AI-native.
Both choices are valid. Sometimes enhanced is the right call. The question is whether you chose, or whether you defaulted.
This is part of the AILS Framework — 9 AI Leadership Skills for founders and leaders of growing companies. The full framework with research and exercises is publishing weekly at andrewbush.org/ails.
If this resonated, Skill 2: Deterministic vs. Non-Deterministic Thinking covers the decision model for choosing when AI should be used at all. For the broader operating model, The Startup Tech Team Playbook connects these choices to team design and execution.
The recruitment agency case is shared with the founder’s permission. Company details have been omitted.