Here's the statistic that should shape every AI deployment: 70 to 80 percent of digital transformation projects fail to close the gap between technical capability and organisational adoption. The technology works. People don't use it, or they use it wrong, or they use it in ways that undermine the intended benefit. McKinsey and BCG studies from 2020 to 2024 consistently show this pattern. The gap is not new. It's not inevitable. But it's the default outcome when adoption is treated as an afterthought.
Most AI adoption programmes spend 90 percent of their effort on the technology and 10 percent on the people. Then they're surprised when the people don't come along. The training happens too late. The communication is vague. The team finds out their workflow is changing when the new system appears. Resistance builds, and the response is to push harder: more training, more enthusiasm, louder messaging. The resistance wasn't the problem. The approach was.
The AI Adoption Stream in the AGENTIC Framework runs alongside the AGENT Pipeline from the first conversation. It starts at Kickoff, when you begin talking to people about their workflows, and it doesn't stop until the new way of working is the normal way of working. It treats resistance as data, enthusiasm as risk, and trust as something you build with evidence, not assertions.
Resistance is data, not friction
When people resist a new system, the instinct is to push harder. Better training. More comms. Louder enthusiasm. But resistance usually contains information. The person who resists most is often the person who cares most about getting the work right.
Resistance signals five things. Communication gaps: people don't understand what's changing or why. Trust gaps: they've been burned before or don't believe the system will actually work. Design gaps: the system doesn't match how work actually happens. Authority gaps: changes were made for people, not with them. Capability gaps: people don't have the skills or confidence to work alongside the agent.
Listen to resistance. It points to where the work is incomplete.
Each signal points to a different response. Communication gaps need clearer messaging. Trust gaps need evidence, which is exactly what the parallel-run at Engineer produces. Design gaps need specification revision, which feeds back through Assess. Authority gaps need involvement earlier in the process. Capability gaps need training and time. Treating all resistance as the same problem, and pushing harder as the universal solution, misses the signal entirely.
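The signal-to-response mapping above can be sketched as a simple lookup. This is an illustrative sketch, not part of the framework's tooling; the names and the deliberate choice to fail loudly on unclassified signals are assumptions.

```python
# Hypothetical sketch: each resistance signal maps to a targeted response,
# rather than "push harder" as a universal answer. All names are illustrative.
RESISTANCE_RESPONSES = {
    "communication_gap": "Clearer messaging about what is changing and why",
    "trust_gap": "Evidence from the parallel-run at Engineer",
    "design_gap": "Specification revision, fed back through Assess",
    "authority_gap": "Earlier involvement of the affected team",
    "capability_gap": "Training and time to build confidence",
}

def respond_to_resistance(signal: str) -> str:
    """Return the targeted response for a classified resistance signal.

    Raises KeyError for an unknown signal: unclassified resistance
    needs investigation, not a default response.
    """
    return RESISTANCE_RESPONSES[signal]
```

The point of the raised error is the design choice from the text: treating all resistance as one problem misses the signal entirely, so there is no fallback branch.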
Enthusiasm is risk
The adoption conversation typically focuses on getting people to use the technology. But people who love the technology create a different problem. They use it for everything, including things it wasn't built for. They route around purpose-built tools because a general-purpose one feels faster. They bypass governance checkpoints because they're confident in the result. They produce external-facing outputs without the right checks.
At one organisation, a team member started routing work to an AI tool directly instead of feeding it through the designed system. The reason: they wanted faster feedback. The workaround made sense from their perspective but bypassed the quality controls the system was built to enforce. The fix wasn't to scold them. It was to improve the system's feedback visibility. The enthusiasm was a signal: the system wasn't meeting people's needs, and someone who cared about efficiency found a shortcut.
Uncontrolled adoption is a different category of risk from resistance, and it requires different responses. Design systems that can't break even when people don't follow the process. Put human checkpoints on outputs that matter. Build the guardrail into the tool, not around the person.
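"Build the guardrail into the tool" can be made concrete with a minimal sketch: external-facing outputs cannot leave the system without a human checkpoint, however the work was routed in. The class and method names are hypothetical, not the framework's API.

```python
# Illustrative guardrail built into the tool rather than around the person:
# the process can be bypassed, but the release gate cannot.
class OutputGate:
    def __init__(self) -> None:
        self.approved: set = set()  # output IDs with recorded human sign-off

    def approve(self, output_id: str) -> None:
        """A human reviewer records sign-off for one output."""
        self.approved.add(output_id)

    def release(self, output_id: str, external_facing: bool) -> bool:
        # Internal drafts flow freely; anything external-facing is held
        # until a human checkpoint has happened, regardless of how
        # confident the person (or the agent) was in the result.
        if external_facing and output_id not in self.approved:
            return False
        return True
```

The design choice is that the check lives at the point of release, so an enthusiastic user who routes around the intended workflow still cannot ship an unreviewed external output.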
How the AI Adoption Stream connects to the AGENT Pipeline
The AI Adoption Stream touches every stage, not as a checklist, but as continuous intelligence about how people are experiencing the change.
At Kickoff, it starts with the relationship. The team's willingness to engage, their enthusiasm or scepticism, their readiness for change. All of this surfaces in the conversation. A team that's excited gets a different deployment strategy than a team that's cautious.
At Greenlight, organisational readiness is one of the scoring dimensions on the AGENT Prioritisation Matrix. A team that just absorbed a major change scores lower on readiness than a team that's been stable. The AI Adoption Stream feeds this data into scoring so that sequencing respects what people can actually absorb.
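One way to picture readiness as a scoring dimension is a weighted sum that drops after each deployment. The weights, the 1-to-5 scale, and the recovery mechanics are assumptions for illustration; the AGENT Prioritisation Matrix's actual dimensions and weighting are not specified here.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    value: int        # 1-5: business value of automating the workflow
    feasibility: int  # 1-5: technical feasibility
    readiness: int    # 1-5: organisational readiness to absorb change

def priority_score(c: Candidate) -> float:
    # Readiness is a first-class dimension, not a tiebreaker: a team that
    # just absorbed a major change scores lower and the work sequences later.
    return 0.4 * c.value + 0.3 * c.feasibility + 0.3 * c.readiness

def absorb_change(c: Candidate) -> None:
    # After a deployment lands on this team, readiness drops; it recovers
    # as the team stabilises, so the assessment is never one-time.
    c.readiness = max(1, c.readiness - 2)
```

Re-scoring after each deployment is what makes sequencing respect what people can actually absorb: the same workflow can be the right next project one quarter and the wrong one the next.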
A team that just absorbed one workflow change may not be ready for another. Organisational readiness isn't a one-time assessment. It shifts with every deployment.
At Engineer, the parallel-run is the trust-building mechanism. The agent shadows the human. Outputs are compared side by side. Where they match, confidence grows. Where they diverge, the team investigates together. For low-risk steps, confidence might build quickly. For critical-risk steps, the bar is higher. The parallel-run continues until the Workflow Owner is confident at the appropriate severity level, not for a fixed duration, but driven by evidence. This is what transforms the conversation from "do we trust this?" to "look at what it does."
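The evidence-driven exit condition can be sketched as a match-rate check whose bar rises with risk. The thresholds, sample minimum, and exact-match comparison are assumptions; real output comparison would need a domain-specific notion of "matches".

```python
# Minimal sketch of an evidence-driven parallel-run exit: the run ends on
# evidence, not on a fixed duration. Thresholds are illustrative.
CONFIDENCE_THRESHOLDS = {"low": 0.90, "medium": 0.95, "critical": 0.99}

def parallel_run_complete(human_outputs, agent_outputs, risk_level: str,
                          min_samples: int = 20) -> bool:
    """True when side-by-side evidence meets the bar for this risk level.

    Critical-risk steps need a higher agreement rate than low-risk ones,
    so confidence builds quickly on low-risk work and slowly where it matters.
    """
    if len(human_outputs) < min_samples:
        return False  # not enough evidence yet; keep shadowing
    matches = sum(h == a for h, a in zip(human_outputs, agent_outputs))
    return matches / len(human_outputs) >= CONFIDENCE_THRESHOLDS[risk_level]
```

Note that the same body of evidence can complete the run for a low-risk step while leaving a critical-risk step still shadowing, which is exactly the severity-level behaviour described above.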
At Nurture, adoption data feeds back continuously. Tool-substitution detection surfaces when people are producing outputs outside the purpose-built system. If a significant portion of routine approval workflows are being done in general-purpose tools instead of the dedicated system, that's not a failure. It's an adoption signal. The system isn't meeting people's needs, or trust hasn't been built, or the workaround is easier. The AI Adoption Stream investigates.
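Tool-substitution detection might look like the following sketch: flag any workflow where the share of outputs produced outside the purpose-built system crosses a threshold. The 20 percent threshold, the log shape, and the `"dedicated_system"` label are all assumptions.

```python
# Hedged sketch of tool-substitution detection at Nurture. Flagged
# workflows are adoption signals to investigate, not failures to punish.
def detect_tool_substitution(output_log, threshold: float = 0.20):
    """output_log: iterable of (workflow, tool) pairs,
    e.g. ("approval", "general_chat").

    Returns {workflow: outside-system share} for workflows where the
    share of outputs produced outside the dedicated system exceeds
    the threshold.
    """
    counts: dict = {}
    for workflow, tool in output_log:
        total, outside = counts.get(workflow, (0, 0))
        counts[workflow] = (total + 1, outside + (tool != "dedicated_system"))
    return {wf: outside / total
            for wf, (total, outside) in counts.items()
            if outside / total > threshold}
```

The output is deliberately a share rather than a yes/no flag, so the AI Adoption Stream can distinguish an occasional workaround from a workflow that has quietly migrated wholesale to a general-purpose tool.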
Here's the full adoption cycle: listen to the team at Kickoff, score readiness at Greenlight, build trust through the parallel-run at Engineer, and capture adoption signals at Nurture, feeding each signal back into the next deployment.
What I've learned about the last mile
Two things stand out.
First: the parallel-run solves more adoption problems than any training programme. When people see data, side by side, showing the agent matched their output on 11 of 12 steps, and the one discrepancy was a configuration issue that got fixed, the conversation changes. You stop arguing about whether the technology works and start discussing how to use it well. The parallel-run at Engineer is positioned as a technical validation. It is. But its biggest impact is on trust.
Second: the people closest to the work are the best source of intelligence about what's actually happening. The team member who catches a recurring data error 23 times in a month isn't just correcting the agent. They're teaching the system a rule they apply instinctively. The team member who routes work directly to the agent instead of through the designed system isn't misbehaving. They're showing you where the system is too slow. Every override, every workaround, every correction is a signal. The AI Adoption Stream exists to capture those signals and turn them into improvements.
The gap between technical capability and adoption, the one that sinks 70 to 80 percent of transformation projects, is not inevitable. It closes when you treat change management as a first-class concern, not a bolt-on after the build.
The AI Adoption Stream is the part of the AGENTIC Framework that ensures the technology actually lands. The AGENT Pipeline builds the right things. The AI Governance Stream ensures they're built safely. The AI Adoption Stream ensures people actually use them, the way they were designed to be used, with the support and trust to make it stick.