A framework for AI adoption

The operating system that tells organisations what agents to build and when

A governed, structured, self-improving system for figuring out where AI agents belong across your operations — and making that real. Open, evolving, and freely available.

Developed by Madeleine Pierce

AI adoption without a system is just a pile of experiments

Every organisation hits the same wall. Not "should we use AI?" — but "how do we actually do this properly?"

Disconnected pilots

Scattered experiments that don't connect, don't compound, and don't survive the person who built them.

Governance as afterthought

Boundaries bolted on after something goes wrong, instead of being built in from day one.

Silent degradation

Deployed workflows that quietly degrade because nobody is watching, measuring, or feeding corrections back.

Capability blindness

Workflows that weren't viable six months ago are ready now — and nobody knows. The frontier moves faster than anyone can respond.

Four parts. One operating model.

The pipeline does the deep work. The streams run alongside it — not after. At the centre, a living dashboard managed by agents.

Framework architecture
Kickoff
One-day exploratory per function — fast scans, broad strokes
AGENT Prioritisation Matrix
Living dashboard — continuously rescored — managed by agents
A — Assess
G — Greenlight
E — Engineer
N — Nurture
T — Track
Map — Score — Build — Monitor — Resurface
AI Governance
Boundaries & accountability
AI Adoption
Change management & trust
Both streams run alongside the pipeline from day one

Five stages. Each one earns the next.

Go deep only where the dashboard says it's earned. Every workflow that passes through makes the next one faster.

A — Assess

Assess the workflow as it actually runs — how it's done, not how it's documented. Fix it before you formalise it. The output is a machine-readable specification with defined success criteria, governance requirements, and the messy human context that makes it real.
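Stored machine-readably, such a specification might look like the sketch below. All field names and the example workflow are illustrative assumptions, not part of the framework itself:

```python
from dataclasses import dataclass

@dataclass
class WorkflowSpec:
    """Illustrative machine-readable workflow specification (field names assumed)."""
    name: str
    as_run_steps: list[str]              # the workflow as it actually runs, not as documented
    success_criteria: dict[str, float]   # metric name -> target value
    governance: list[str]                # boundaries that must hold before automation
    human_context: str                   # the messy context that makes it real

spec = WorkflowSpec(
    name="invoice-triage",
    as_run_steps=["receive PDF", "check PO number", "route exceptions to finance"],
    success_criteria={"routing_accuracy": 0.98, "median_minutes": 5.0},
    governance=["no payment approval by the agent", "PII stays in-region"],
    human_context="Finance quietly re-checks anything over 10k before sign-off.",
)
```

Because the spec is structured rather than prose, later stages (scoring, monitoring, resurfacing) can read it programmatically.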

G — Greenlight

Greenlight what to build next. The prioritisation matrix scores each workflow in a structured, machine-readable way — designed so agents can access, compare, and resurface candidates automatically. Then the collaboration model is set: which steps are AI-run, which stay human-led. Agents recommend. Humans greenlight.
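A minimal sketch of structured, agent-comparable scoring. The criteria, weights, and candidate numbers here are invented for illustration; the framework's actual matrix dimensions are not specified in this overview:

```python
# Hypothetical criteria and weights; the real matrix's dimensions may differ.
WEIGHTS = {"impact": 0.4, "feasibility": 0.3, "risk_inverse": 0.2, "data_readiness": 0.1}

def score(workflow: dict[str, float]) -> float:
    """Weighted sum over criteria (each scored 0-10), so agents can compare candidates."""
    return sum(WEIGHTS[c] * workflow.get(c, 0.0) for c in WEIGHTS)

candidates = {
    "invoice-triage": {"impact": 8, "feasibility": 9, "risk_inverse": 7, "data_readiness": 6},
    "contract-review": {"impact": 9, "feasibility": 4, "risk_inverse": 3, "data_readiness": 5},
}

# Agents produce the ranking; humans greenlight what actually gets built.
ranked = sorted(candidates, key=lambda w: score(candidates[w]), reverse=True)
```

Keeping the scores machine-readable is what lets Track re-run this comparison automatically when capabilities change.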

E — Engineer

Engineer the solution — build it, prove it, ship it. Every workflow passes through a parallel-run phase where the agent shadows the human before going live. Trust is engineered through evidence, not promises.

N — Nurture

Nurture what's live. Monitor workflows in production — every human override is a data point, every correction is an instruction. The system learns and the specification evolves. This is how you nurture accuracy over time.
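One way to treat overrides as data, sketched with assumed record shapes: log every human correction, and flag the specification for revision once corrections cluster on a single step.

```python
from collections import Counter

overrides: list[dict] = []  # every human override is a data point

def record_override(step: str, agent_output: str, human_correction: str) -> None:
    """Capture what the agent did and what the human changed it to."""
    overrides.append({"step": step, "agent": agent_output, "human": human_correction})

def steps_needing_spec_revision(threshold: int = 3) -> list[str]:
    """Steps where corrections cluster are candidates for a spec revision."""
    counts = Counter(o["step"] for o in overrides)
    return [step for step, n in counts.items() if n >= threshold]

record_override("route", "finance", "legal")
record_override("route", "finance", "legal")
record_override("route", "finance", "legal")
record_override("classify", "urgent", "routine")
# Three corrections on "route" -> that step's spec should evolve.
```

The threshold and record shape are placeholders; the point is that corrections feed back into the specification rather than disappearing.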

T — Track

Track the frontier. New capabilities emerge constantly — Track watches for them and resurfaces workflows that weren't ready before. It also owns the decommission path. Without Track, the pipeline runs once and stops. With it, it compounds.
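Track's resurfacing step can be sketched as re-checking vaulted workflows against newly available capabilities. The capability names and vault shape below are invented for illustration:

```python
# Workflows vaulted earlier, each with the capability it was waiting on (names invented).
vault = {
    "meeting-summaries": "long-context-transcription",
    "contract-review": "reliable-legal-citation",
}

def resurface(available_capabilities: set[str]) -> list[str]:
    """Return vaulted workflows whose missing capability has since arrived."""
    return [w for w, needed in vault.items() if needed in available_capabilities]

# A new model release ships long-context transcription:
ready = resurface({"long-context-transcription"})
# "meeting-summaries" goes back onto the dashboard for rescoring.
```

This is the loop that makes the pipeline compound instead of running once and stopping.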

A conversation becomes a system

No large tooling migration. No advanced AI maturity required. Start with a recorded conversation, end with a compounding pipeline.

Recorded conversation → Agent extraction → Scored dashboard → Specification → Collaboration model → Sandbox → Parallel-run → Live monitoring → Override capture → Spec revision → Vault → Resurfacing

Agents handle the orchestration at each handoff. That's what makes this executable by a small team.

An operating model, not a project

The framework is deliberately not a one-time engagement. It's designed to compound.

Not a one-time project
An operating model for ongoing AI adoption that compounds over time. Every workflow shipped makes the next one faster.
Not about replacing people
A collaboration design system. Humans decide where the judgement calls live. The framework makes that decision structured.
Not "document everything first"
Start with a conversation. Go deep only where the dashboard says it's earned. Most workflows surface in a single exploratory session.
Not tool-specific
Nothing creates lock-in. The framework works regardless of which AI tools, models, or platforms you use today or tomorrow.

Built in the open. Refined through application.

This isn't a finished product behind a paywall. It's a working framework — openly published, actively developing, and being tested across real organisations right now.

It's built from first principles, drawing on experience across digital transformation, product management, design thinking, and hands-on workflow mapping. It's currently being applied at two very different organisations, and they're hitting the same wall: not "should we use AI?" but "how do we actually do this properly?"

The framework will change. Resolution will sharpen as it meets more reality. Some parts are battle-tested, others are well-reasoned but unproven. That's by design — a framework that claims to be finished is a framework that stopped listening.

You're welcome to use it, adapt it, challenge it, and feed back what you find. That's how it gets better.

Want the full detail?

Every pipeline stage, both streams, templates, worked examples, and the thinking behind it. Openly shared and continuously updated.

Read the full framework