Here's the question that stops most AI adoption efforts before they produce anything: where do you start? You have an entire organisation full of workflows, processes, and tasks. Some are candidates for AI agents. Most aren't. And you have no reliable way to tell which is which without investing weeks of detailed analysis into each one.
That's the trap. The detailed analysis is expensive. If you do it on the wrong workflows, you've burned weeks producing specifications for things that weren't worth building. If you skip it entirely and jump to building, you automate the wrong things and discover that later, painfully.
Kickoff is the AGENTIC Framework's answer to this. It's the stage that comes before the AGENT Pipeline. Before Assess, before specification, before scoring, before any commitment to build. Two passes, one day per function. The first pass is fast: list your tasks and your tech stack, and a filtering agent tells you where to look. The second pass goes deep on the ones that earned it. The output is a scored shortlist of what's actually worth the pipeline.
Why most AI adoption starts in the wrong place
The default approach to AI adoption in most organisations goes something like this: leadership decides to "use AI," someone identifies a workflow that looks automatable, and a team starts building. The workflow was chosen because it's visible, or because someone in a meeting said "that should be automated," or because a vendor demo made it look easy.
Nobody spent a day talking to the people who actually do the work across the function. Nobody compared that workflow against twenty others that might have been better candidates. Nobody checked whether the obvious choice was obvious for good reasons or just because it was the first one someone mentioned.
And there's a subtler version of this problem: plenty of people know AI matters but have no idea where it applies to them. The urban design firm, the conservation field team, the architectural studio. They look at AI agent frameworks and think: this isn't for me. I don't know where to start. The problem isn't unwillingness. It's that the entry point is missing.
The biggest waste in AI adoption is doing deep analysis on the wrong workflow. Kickoff exists so you only do the deep work on the workflows that earned it.
I built Kickoff because I watched this happen. At a marine conservation non-profit, the leadership team had a list of "AI projects" before anyone had spoken to the people doing the work. When we actually ran the conversations, the highest-impact workflows weren't on the list. The things people spent the most time on, resented the most, and would have handed off tomorrow weren't the things management had identified. They were buried in the daily reality that nobody with a bird's-eye view could see.
Pass one: the task list sweep
Kickoff starts simpler than most people expect. No detailed workflow analysis. No deep-dive conversation. Just two things: what's your tech stack, and what do you actually do in a week? It's still a conversation, still recorded, but the goal is breadth, not depth.
The tech stack matters more than people realise. Whether an organisation runs on Google Workspace, Microsoft 365, Notion, Slack, a specific CRM, or some combination of them changes what's possible. Many of these tools already have AI capabilities built in that aren't being used. A Copilot workflow in Microsoft 365, an AI-populated column in Notion, a Slack automation. The filtering agent needs to know what's available before it can assess what's viable.
Then: list the tasks. Not the detailed process steps. Just the tasks. I update the CRM. I create a weekly report. I draft comms. I compile data into a summary. I respond to enquiries. Just the name and rough description of every recurring task in the function.
You don't need to understand AI to participate in the task list sweep. You just need to describe your week. The filtering agent does the rest.
The filtering agent takes the task list and the tech stack and runs a capability filter. Which of these tasks involve patterns that current AI handles well? Which ones could be improved using tools the organisation already pays for? It doesn't need to understand the detailed workflow. It needs the task name, rough frequency, the tools involved, and whether the output is internal or external. That's enough to produce a ranked shortlist: these are the ones worth talking through in detail.
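To make the shape of that input concrete, here's a minimal sketch in Python. The field names and the prompt wording are my own illustration, not a fixed part of the framework, and the actual agent call is omitted; any LLM client would sit behind it.

```python
from dataclasses import dataclass

@dataclass
class TaskEntry:
    """One line from the task list sweep: name and rough shape, nothing more."""
    name: str                 # e.g. "I update the CRM"
    description: str          # one-sentence rough description
    frequency: str            # "daily", "weekly", "monthly"
    tools: list[str]          # tools touched, e.g. ["HubSpot", "Google Sheets"]
    external_output: bool     # does the output leave the organisation?

def build_filter_prompt(tasks: list[TaskEntry], tech_stack: list[str]) -> str:
    """Assemble the context the filtering agent needs: every task plus the stack."""
    task_lines = "\n".join(
        f"- {t.name} ({t.frequency}; tools: {', '.join(t.tools)}; "
        f"{'external' if t.external_output else 'internal'} output): {t.description}"
        for t in tasks
    )
    return (
        "Tech stack: " + ", ".join(tech_stack) + "\n"
        "Tasks:\n" + task_lines + "\n\n"
        "Rank these tasks by how well current AI handles the pattern involved, "
        "and flag any that could use AI features already built into the listed tools."
    )
```

Notice how little the filter needs: no process maps, no step-by-step detail, just the rough shape of each task and the tools around it.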
This first pass is the entry point that most frameworks are missing. Someone who runs an architectural firm and has never thought about AI adoption can sit down, list their week's tasks, name their tools, and get back a prioritised list of where to look. No AI literacy required. No jargon. The filtering agent translates between "here's what I do" and "here's where AI could help."
The task list sweep also produces something valuable even for the tasks that don't get flagged: a high-level inventory of what the function does. That inventory feeds the AGENTIC Vault. When capability changes later, Track can resurface tasks from the inventory that have become feasible. The data doesn't expire.
Pass two: the deep conversation
The filtering agent produces a shortlist. These are the tasks worth investigating. Now you go deep, but only on these ones.
Pick a function or role. Finance, operations, legal, field management: wherever the filtering agent flagged the most candidates. Then talk to the people who do the work. Not a structured interview. Not a form. A conversation about the specific tasks the filter identified.
The framing is simple: walk me through this task end to end. What triggers it, what tools do you use, where do you make judgment calls, what breaks, what takes the longest? Then let them talk. The nuance lives in the unstructured parts of the conversation: the side details, the workarounds, the things people do that they don't think of as "steps" but absolutely are.
Hit record and let people ramble. The nuance lives in the ramble: the workarounds, the eye-rolls, the "you're not gonna believe how this actually gets done."
Record the conversation. Voice recordings capture what structured interviews miss: tone, emphasis, emotional signals about which tasks people genuinely resent versus which they tolerate. Those signals matter. They tell you where adoption will be easy and where it will need work. If someone is showing you their screen, capture that too. Screen recordings reveal steps that have become so automatic the person forgets to mention them.
For each workflow, capture the broad strokes: what triggers it, how often it runs, roughly how long it takes, who's involved, how painful it is, where judgment calls live, what breaks, and whether it depends on one person's knowledge. You're painting with a wide brush. The fine detail comes later, at Assess, and only for the workflows that earn it.
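As a sketch of what "broad strokes" means in practice, here's one way to hold that profile. The field names are illustrative, not a formal schema from the framework.

```python
from dataclasses import dataclass

@dataclass
class WorkflowSketch:
    """Broad-strokes profile of one workflow surfaced in the pass-two conversation."""
    name: str
    trigger: str                  # what kicks it off
    frequency: str                # how often it runs
    duration_estimate: str        # rough time per run, e.g. "2-3 hours"
    people_involved: list[str]
    pain_level: int               # 1-5, inferred from tone as much as words
    judgment_calls: list[str]     # where a human decision is made
    failure_points: list[str]     # what breaks
    key_person_dependency: bool   # does it live in one person's head?
    notes: str = ""               # workarounds, side details, the ramble
```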
Score what surfaces
The conversation recordings are transcribed and fed to an agent that extracts the workflows, restructures the insights, and builds a profile of each one: who owns it, how they feel about it, where the pain is, how ready they are for change. Rambling conversation in, structured data out. That data goes into the AGENT Prioritisation Matrix, where the assessment agent scores each workflow on current AI capability, complexity, risk indicators, and organisational readiness signals.
The output is a rough ranking. Not the full scoring that happens at Greenlight. These are preliminary scores that answer one question: which workflows are worth the full Assess treatment?
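A minimal sketch of that preliminary ranking is below. The four dimensions come from the matrix described above; the 1-5 scale, the weighting, and the helper names are assumptions for illustration only, not the Greenlight scoring.

```python
from dataclasses import dataclass

@dataclass
class PreliminaryScore:
    """Rough ranking inputs for one workflow, well short of full Greenlight scoring."""
    workflow: str
    ai_capability: int    # 1-5: how well current AI handles this pattern
    complexity: int       # 1-5: higher means more moving parts
    risk: int             # 1-5: regulatory, financial, or external exposure
    readiness: int        # 1-5: how ready the owner is for change

    def rank_value(self) -> float:
        # Illustrative weighting only: favour capability and readiness,
        # penalise complexity and risk.
        return (2 * self.ai_capability + self.readiness) - (self.complexity + self.risk)

def shortlist(scores: list[PreliminaryScore], top_n: int = 5) -> list[PreliminaryScore]:
    """Answer Kickoff's one question: which workflows earn the full Assess treatment?"""
    return sorted(scores, key=lambda s: s.rank_value(), reverse=True)[:top_n]
```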
Test before you commit
For the top candidates, the assessment agent can run a quick capability test: try solving the workflow, or a key part of it, with a single well-crafted prompt. This takes minutes, not weeks.
The results fall into three buckets. Some tasks hit high accuracy with a single prompt and are strong pipeline candidates. Some can be handled entirely by a prompt-based solution and might not need the pipeline at all. Others break on the first real batch because of undocumented rules or edge cases, and those need the full Assess treatment.
Each result tells you something different. Sometimes the best outcome from a Kickoff scan is a well-crafted prompt deployed in a week, not a six-month pipeline project. Simple solutions are real outcomes.
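A sketch of how those three buckets might be assigned after a test run. The accuracy thresholds are placeholders; the judgment about whether a prompt handles the whole workflow would come from reviewing the test output, not from this function.

```python
from enum import Enum

class CapabilityTestResult(Enum):
    PIPELINE_CANDIDATE = "high accuracy with one prompt: strong pipeline candidate"
    PROMPT_ONLY = "fully handled by a prompt: may not need the pipeline at all"
    NEEDS_ASSESS = "breaks on real data: undocumented rules, send to full Assess"

def classify_capability_test(accuracy: float, fully_handled: bool) -> CapabilityTestResult:
    """Bucket a single-prompt capability test. Thresholds are illustrative."""
    if fully_handled and accuracy >= 0.95:
        return CapabilityTestResult.PROMPT_ONLY
    if accuracy >= 0.80:
        return CapabilityTestResult.PIPELINE_CANDIDATE
    return CapabilityTestResult.NEEDS_ASSESS
```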
What Kickoff feeds into
Here's the full flow from function scan to pipeline entry: task list sweep → filtering agent shortlist → deep conversations → extraction and preliminary scoring → quick capability tests → scored shortlist → Assess.
The scored shortlist is what enters Assess: the stage where you map the workflow in detail, surface tacit knowledge, fix what's broken, and produce the machine-readable specification. Kickoff deliberately avoids that level of detail. Its job is signal, not specification. "Is this worth investigating?" not "How does this work step by step?"
Kickoff is also not a one-time exercise. After the first batch of workflows goes through the pipeline, the assessment agent goes back through the Kickoff data and resurfaces the next candidates. As AI capability evolves, workflows that were "not yet" become "focus here." The task list inventory from pass one means there's always a backlog of possibilities waiting for the technology to catch up. New scans add more data. This intake loop keeps the pipeline fed continuously.
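A sketch of that resurfacing loop over the stored inventory follows. The inventory shape and the status labels are assumptions; `is_now_feasible` stands in for a fresh run of the filtering agent against current capability.

```python
from typing import Callable

def resurface_candidates(
    inventory: list[dict],
    is_now_feasible: Callable[[dict], bool],
) -> list[dict]:
    """Re-scan the Kickoff task inventory and return tasks whose status flips
    from 'not yet' to 'focus here' as AI capability changes."""
    newly_feasible = []
    for task in inventory:
        if task.get("status") == "not yet" and is_now_feasible(task):
            task["status"] = "focus here"
            newly_feasible.append(task)
    return newly_feasible
```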
The Kickoff data also feeds two other streams in the AGENTIC Framework. When someone describes a workflow that touches regulatory filings or financial reporting, that's an early signal for the AI Governance Stream. When the conversation reveals enthusiasm or resistance, that's data for the AI Adoption Stream. Kickoff is where the relationship with the people who do the work begins.
What I've learned running Kickoff scans
Three things show up every time.
First: the workflows management identifies as AI candidates are rarely the best ones. They're the most visible, not the most impactful. The best candidates are usually buried in someone's daily grind, invisible to anyone who doesn't do the work. The task list sweep catches these because it doesn't ask "what should we automate?" It asks "what do you do?" Those are different questions with different answers.
Second: people will tell you everything if you let them talk. The structured interview format, where you ask specific questions and get specific answers, misses the richest data. The workarounds, the frustrations, the informal systems that actually run the organisation: these come out when people relax and describe their day. Record it. Let the AI do the structuring.
Third: the tech stack reveals more than people expect. Someone lists "I update the CRM weekly" and the filtering agent knows that CRM already has an AI feature that could automate half of that update. The conversation in pass two confirms it, and the result is a configuration change, not a custom build. Most organisations are paying for AI capabilities in their existing tools that they aren't using. The task list sweep, combined with the tech stack, surfaces these immediately.
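As a trivially simple sketch of that lookup: the feature map below is invented for illustration (in practice the filtering agent carries this knowledge itself, and real entries would reflect the organisation's actual stack and current vendor capabilities).

```python
# Hypothetical map of built-in AI features per tool; entries are examples only.
BUILT_IN_AI_FEATURES = {
    "HubSpot": ["AI-assisted record updates", "email drafting"],
    "Notion": ["AI-populated database columns", "summarisation"],
    "Microsoft 365": ["Copilot workflows"],
}

def existing_capabilities(task_tools: list[str]) -> dict[str, list[str]]:
    """Return the built-in AI features available in the tools a task already touches."""
    return {tool: BUILT_IN_AI_FEATURES[tool]
            for tool in task_tools if tool in BUILT_IN_AI_FEATURES}

# "I update the CRM weekly" -> the CRM already covers part of that update.
print(existing_capabilities(["HubSpot", "Google Sheets"]))
```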
Kickoff takes a day per function and produces a scored shortlist. That shortlist is the difference between an AI adoption programme that starts with the right workflows and one that discovers, months later, that it started with the wrong ones.