How to Run an AI Opportunity Assessment That Works


A use case spreadsheet is not an assessment.

If you have not read why most AI opportunity assessments fail, start there. This piece is the prescription. That one is the diagnosis.

Start from the work, build up to the tools

A good assessment begins by mapping the workflows. Not by surveying the vendor landscape. Not by running a use case brainstorm with leadership. By having structured conversations across a meaningful sample of employees.

Static surveys do not work for this. They are too flat. They force people into categories before you know whether the categories are right. And they miss the sentence where the real insight lives.

Someone says, "I spend too much time preparing the weekly report."

A survey records: reporting pain.

A proper follow-up asks: what report, for whom, from which systems, how often, what part takes longest, who checks it, what happens if it is wrong, what judgment do you add, which part would you happily never do manually again?

That is where the opportunity starts to show itself.

The output should look less like a list of AI ideas and more like a map of operational reality. What tasks repeat. Which tools are involved. Where handoffs happen. Where work waits for approval. Where people copy data between systems. Where they are already using AI. Where they tried AI and stopped. Where the bottleneck is actually a policy problem, not a technology problem.
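To keep that map concrete rather than anecdotal, it helps to capture each conversation as a structured record instead of a flat survey response. Below is a minimal sketch in Python; the field names and example values are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class WorkflowObservation:
    role: str              # who does the work
    task: str              # what repeats, in their own words
    tools: list[str]       # systems touched along the way
    handoffs: list[str]    # who receives or approves the output
    frequency: str         # e.g. "weekly", "per deal"
    bottleneck: str        # where the work waits or drags
    judgment_added: str    # the human call the task still needs
    already_uses_ai: bool  # existing, often unofficial, AI use
    policy_problem: bool   # the bottleneck is policy, not technology

# The weekly-report example from earlier, captured as one record
# (details invented for illustration).
report_prep = WorkflowObservation(
    role="operations analyst",
    task="prepare the weekly performance report",
    tools=["CRM", "spreadsheet", "email"],
    handoffs=["team lead review", "VP sign-off"],
    frequency="weekly",
    bottleneck="copying data between systems",
    judgment_added="flagging anomalies before the numbers go out",
    already_uses_ai=False,
    policy_problem=False,
)
```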

Then, and only then, match the work to AI capability. Start from the work. Build up to the tools.

Look at the workflow before you classify it

When I look at a workflow, I want to know whether the inputs are structured or messy. Whether quality can be judged clearly. The cost of a mistake. Whether the task needs human accountability. Whether the process is stable enough for automation, or whether AI would just accelerate a bad process.

That last one is painful.

A lot of teams want AI to fix workflows nobody has had the courage to redesign. AI will not fix unclear ownership. It will not resolve political approval chains. It will not make bad CRM data trustworthy. It will not create agreement on what "good" means.

It may make all of that faster. Faster confusion is not transformation.

A useful assessment separates work into different buckets. Some work can be automated now. Some should be augmented with a human in the loop. Some needs process redesign first. Some needs training before tools. Some should stay human for now because trust, judgment, or accountability matters too much.

That classification is what should feed the AI adoption roadmap.
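If it helps to make the buckets explicit, here is one way they could be encoded so each mapped workflow carries its disposition straight into the roadmap. The labels mirror the categories above; the example entries are invented for illustration.

```python
from enum import Enum

class Disposition(Enum):
    AUTOMATE_NOW = "can be automated now"
    AUGMENT = "augment with a human in the loop"
    REDESIGN_FIRST = "needs process redesign first"
    TRAIN_FIRST = "needs training before tools"
    STAY_HUMAN = "stays human for now"

# Hypothetical mapped work, each item tagged with an explicit disposition.
roadmap_input = {
    "weekly report preparation": Disposition.AUGMENT,
    "copying CRM data between systems": Disposition.REDESIGN_FIRST,
    "routine meeting-notes summaries": Disposition.AUTOMATE_NOW,
}
```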

The speed problem

The old timeline is broken. Companies spend months interviewing stakeholders, running workshops, cleaning survey data, writing findings, preparing the readout. By the time the report lands, the sponsor is tired, the tools have changed, and employees have already found their own unofficial ways to use AI.

The company thinks it is still preparing. Its people have already moved.

Companies need more context than before, but they need it faster than their normal transformation process allows. A shallow assessment is fast and usually useless. A deep assessment is useful and usually late.

The answer is not a two-hour executive workshop. Those are fine for alignment. They are terrible for ground truth. The loudest people speak. The most available people attend. The real workflow expert is often not in the room.

The better answer is to make discovery itself faster. Interview more people asynchronously. Ask adaptive follow-ups. Structure the answers as you go. Look for patterns by role, team, workflow, and bottleneck. Give leaders a first usable readout in days, not months.
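Structuring the answers as you go can be as simple as tallying them as they arrive. A minimal sketch, with invented sample data, of surfacing which bottlenecks recur across roles before any report is written:

```python
from collections import Counter

# Invented sample answers; in practice these come from the interviews.
answers = [
    {"role": "sales ops", "workflow": "pipeline reporting", "bottleneck": "manual data entry"},
    {"role": "finance", "workflow": "month-end close", "bottleneck": "waiting on approvals"},
    {"role": "marketing", "workflow": "campaign reporting", "bottleneck": "manual data entry"},
]

# Tally bottlenecks so recurring patterns surface early.
for bottleneck, count in Counter(a["bottleneck"] for a in answers).most_common():
    print(f"{bottleneck}: {count} mention(s)")
```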

This is why the first layer of any AI program should be a context layer. Before reports. Before training. Before tooling decisions. Before automation roadmaps.

You need to know what people actually do.

Once that context exists, action becomes much easier. Executive reports can reflect reality. Team-level training can map to actual workflows. Automation gets prioritized by real bottlenecks, not imagination. Decisions about where agents make sense, and where human judgment still protects the business, land in the right place.

Without that context, every next step is weaker.

Where to start

The next AI opportunity assessment you run should start with one constraint: no use case list until you have mapped the work. Start with 25 people. Interview them properly. Capture tasks, tools, handoffs, bottlenecks, objectives, repetitive work, exceptions, judgment calls.

Then compare that reality to what AI can and cannot do today.

If you start with the AI landscape, you will probably get a better-looking report. If you start with the work, you have a much better chance of changing something real.