Why Most AI Opportunity Assessments Fail

Green Flower

The worst AI workshop I ever saw produced 47 use cases.

The deck looked serious. Sales had ideas. HR had ideas. Finance had ideas. Legal had ideas. Three weeks later, nobody could answer the only question that mattered: which team should change how it works first.

That is the problem with most AI opportunity assessments. They create inventory. They do not create operational truth.

The standard approach, and why it produces shelf reports

Most companies start their assessment from the AI landscape. Someone collects examples — copilots, contract review, meeting summaries, sales email generation, knowledge search, invoice processing, recruiting screeners. Departments are asked to suggest opportunities. The output is a spreadsheet with columns for business value, feasibility, risk, ROI, owner, timeline. If the team is mature, there is a scoring model. If very mature, a heatmap.
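For concreteness, the scoring model usually amounts to something like the sketch below. Everything here is hypothetical: the weights, the 1-to-5 ratings, and the use cases are made up to show the mechanics, not to recommend them.

```python
# Minimal sketch of the standard AI use-case scoring model.
# Weights and ratings are invented; ratings are on a 1-5 scale.
WEIGHTS = {"business_value": 0.4, "feasibility": 0.3, "risk": -0.2, "effort": -0.1}

use_cases = [
    {"name": "Sales email generation", "business_value": 4, "feasibility": 5, "risk": 2, "effort": 2},
    {"name": "Contract review",        "business_value": 5, "feasibility": 3, "risk": 4, "effort": 4},
    {"name": "Meeting summaries",      "business_value": 3, "feasibility": 5, "risk": 1, "effort": 1},
]

def score(use_case):
    # Weighted sum: risk and effort carry negative weights,
    # so high-risk, high-effort items sink in the ranking.
    return sum(weight * use_case[key] for key, weight in WEIGHTS.items())

# Rank highest-scoring use cases first -- this ordered list is the
# "heatmap" most assessments deliver.
for uc in sorted(use_cases, key=score, reverse=True):
    print(f"{uc['name']}: {score(uc):.2f}")
```

Notice what the model cannot express: nothing in those rows says which workflow is actually broken or who does the work. The ranking reorders hypotheses; it does not validate any of them.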

It feels like progress.

But these reports die because they start from the wrong place. They begin with what AI can do and work backward into the organization. That is backwards.

The question is not "where could we apply AI?" AI can be applied almost anywhere now. The better question is: what is actually happening inside our workflows, and where would AI change something that matters?

The first time I ran one of these, I made the same mistake. I led with the tools. I spent two weeks building a taxonomy of GPT use cases by department. Categories, examples, maturity levels, vendor notes. I was proud of it.

Then it met reality.

Sales did not need "AI-generated follow-up emails." They had a broken handoff between discovery notes, CRM fields, proposal drafts, and approval. Legal did not need "contract review AI." They needed to separate routine clauses from judgment-heavy exceptions, and figure out why intake requests were so poor before contracts even reached the team.

A generic use case is not an opportunity. It is a hypothesis. Sometimes useful. Often lazy.

The traditional AI readiness assessment falls short for the same reason. It asks whether the company has data, governance, security, sponsorship, employee awareness. Those things matter. But they do not tell you which workflow should change first. A company can be "ready" on paper and still have no idea where AI will create value.

The individual-level problem

If your assessment does not include employee-level workflow evidence, it should not be called an assessment.

Most AI programs are planned at the department level. Marketing needs AI. Sales needs AI. Finance needs AI. That is convenient because departments map to budgets and leaders. Work does not happen that cleanly.

Take finance. From far away, the opportunities look obvious: reporting, forecasting, invoice processing, reconciliation. Inside the team, the picture changes. One person spends hours cleaning vendor data because two systems disagree. Another writes commentary for leadership and spends more time explaining numbers than producing them. Another owns the spreadsheet nobody wants to touch because she understands all the edge cases. Someone else is already using ChatGPT quietly to rewrite explanations before sending them to the CFO.

Same department. Four different AI situations. One workflow ready for automation. One needs augmentation. One needs process cleanup first. One needs governance because shadow usage is already happening.

Compress all of that into "finance AI opportunity" and you lose the signal.

This is why I get skeptical when companies build an adoption roadmap from leadership interviews alone. Leadership sees priorities. Employees see friction. You need both. Skip the people doing the work and the roadmap becomes a polished guess.

Distance makes work look simpler than it is. Customer support becomes "answering tickets" until you get close and realize the hard part is knowing which policy applies, whether the customer is high-risk, what was promised last quarter. Recruiting becomes "screening CVs" until you see the vague hiring criteria, inconsistent managers, and judgment disguised as admin.

AI changes tasks before it changes org charts. It changes how people search, draft, compare, decide, explain, summarize, check, hand off, escalate.

The assessment has to get close enough to see those verbs. Not just roles. Not just departments. Verbs.