McKinsey Claims 80% of Companies Fail to Generate AI ROI. They’re Wrong.

Sometimes, you see a headline and just have to shake your head.  Sometimes, you see a bunch of headlines and need to scream into a pillow.  This week’s headlines on AI ROI were the latter:

  • Companies are Pouring Billions Into A.I. It Has Yet to Pay Off – NYT
  • MIT report: 95% of generative AI pilots at companies are failing – Forbes
  • Nearly 8 in 10 companies report using gen AI – yet just as many report no significant bottom-line impact – McKinsey

AI has slipped into what Gartner calls the Trough of Disillusionment. But for people working on pilots, it might as well be the Pit of Despair, because executives are beginning to declare AI a fad and deny ever having fallen victim to its siren song.

Because they’re listening to the NYT, Forbes, and McKinsey.

And they’re wrong.

 

ROI Reality Check

In 2025, private investment in generative AI is expected to increase 94% to an estimated $62 billion.  When you’re throwing that kind of money around, it’s natural to expect ROI ASAP.

But is it realistic?

Let’s assume Gen AI “started” (became sufficiently available to set buyer expectations and warrant allocating resources) in late 2022/early 2023.  That means we’re expecting ROI within two years.

That’s not realistic.  It’s delusional. 

ERP systems “started” in the early 1990s, yet providers like SAP still recommend five-year ROI timeframes.  Cloud computing “started” in the early 2000s, and yet, in 2025, “48% of CEOs lack confidence in their ability to measure cloud ROI.” CRM systems’ claims of 1-3 years to ROI must be weighed against their 50-70% implementation failure rate.

That’s not to say we shouldn’t expect rapid results.  We just need to set realistic expectations around results and timing.

Measure ROI by Speed and Magnitude of Learning

In the early days of any new technology or initiative, we don’t know what we don’t know.  It takes time to experiment and learn our way to meaningful and sustainable financial ROI. And the learnings are coming fast and furious:

Trust, not tech, is your biggest challenge: MIT research across 9,000+ workers shows automation success depends more on whether your team feels valued and believes you’re invested in their growth than on which AI platform you choose.

Workers who experience AI’s benefits first-hand are more likely to champion automation than those told, “trust us, you’ll love it.” Job satisfaction emerged as the second strongest indicator of technology acceptance, followed by feeling valued.  If you don’t invest in earning your people’s trust, don’t invest in shiny new tech.

More users don’t lead to more impact: Companies assume that making AI available to everyone guarantees ROI.  Yet of the 70% of Fortune 500 companies deploying Microsoft 365 Copilot and similar “horizontal” tools (enterprise-wide copilots and chatbots), none have seen any financial impact.

The opposite approach, deploying “vertical,” function-specific tools, doesn’t fare much better.  In fact, fewer than 10% make it past the pilot stage, despite their higher potential for economic impact.

Better results require reinvention, not optimization: McKinsey found that call centers giving agents access to passive AI tools for finding articles, summarizing tickets, and drafting emails saw only a 5-10% reduction in call time.  Centers using AI tools that automated tasks without agent initiation cut call time by 20-40%.

Centers reinventing their processes around AI agents? A 60-90% reduction in call time, with 80% of calls resolved automatically.

 

How to Climb Out of the Pit

Make no mistake, despite these learnings, we are in the pit of AI despair.  42% of companies are abandoning their AI initiatives.  That’s up from 17% just a year ago.

But we can escape if we set the right expectations and measure ROI on learning speed and quality.

Because the real concern isn’t AI’s lack of ROI today.  It’s whether you’re willing to invest in the learning process long enough to be successful tomorrow.

This AI Creativity Trap is Gutting Your Growth Capabilities

“We have to do more with less” has become an inescapable mantra, and goodness, are you trying.  You’ve slashed projects and budgets, “right-sized” teams, and tried any technology that promised efficiency and a free trial.  Now, all that’s left is to replace the people you still have with AI creativity tools.  Welcome to the era of the AI Innovation Team.

It sounds like a great idea.  Now, anyone with access to an LLM can be an innovator.  Heck, even innovation firms are “outsourcing” their traditional work to AI, promising the same radical results in less time and for far less money.

It sounds almost too good to be true.

Because it is too good to be true.

 

AI is eliminating the very brain processes that produce breakthrough innovations.

This isn’t hyperbole, and it’s not just one study.

MIT researchers split 54 people into three groups (ChatGPT users, search engine users, and a brain-only group using no online or AI tools) and asked them to write a series of essays.  Using EEG brain monitoring, they found that ChatGPT users’ brain connectivity in networks crucial for creativity and analogical thinking dropped by 55%.

Even worse? When people stopped using AI, their brains stayed stuck in this diminished state.

University of Arkansas researchers tested AI against 3,562 humans on a series of four challenges involving finding new uses for everyday objects, like a brick or a paperclip.  While AI scored slightly higher on standard tests, when researchers introduced a new context, constraint, or modification to the object, AI’s performance “collapsed.” Humans stayed strong.

Why? AI relies on pattern matching and can’t transfer its “creativity” to unexpected scenarios. Humans use analogical reasoning, so we can flex and adapt quickly.

University of Strasbourg researchers analyzed 15,000 COVID-19 studies and found that teams that relied heavily on AI experts produced research that earned fewer citations and less media attention. Papers that drew from diverse knowledge sources across multiple fields, however, became widely cited and influential.

The lesson? Breakthroughs require cross-domain thinking, which is precisely what diverse human teams provide and what, according to the MIT study, AI is unable to produce.

How to optimize for efficiency AND impact (and beat your competition)

While this seems like bad news if you’ve already cut your innovation team, the silver lining is that your competition is probably making the same mistake.

Now that you know better, you can do better, and that creates a massive opportunity.

Use AI for what it does well:

  • Data analysis and synthesis
  • Rapid testing and iteration to refine an advanced prototype
  • Process optimization

Use humans for what we do well:

  • Make meaningful connections across unrelated domains
  • Recognize when discoveries from one field apply to another
  • Generate the “aha moments” that redefine industries

 

Three Questions to Ask This Week
  1. Where did your most recent breakthroughs come from? How many came from connecting insights across different domains? If most of your innovations require analogical leaps, cutting creative teams could kill your pipeline.
  2. How are teams currently using AI tools? Are they using AI for data synthesis and rapid iteration? Good. Are they replacing human ideation entirely? Problem.
  3. How can you see it to believe it? Run a simple experiment: give two teams an hour to solve a breakthrough challenge, one with AI assistance and one without. Which solution is more surprising and has more breakthrough potential?

 

The Hidden Competitive Advantage

As AI commoditizes pattern recognition, human analogical thinking and creativity become a competitive advantage.

The companies that figure out the right balance will eat everyone else’s lunch.