A Forbes survey published in January 2026 dropped a number that should keep every executive up at night: 56% of CEOs say they’ve seen no measurable return on their AI investments. Only 12% report concrete profits. The rest are stuck in a costly middle ground — spending, experimenting, hoping.
The technology isn’t the problem. AI works. The problem is how companies measure — or fail to measure — what it actually does for them.
The Classic Mistake: Confusing Activity with Outcome
Most companies investing in AI track the wrong things. They count projects launched, data volumes processed, models trained. These are activity metrics, not outcome metrics. It’s like measuring conveyor belt speed without checking how many good parts come off the line.
In my work with service businesses and manufacturing operations, I see this pattern constantly. There’s a fundamental distinction between local efficiency and system efficiency that most AI implementations ignore entirely. You can deploy an AI model that processes requests ten times faster than a human. But if that model sits in a workflow where the real bottleneck is somewhere else — manual approval, data integration, a management decision — the net gain for the business approaches zero.
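A toy model makes the point concrete. The stage names and rates below are invented for illustration, but the math holds: in a serial pipeline, end-to-end throughput is capped by the slowest stage.

```python
# Sketch: end-to-end throughput of a serial process is capped by its
# slowest stage. Stage names and rates are hypothetical.

def system_throughput(service_rates):
    """A serial pipeline can't flow faster than its slowest stage."""
    return min(service_rates.values())

stages = {
    "intake": 40,          # requests/hour
    "ai_processing": 25,   # requests/hour
    "manual_approval": 8,  # requests/hour  <- the real bottleneck
}

print(system_throughput(stages))   # 8 requests/hour
stages["ai_processing"] = 250      # make the AI node 10x faster
print(system_throughput(stages))   # still 8: the bottleneck was never the AI node
```

Ten times faster at the wrong node is still zero improvement at the exit.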
The AI vendor shows you impressive demos. Your team reports faster processing times. The dashboard looks great. But revenue hasn’t moved. Sound familiar?
Queuing Theory Exposes the Real Problem
Queuing theory gives us the sharpest lens for understanding why so many AI implementations fail to produce visible returns. Think of any business process as a queuing system: demands arrive (customers, orders, tickets), wait in line, get processed by one or more servers (people, systems, machines), and exit as deliverables.
When a company deploys AI at a specific point in that system, it's essentially increasing service capacity at that node. But Little's Law, one of the foundational theorems of queuing theory, tells us that the average number of items in a system equals the arrival rate multiplied by the average time in system: L = λW. If you speed up one node but never measure total system time, you have no idea whether anything actually improved.
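A worked example, with purely illustrative numbers: rearranged as W = L/λ, the law lets you estimate total time in system from two quantities most operations already track.

```python
# Little's Law: L = lambda * W, so W = L / lambda.
arrival_rate = 20       # tickets arriving per hour (lambda)
items_in_system = 60    # average open tickets at any moment (L)

time_in_system = items_in_system / arrival_rate  # W, in hours
print(time_in_system)   # 3.0 hours per ticket, end to end

# An AI node that answers 10x faster only matters if this number falls.
# Same arrival rate, same 60 open tickets? Then W hasn't moved.
```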
This is exactly what happens in most corporate AI rollouts. A company speeds up customer service with chatbots but doesn’t measure total resolution time. It automates lead scoring with machine learning but doesn’t track final conversion rate. It deploys predictive maintenance analytics but never calculates the actual reduction in downtime.
I wrote about this dynamic extensively in my piece on scaling AI agents as a queuing problem — the math is unforgiving. Speed at one node without system-level thinking is just expensive noise.
What the Profitable 12% Do Differently
The companies reporting positive AI ROI share three characteristics, according to the Forbes analysis, which draws on data from IBM and PwC:
They embed AI in the process, not on top of it. Profitable companies didn’t bolt AI onto existing workflows. They redesigned the entire process around what AI can do. In queuing terms, they didn’t just increase service rate — they changed the queue discipline and system topology.
They measure throughput, not volume. Instead of tracking how much data AI processed, they measured how many business outcomes were generated per unit of time. Real throughput: revenue per operating hour, tickets resolved per complete cycle, leads converted per period.
They benchmark before they implement. Before putting AI anywhere, they mapped the current state with precision. Average processing time, demand arrival rate, resource utilization, average time in queue. Without that baseline, any measurement of “improvement” is fiction.
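Even the simplest textbook model forces those baseline numbers onto paper. The sketch below assumes a single-server M/M/1 queue (Poisson arrivals, exponential service), conditions real processes rarely satisfy exactly; treat it as a first approximation, not a measurement.

```python
# Crude baseline sketch under M/M/1 assumptions. Numbers are illustrative.

def mm1_baseline(arrival_rate, service_rate):
    assert arrival_rate < service_rate, "unstable queue: demand exceeds capacity"
    rho = arrival_rate / service_rate         # utilization
    w = 1 / (service_rate - arrival_rate)     # avg time in system
    wq = rho / (service_rate - arrival_rate)  # avg time waiting in queue
    l = arrival_rate * w                      # Little's Law: avg items in system
    return {"utilization": rho, "time_in_system": w,
            "time_in_queue": wq, "items_in_system": l}

# 12 requests/hour arriving against a capacity of 15/hour:
print(mm1_baseline(arrival_rate=12, service_rate=15))
# utilization 0.8, ~20 min in system, ~16 min of that just waiting, 4 items in flight
```

Whatever "improvement" AI delivers later gets judged against these numbers, not against a demo.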
Four Steps to Measure AI ROI for Real
For anyone who wants to move from the 56% to the 12%, the path runs through four concrete steps:
Map the queuing system of your target process. Where does demand enter? Where does it wait? Where is it processed? Where does it exit? What’s the total customer time in system? Where’s the real bottleneck? Is AI attacking the bottleneck or a secondary point?
Establish system metrics, not component metrics. AI ROI isn’t model speed. It’s the change in total system performance. Did average resolution time drop? Did throughput increase? Was the freed human capacity redirected to higher-value work?
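In practice this is a before/after comparison at the system boundary. A minimal sketch, with hypothetical figures:

```python
# Judge ROI on system deltas, not component speed. All figures hypothetical.

baseline = {"avg_resolution_hours": 18.0, "tickets_per_week": 420}
after_ai = {"avg_resolution_hours": 16.5, "tickets_per_week": 460}

resolution_delta = 1 - after_ai["avg_resolution_hours"] / baseline["avg_resolution_hours"]
throughput_delta = after_ai["tickets_per_week"] / baseline["tickets_per_week"] - 1

print(f"Resolution time: {resolution_delta:+.1%}")  # +8.3% faster
print(f"Throughput:      {throughput_delta:+.1%}")  # +9.5% more
# If both deltas sit near zero, the model's own speed is irrelevant.
```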
Calculate total cost of ownership. Include not just the AI license or development cost, but team training, process change management, ongoing model maintenance, data costs. Most companies underestimate these by 40-60%, according to Constellation Research data.
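A back-of-the-envelope sketch shows how quickly the hidden items dominate. Every line item below is hypothetical:

```python
# Sketch of a fuller TCO. The license is often the minority of the bill.

tco = {
    "license_or_build": 120_000,
    "team_training": 35_000,
    "process_change_mgmt": 50_000,
    "model_maintenance_yr": 40_000,
    "data_pipeline_costs": 30_000,
}

visible = tco["license_or_build"]
total = sum(tco.values())
print(f"Visible cost: ${visible:,}")
print(f"Actual TCO:   ${total:,}  ({total / visible:.1f}x the license)")
```

On these invented numbers the full bill is more than double the visible line, which is exactly how a 40-60% underestimate happens.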
Measure in short cycles with feedback. Don’t wait 12 months to evaluate. Set checkpoints at 30, 60, and 90 days. If there’s no signal of system throughput improvement in the first 30 days, something is wrong with the implementation — not the technology.
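A minimal version of that checkpoint discipline might look like this; the threshold and figures are illustrative, and you'd calibrate the threshold to your baseline's normal variance:

```python
# 30/60/90-day checkpoints against the pre-AI baseline. Figures hypothetical.

baseline_throughput = 420                  # tickets/week before AI
checkpoints = {30: 425, 60: 455, 90: 470}  # observed tickets/week
min_signal = 0.03                          # require at least +3% to call it a signal

for day, observed in checkpoints.items():
    lift = observed / baseline_throughput - 1
    status = "signal" if lift >= min_signal else "no signal, investigate"
    print(f"Day {day}: {lift:+.1%} ({status})")
```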
The AI Productivity Paradox
We’re living through a paradox similar to what Robert Solow identified in the 1980s with computers: “You can see the computer age everywhere but in the productivity statistics.” In 2026, we see AI everywhere except on company balance sheets.
The difference is that this time we have more sophisticated analytical tools to measure impact. Queuing theory, combined with Lean Six Sigma, provides a robust framework for isolating where AI generates real value and where it’s just technological theater.
The question every CEO should be asking isn’t “are we using AI?” — it’s “in which queue in our system has AI reduced wait time, and how much is that worth in revenue?”
The answer to that question separates the 12% who profit from the 56% who just spend.
JJ Andrade is a Business Performance Engineer, author of the “Combining Lean Six Sigma and Queuing Theory” series, and founder of JJ Andrade LLC. He specializes in performance engineering and applied queuing theory for businesses navigating the AI transition.