
The Absorption Crisis

Enterprise AI investment has never been higher. Enterprise AI competence has never lagged further behind.


Executive Summary

Over the past week, the AI industry presented two contradictory sets of evidence. On the investment side: Meta raised its 2026 capex forecast to $145 billion, Citigroup lifted its AI market forecast to over $4 trillion, and both Amazon and Alphabet beat cloud revenue estimates on AI demand. On the adoption side: a study of Australian office workers found 70% use AI but almost none can use it effectively, African board leaders reported failing to achieve measurable AI returns, and major insurers began excluding AI liability from standard policies. The bottleneck is no longer compute, capital, or model capability. It is organizational absorption: the capacity of institutions to convert AI deployment into measurable outcomes.


01

The Numbers Say Growth. The Ground Says Confusion.

Record Capital, Record Uncertainty

The scale of AI infrastructure investment in Q1 2026 is difficult to overstate. Meta raised its full-year capital expenditure guidance to as much as $145 billion, a figure that unnerved investors even as the company reported strong revenue growth. Alphabet poured $40 billion into Anthropic. Amazon and Google Cloud both beat quarterly revenue estimates on AI demand. Georgia committed $16 billion to AI data centers. The capital is flowing at a rate that makes the early cloud buildout look like a rounding error.

And yet the question that followed every earnings call was the same: can demand sustain these costs? Meta's stock slid on its capex announcement. An American Affairs Journal analysis examined the structural sustainability of LLM economics. The market is doing something unusual: pouring record capital into a sector while simultaneously questioning whether the demand side can absorb it.

The Deployment-Fluency Gap

This skepticism is grounded in real evidence. Microsoft announced that Copilot surpassed 20 million enterprise users, with engagement rivaling Outlook. That sounds like a success metric. But Outlook is mandatory workplace software. People open it because they have to. When a productivity AI tool's engagement matches a required email client, the question is whether workers are deriving value or simply clicking through prompts.

The Australian study made the problem concrete. Seventy percent of office workers reported using AI tools. Almost none demonstrated effective fluency. They could interact with a chatbot, but they could not structure a prompt for a complex analytical task, evaluate whether an AI output was reliable, or integrate AI into a multi-step workflow. The gap between "using AI" and "being competent with AI" is wide, and it is largely invisible to the dashboards tracking adoption.

The same pattern emerged in a TEXEM training program for African executives: board leaders are budgeting for AI but failing to achieve measurable results. The failure is not in the technology. The organizations purchased tools, allocated budget, and approved pilots. The failure is in the institutional capacity to turn those investments into operational change.


02

Contradictory Signals from the Top

Two CEOs, Two Opposite Forecasts

The clearest sign that the industry lacks a shared model for how AI changes work is the contradictory guidance coming from leadership. Snap CEO Evan Spiegel predicted that companies would pull resources away from software engineering in favor of AI tool adoption. In the same week, Amazon's cloud chief said AI will not replace engineering jobs and announced plans to hire 11,000 engineers.

These are not nuanced disagreements about the pace of change. They describe fundamentally incompatible visions of the near-term organizational future. If Spiegel is right, enterprises should be restructuring teams around AI tooling and reducing engineering headcount. If Amazon is right, they should be investing in engineering talent that builds with AI, not in tooling meant to replace it.

Both positions have data supporting them. A Trinity College and Microsoft study showed AI generating 5,000 hours of freed-up time in large firms. That's real productivity. But 5,000 hours across a large enterprise is a rounding error in total labor capacity. It is the kind of gain that gets absorbed into existing work patterns rather than enabling structural change. The productivity is real but incremental. The investment is anything but.

The Talent Paradox

Meanwhile, the AI talent war is reshaping organizational power structures. Senior executives are leaving established software firms for AI-native companies. And the most revealing data point: Big Tech companies are paying up to $1 million for communications hires who will never write a line of code. These roles exist to manage AI messaging and stakeholder relations, not to build AI systems. When the highest-paid new positions at AI companies are communications roles, the bottleneck is organizational narrative, not technical capability.

The talent paradox reveals the absorption crisis in its purest form. Companies have the models. They have the infrastructure. They have the budget. What they lack is the organizational tissue to connect those inputs to business outcomes: the training, the process redesign, the middle-management fluency, the measurement systems that distinguish AI usage from AI value.


03

The SaaS Reinvention as Evidence

From Dashboards to Agents

The absorption crisis is forcing a parallel transformation in the software industry itself. Established SaaS companies are reinventing their products because passive insights no longer justify enterprise spend. The old model, where software surfaces data and humans act on it, breaks when organizations cannot build the competence to act on AI-generated insights reliably.

Salesforce began separately reporting AI revenue through Agentforce Apps and Data 360 categories. This disclosure change signals that AI is shifting from a feature of existing products to a separately monetizable layer. Bloomberg built AskB, an AI agent for financial analysis, and publicly documented the organizational lessons from building it. The pattern is consistent: vendors are attempting to embed AI competence into the product itself because they cannot rely on customers having it internally.

  • The old SaaS model: Software collects data, generates dashboards, and waits for humans to interpret and act. Value depends on the human's ability to read the data correctly.
  • The new SaaS model: Software analyzes data, recommends actions, and in some cases executes them autonomously. Value depends on the AI's ability to act reliably and the human's ability to supervise.
  • The absorption problem: Neither model works if the organization cannot define what "reliable" means in its specific context. Vendors are shipping agentic features, but the receiving organizations lack the evaluation frameworks to know whether those agents are performing well.
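What an "evaluation framework" means in practice can be made concrete. The sketch below is a minimal illustration, not a reference to any vendor's product: every name in it is hypothetical, and the organization-specific part is the `check` function, which encodes what "reliable" means for one use case before any autonomy is granted.

```python
# Minimal sketch of an agent evaluation harness. All names are hypothetical.
# The key idea: "reliable" is defined per use case as an explicit check,
# and the agent's pass rate over golden cases is compared to a threshold.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    """One golden case: an input plus the check its output must pass."""
    prompt: str
    check: Callable[[str], bool]  # the organization's definition of "correct"

def evaluate(agent: Callable[[str], str],
             cases: list[EvalCase],
             threshold: float = 0.95) -> dict:
    """Run the agent over all golden cases and score it against the bar."""
    passed = sum(1 for c in cases if c.check(agent(c.prompt)))
    rate = passed / len(cases)
    return {"pass_rate": rate, "meets_bar": rate >= threshold}

# Toy stand-in agent, for illustration only.
def toy_agent(prompt: str) -> str:
    return "42" if "answer" in prompt else "unknown"

cases = [
    EvalCase("what is the answer", lambda out: out == "42"),
    EvalCase("unrelated question", lambda out: out == "unknown"),
]
result = evaluate(toy_agent, cases)
```

The point of the sketch is the division of labor: the vendor ships the agent, but only the deploying organization can write the `check` functions, because only it knows what a correct outcome looks like in its context.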

The Risk Layer

The absorption crisis has a financial shadow. Major insurers are now systematically excluding AI-related damages from standard policies, prompting the emergence of specialized AI liability products. This is the insurance industry's way of saying: we cannot price the risk because organizations deploying AI cannot demonstrate reliable governance over it.

At the same time, regulators are increasing enforcement on AI-washing claims, creating personal liability for board directors who overstate their organization's AI capabilities. The legal and insurance systems are beginning to penalize the gap between stated AI strategy and actual AI competence. Boards that announce AI transformation initiatives without building the institutional capacity to execute them face both regulatory and financial exposure.


04

What This Means for Builders

The absorption crisis reframes the competitive landscape. For the past two years, AI advantage was measured in model access, GPU allocation, and inference cost. Those inputs are rapidly commoditizing. IBM released Granite 4.1. An open-weights Chinese model beat Claude, GPT-5.5, and Gemini in competitive programming. The model layer is converging. The organizations that win from here are the ones that can absorb AI into operations faster and more reliably than competitors.

Three investments that matter more than your next model upgrade.

1

AI Fluency at Every Level

Stop treating AI training as an IT rollout. The 70% adoption-with-zero-fluency pattern means your people are interacting with AI systems they cannot evaluate. Build structured competency programs that teach prompt design, output evaluation, and failure mode recognition. Not for engineers. For every role that touches AI output.

2

Outcome Measurement, Not Usage Metrics

Twenty million Copilot users is an adoption metric, not a value metric. Build measurement systems that track what AI deployment changed: decision quality, cycle time reduction, error rates, revenue impact. If you cannot measure the outcome, you cannot justify the spend, and you will not survive the CFO's next budget review.
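The distinction between a usage metric and an outcome metric can be shown in a few lines. This is an illustrative sketch with invented numbers, not data from any of the studies cited above; the names and figures are placeholders.

```python
# Sketch contrasting a usage metric with an outcome metric.
# All numbers below are invented for illustration.

def usage_metric(active_users: int, licensed_users: int) -> float:
    """Adoption rate: measures clicks, says nothing about value delivered."""
    return active_users / licensed_users

def outcome_metric(cycle_hours_before: float, cycle_hours_after: float) -> float:
    """Fractional cycle-time reduction on a workflow AI was meant to speed up."""
    return (cycle_hours_before - cycle_hours_after) / cycle_hours_before

adoption = usage_metric(active_users=18_000, licensed_users=20_000)
reduction = outcome_metric(cycle_hours_before=40.0, cycle_hours_after=38.0)
# High adoption (90%) can coexist with a marginal outcome change (5%):
# the dashboard looks like success while the business case stays unproven.
```

The asymmetry is the argument of this section in miniature: the first number is easy to collect and easy to celebrate; the second requires a before-and-after baseline, which is exactly the measurement infrastructure most organizations have not built.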

3

Governance Before Autonomy

Insurers are excluding AI liability. Regulators are enforcing AI-washing claims. If your organization deploys agentic systems without evaluation frameworks, you are creating uninsurable risk. Define what "reliable" means for your specific use cases before expanding autonomous AI scope. The governance infrastructure must precede the capability deployment.

The absorption crisis will resolve. Organizations that invest in institutional competence now will compound their advantage as model capabilities continue to improve. Those that keep buying tools without building the capacity to use them will find themselves with expensive infrastructure and nothing to show for it. The gap between AI deployment and AI value is where enterprise differentiation lives for the next 18 months.

Closing the gap between AI investment and AI outcomes?

We help enterprises build the organizational infrastructure that turns AI deployment into measurable business results.

Schedule a Consultation