The Eight Shifts Reshaping Enterprise AI
Twelve months ago, the dominant question among senior technology leaders was whether AI would deliver enterprise value. That question is now settled. What has replaced it is harder: how to operate, govern, and scale in an environment where capability moves faster than the institutions trying to absorb it, where competitive advantage erodes in months rather than years, and where the economic, architectural, and security assumptions of the past decade are quietly being rewritten. What follows are eight observations from the front lines of that transition. They build toward a single conclusion: the enterprises pulling ahead are not the ones with the best models. They are the ones learning to make consequential decisions inside ambiguity.
1. The Messy Middle Is the Operating Reality, Not a Phase to Wait Out
- "We are in a messy middle - between what was and what will be - and we don't know what 'will be' looks like yet."
- "You can't afford to do everything, you can't afford to do nothing, and you can't afford to make a mess."
- "AI is not the horse, it is not even the cart - it is just one of the wheels."
Enterprises are operating without a clear end-state, and the temptation is to wait for one. The cost of waiting is high. The opposite temptation, pursuing every promising thread, produces fragmentation, sprawl, and waste. The organizations making progress are doing so without resolving the ambiguity. They are building enough discipline to act with conviction while leaving optionality intact.
The shift is from transformation as a destination to transformation as a continuous condition. AI is not the central narrative of enterprise change. It is one accelerant among several, operating alongside cost programs, restructuring, geopolitical pressure, and workforce evolution. Treating it as the whole story risks misallocating attention. Treating it as a side project risks irrelevance.
Questions:
- Where are we deferring decisions because the future is unclear - and what is that costing us?
- What would discipline look like in an environment where the end-state cannot yet be defined?
2. Competitive Advantage Windows Are Compressing to Months
- "Our competitive advantage is lasting months - not years - because capabilities are becoming commoditized so quickly."
- "If it's just using AI on available data, the advantage disappears in 6-12 months."
- "The competitive landscape is so fluid that it's hard to commit deeply to any one partner."
The half-life of AI-driven differentiation is collapsing. Capabilities that were proprietary six months ago are now table stakes. Models that justified premium investment have been displaced by cheaper, more capable alternatives. This is forcing a fundamental rethink of how capital is allocated to AI initiatives. The case for investment can no longer rest on durable, multi-year returns from a single capability.
What remains defensible is harder to copy and harder to articulate. Proprietary data is part of the answer. So is deep integration into operational systems that competitors cannot replicate without parallel transformation. But the most durable advantage may be the organizational capacity to keep reinventing, a discipline that strengthens with use. Without it, every AI investment is a one-time bet against an increasingly fast-moving market.
Questions:
- Are we investing in capabilities that will be commoditized in 12 months, or in advantages that strengthen over time?
- What in our organization is actually hard to replicate - and how are we protecting it?
3. Process Transformation, Not Use Cases, Is Where the Real Value Sits
- "The real returns are coming from deep process transformation - $10M per process, not isolated use cases."
- "If we just layer AI on top of broken or local processes, we are repeating the same mistake - only at a higher cost."
- "A week-long RFP process can become a five-minute agent-orchestrated workflow."
Most enterprises are still operating in the use-case paradigm: identify a workflow, apply AI, measure productivity, repeat. The returns are real but capped. The breakthrough examples emerging share a different pattern. They begin with an end-to-end process, often analyzed first through traditional lean methods, and then redesigned around what AI now makes possible. The result is not a faster version of the old process. It is a fundamentally different process.
This shift requires giving up the comfort of small wins. It means engaging subject-matter experts directly, redesigning roles, and accepting that the work itself will change. It also requires moving past the temptation to optimize what should no longer exist. The organizations seeing transformational returns are not running more pilots. They are choosing fewer, deeper bets and committing to them seriously.
Questions:
- Where are we still chasing use cases when the real prize is a re-engineered process?
- Which of our processes would we redesign from scratch if AI were treated as a design input rather than an overlay?
4. Adoption Is the Binding Constraint - and It Is Behavioral, Not Technical
- "We had to cap the model's recommendations because they were too correct for people to accept."
- "Quality improved not just because of AI, but because people knew they were being monitored at scale."
- "We have teams with the same tools, but 10-20x differences in output depending on how they use them."
The most striking findings of the past year are not about model capability. They are about human behavior. Organizations are deliberately constraining model outputs to keep recommendations adoptable, forgoing a higher-quality answer in exchange for one that people will actually use. They are observing that making AI-driven monitoring visible changes behavior before any algorithmic decision is made. And they are confronting performance gaps between employees that have nothing to do with access or training, and everything to do with how individuals choose to engage with the tools.
This reframes the adoption challenge entirely. It is not a change management problem in the traditional sense of communication plans, training, and rollout cadence. It is closer to a question of organizational psychology and individual agency. AI value at scale depends less on the technology and more on whether the workforce develops what might be called AI agency, the capacity and willingness to use these tools to their potential. That capacity is highly variable, hard to measure, and not yet a managed dimension of talent. The gap between the most effective and least effective users inside the same organization, using the same tools, is becoming one of the most consequential and least understood variables in enterprise performance.
Questions:
- Are we measuring AI adoption by access, by usage, or by the agency individuals show with it?
- What is the cost of the gap between our most effective and least effective users - and is it widening?
5. Architecture Has Become a Strategic Choice - and a Geopolitical One
- "If you don't control the orchestration layer, you don't control your AI strategy."
- "70% of SaaS could be replaced by platforms developed internally."
- "Security constraints are forcing some of us to run models fully offline - even if they are not the best."
The architectural decisions being made today are setting long-term constraints that will be difficult to reverse. The default for most enterprises is now multi-model, driven not by ideology but by the recognition that no single provider will dominate and that switching costs must be kept low. This has elevated the orchestration layer to a position of strategic importance, comparable to what ERP represented in a previous era. Whoever controls that layer controls integration, governance, and cost discipline.
A parallel architectural decision is emerging around how AI is operated, not just what AI is used. Enterprises are increasingly separating innovation environments, where teams can experiment broadly and quickly, from production environments, where governance, security, and cost discipline are tightly managed. This separation is becoming a defining feature of how large organizations absorb AI without losing control of it, and the boundary between the two is itself an architectural choice with long-term consequences.
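What keeping switching costs low at the orchestration layer means in practice can be sketched in a few lines. The sketch below is illustrative only: the adapter pattern, the vendor names, and the per-token costs are assumptions, not a reference to any specific product. The point is that callers depend on one internal interface, so replacing a provider changes a single registration rather than every workflow built on top of it.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical provider adapter: wraps a vendor API behind one shared
# signature so swapping vendors changes a registration, not every caller.
@dataclass
class ModelAdapter:
    name: str
    cost_per_1k_tokens: float   # illustrative unit cost, in dollars
    call: Callable[[str], str]  # prompt -> completion

class Orchestrator:
    """Routes requests by workload class; callers never touch a vendor SDK."""
    def __init__(self) -> None:
        self.routes: dict[str, list[ModelAdapter]] = {}

    def register(self, workload: str, adapter: ModelAdapter) -> None:
        self.routes.setdefault(workload, []).append(adapter)

    def complete(self, workload: str, prompt: str) -> str:
        # Try adapters in registration order; fall through on failure so a
        # provider outage degrades to the next option instead of breaking.
        for adapter in self.routes[workload]:
            try:
                return adapter.call(prompt)
            except Exception:
                continue
        raise RuntimeError(f"all providers failed for workload '{workload}'")

# Illustrative wiring: a cheap model for drafting, a stronger one for review.
orc = Orchestrator()
orc.register("draft", ModelAdapter("vendor-a-small", 0.10, lambda p: f"[draft] {p}"))
orc.register("review", ModelAdapter("vendor-b-large", 1.20, lambda p: f"[review] {p}"))

print(orc.complete("draft", "summarize the RFP"))
```

The governance and cost-discipline leverage described above lives in this layer: routing policy, fallback order, and consumption accounting are enterprise decisions rather than properties of any one vendor's SDK.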
A deeper shift in the application stack is also being anticipated, though it has not yet arrived at scale. The growing conviction is that as AI matures into the primary interface layer, the role of underlying enterprise systems will be rethought from the ground up. The direction of travel points toward headless architectures, with core platforms reduced to systems of record while intelligence and interaction move to AI layers above them. This is not yet the norm, but conviction is hardening. Early estimates from leaders who have looked closely suggest that a significant share of the existing SaaS footprint could be rebuilt internally at materially lower cost, which puts direct pressure on the license-based pricing models underlying much of the SaaS economy. As agents replace users, per-seat economics begin to break, and a shift toward consumption or transaction-based pricing becomes increasingly likely. Whether this represents a durable inversion of the build-versus-buy logic or a transitional phase before new categories emerge is unresolved.
At the same time, sovereignty, regulatory, and security pressures are forcing architectural divergence at the macro level. Highly regulated industries are running open-weight models on premise. Some are deliberately disconnecting AI workloads from the public internet. Geopolitical considerations are shaping decisions about which models are used, where data resides, and which partnerships are viable. The era of frictionless cloud-based AI procurement is closing. What comes next is more fragmented, more constrained, and more political.
Questions:
- Where in our stack are we accumulating lock-in that will be expensive to reverse?
- If AI does become the interface layer, which of our current applications still need to exist - and which become just systems of record?
6. The Economics Are Starting to Surface
- "Tokens are getting cheaper and cheaper, but transformation is becoming more expensive and more complex."
- "We are not seeing the true price of tokens yet - and when it shows up, it will change the economics."
- "The real risk is not vendor lock-in - it's the cost curve once token pricing becomes real."
Two cost dynamics are colliding. The unit cost of compute and inference continues to fall. But the total cost of transformation, spanning integration, governance, change management, infrastructure refactoring, and ongoing model versioning, is rising sharply. "AI is cheap" is becoming a misleading frame. Inexpensive models do not produce inexpensive transformation.
A third dynamic sits underneath both of these: scale itself changes the economics. A modestly priced tool becomes a material enterprise decision once it is multiplied across tens of thousands of employees, and license-based pricing models that work cleanly at a thousand seats begin to strain at a hundred thousand. This is forcing a shift in how AI is procured and deployed, away from broad horizontal access and toward more deliberate decisions about which populations get which capabilities, under what governance, and at what cost. The strategy that works for a startup or a mid-market organization does not survive contact with enterprise scale.
Behind all of this sits a further dynamic that has not yet fully arrived. Current token pricing is widely understood to be subsidized. As major AI vendors face market pressure to demonstrate profitability, prices are expected to recalibrate. When that happens, business cases built on current economics will need to be rerun. CFO scrutiny is already increasing, and the era of writing blank checks for AI experimentation is closing. The organizations that build cost discipline now, around model selection, workload routing, and consumption monitoring, will be the ones still standing when pricing normalizes.
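The sensitivity to pricing normalization can be pressure-tested with a back-of-the-envelope model. Every figure below is an illustrative assumption, not vendor pricing or benchmark data; the shape of the result, not the numbers, is the point.

```python
# Back-of-the-envelope sensitivity check for an AI business case.
# All figures are illustrative assumptions, not vendor pricing.

def annual_cost(users: int, tokens_per_user_per_day: int,
                price_per_million_tokens: float,
                fixed_transformation_cost: float) -> float:
    """Total yearly cost: token consumption at scale plus the fixed
    transformation spend (integration, governance, change management)."""
    tokens_per_year = users * tokens_per_user_per_day * 250  # ~working days
    token_cost = tokens_per_year / 1_000_000 * price_per_million_tokens
    return token_cost + fixed_transformation_cost

ASSUMED_PRICE = 2.00        # $ per million tokens today (assumption)
USERS = 50_000              # enterprise-scale deployment (assumption)
DAILY_TOKENS = 40_000       # per user, per working day (assumption)
TRANSFORMATION = 8_000_000  # $ per year of non-token costs (assumption)

today = annual_cost(USERS, DAILY_TOKENS, ASSUMED_PRICE, TRANSFORMATION)
at_3x = annual_cost(USERS, DAILY_TOKENS, ASSUMED_PRICE * 3, TRANSFORMATION)
print(f"today: ${today:,.0f}  at 3x tokens: ${at_3x:,.0f}")
```

Under these assumed numbers the fixed transformation spend dominates, and a 3x token repricing moves the total by roughly a fifth. Running the same check with a business case's real inputs is exactly the exercise CFO scrutiny will demand once pricing normalizes.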
Questions:
- What would our AI investment case look like at 3x current token costs?
- Are we measuring the true total cost of an AI initiative, or only the parts visible to IT?
7. Cybersecurity Is About to Be Outpaced by Its Own Tools
- "The real risk is not the vulnerability - it's the backlog of patches and the inability to deploy them."
- "Resilience, not prevention, is becoming the primary design principle."
- "In classified and security-sensitive environments, the threat model is not individual hackers - it is nation states."
AI is changing both sides of the cybersecurity equation, and not symmetrically. New AI-driven vulnerability discovery capabilities are surfacing flaws across software ecosystems at unprecedented scale, including long-dormant issues that were previously invisible. The discovery problem is largely solved. The response problem is not. Patch cycles measured in months, legacy infrastructure that cannot be modified easily, and operational systems that require validation before any change all create a structural mismatch with the new pace of detection.
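The structural mismatch above is an arithmetic problem before it is a security problem: whenever discovery outpaces remediation, the open backlog grows without bound. The sketch below makes that visible; the weekly rates are illustrative assumptions, not measured figures from any organization.

```python
# Illustrative model of the detection/remediation mismatch: if AI-driven
# discovery surfaces flaws faster than patch cycles can absorb them, the
# open backlog grows linearly and never clears. All rates are assumptions.

def backlog_after(weeks: int, found_per_week: int, patched_per_week: int,
                  initial_backlog: int = 0) -> int:
    """Open-vulnerability backlog after a given number of weeks."""
    backlog = initial_backlog
    for _ in range(weeks):
        backlog = max(0, backlog + found_per_week - patched_per_week)
    return backlog

# Assumed rates: discovery at 120 findings/week, patch capacity 40/week.
print(backlog_after(52, found_per_week=120, patched_per_week=40))
```

At these assumed rates the backlog grows by 80 findings every week. The shift toward resilience and assume-breach design follows directly from that arithmetic: when the backlog cannot be driven to zero, the design question becomes how to survive the items still open.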
The strategic response is shifting from prevention to resilience. Network segmentation, rapid recovery, business continuity, and assume-breach design principles are becoming central. In sectors exposed to nation-state actors, the considerations extend further: model selection, code generation policies, and even executive travel patterns are being reviewed through a security lens. The organizations most exposed are not those with the worst security postures. They are those with legacy environments they cannot patch fast enough, and they are now visible to attackers in ways they were not before.
Questions:
- What is our true patch velocity - and how far is it from the velocity of detection?
- Are we designing primarily for prevention, or for recovery?
8. The Workforce Question Is Not Job Loss - It Is Joint Workforce Design
- "The hard question is not whether agents work, but whether the whole workforce is ready to use them."
- "AI is making people more productive - but also more stressed due to constant context switching."
- "The real challenge is the human toll - anxiety, change, and the ability to adapt."
The dominant public narrative about AI and work is still framed around displacement. Inside the enterprise, the more pressing question is different: how to design and manage a workforce where agents and people operate side by side, where individual roles are increasingly fluid, and where the management discipline required to run such a workforce has not yet been developed. Several leaders are now evaluating their managers on how effectively they manage agents in addition to people, an early signal that performance management, hiring, training, and incentive design are about to be rebuilt around a joint people-agent workforce. The implications cut across functions and have barely been addressed.
In parallel, the human cost of the transition is becoming clearer. Productivity gains are not always translating into reduced workload. More often they are translating into greater concurrency, faster context switching, and higher cognitive load. The people closest to the work are reporting that they are more productive and more stressed at the same time. The leadership challenge has moved beyond capability building. It is now about managing morale, identity, and resilience through a period of structural change with no defined end. This is the point at which the CIO and CHRO agendas are converging, and the organizations that recognize it first will set the model for the rest.
Questions:
- How are we managing a workforce in which agents are co-workers, not tools?
- Are we measuring the human cost of our AI gains - or only the productivity gains?
- Which of our governance structures still protect the enterprise - and which now slow transformation?
What This Adds Up To
The eight observations above describe an environment that has moved past the proof-of-concept phase but has not yet found stable ground. Capability is no longer the bottleneck. Neither is awareness. What is in short supply is the operational, architectural, and human discipline required to make AI work at enterprise scale, in an environment where every assumption is being rewritten faster than it can be documented.
The leaders pulling ahead are not those with the most ambitious roadmaps. They are the ones treating ambiguity as the operating reality rather than a temporary inconvenience, building organizational muscle for continuous reinvention. The next phase of enterprise AI will not be won by the most technologically advanced organizations. It will be won by the most adaptive ones.
© 2026 Executive Technology Board. All rights reserved. This document is the proprietary work product of the Executive Technology Board and is intended for the use of board members and authorized recipients. The perspectives reflected here are synthesized from member discussions conducted under the Chatham House Rule and are presented without individual or company attribution. No part of this document may be reproduced, distributed, quoted, or republished, in whole or in part, without the prior written consent of the Executive Technology Board.