Competing in the World of AI
Executive Technology Board — Roundtable | Veldhoven, April 29, 2026
AI is colliding with silicon at unprecedented scale, and the pace of change today is the slowest it will ever be. This session brings the Executive Technology Board into the heart of the semiconductor ecosystem — hosted by a company whose technology underpins the world's compute capacity, and whose own transformation is one of the most instructive case studies available to this peer group.
Digital transformation is no longer a technology program. It is how large enterprises compete, allocate capital, and operate to deliver outcomes. The conversation is maturing: the question is no longer whether to invest in AI, but whether the investment thesis is sound, the architecture is resilient, and the sequencing holds up under execution pressure. Technology sovereignty, supply chain concentration, and regulatory posture are not abstract policy questions — they are live strategic constraints shaping what can be built, how fast, and on whose infrastructure.
Two outcomes are the focus of the day:
- Peer intelligence — repeatable patterns from across the group on what is working, what sequencing is holding up, and what assumptions are being revised under execution pressure.
- Inside view — a candid look at how ASML is reimagining its own business and operating system in the age of AI, in one of the world's most demanding engineering environments.
The goal is not consensus — it is collective intelligence.
Pre-Work Reflection
This brief is designed to help members arrive with sharp, specific thinking — and to make the peer exchange as substantive as possible. The quality of the conversation is proportional to the candor in the room. Please come prepared to share:
- An AI investment that has generated real, measurable business value — and specifically how that is known.
- An AI investment that underperformed or stalled — and the honest root cause.
- One strategic or architectural decision actively being wrestled with right now.
- An honest answer to: "What is the most dangerous assumption currently embedded in the AI strategy?"
ASML Inside View
What does it look like to run AI at the physical limits of engineering — and what does it signal about where enterprise AI is heading?
The ASML session is not a customer briefing. It is a peer conversation in the context of a member's enterprise — one that operates at the intersection of precision hardware, global supply chains, safety-critical software, and industrial-scale AI. Three questions worth carrying into the tour:
- Where does digital transformation look materially different when the product is physical, not digital?
- What does AI governance look like when the cost of failure is measured in machines worth hundreds of millions?
- What does operating at the center of global compute infrastructure reveal about concentration risk that most enterprises cannot see from the outside?
Session 1 | Transformation Roadmaps & Capital Allocation
The core question: Is the current AI investment thesis sound — and does the sequencing hold up under execution pressure?
Discussion Prompts
- Where is AI investment concentrating over the next 12–18 months — and what is being explicitly deprioritized, and why?
- What is the sequencing logic: platform foundations first, or use-case value first — and which is actually winning in practice?
- What has changed in the investment thesis over the past 12 months — and what forced that revision?
Tension 1 | Investment Sequencing: Foundations First vs. Value First
Point: Sustainable AI value requires platform foundations, data infrastructure, and controls to be right before scaling use cases. Skipping this creates compounding technical and risk debt that becomes increasingly expensive to unwind.
Counterpoint: Use-case value must be demonstrated early and continuously to sustain funding and organizational belief. Foundations built in isolation from delivery become shelfware.
What's at stake: investment continuity, technical debt accumulation, organizational credibility.
Tension 2 | Ambition Level: Incremental Productivity vs. Business Model Transformation
Point: Capturing near-term productivity gains is the responsible path — it funds further investment, demonstrates ROI, and builds internal capability incrementally without outrunning the organization.
Counterpoint: Productivity optimization leaves the business model unchanged. The enterprises that lead will use AI to transform what they offer and how they compete — not just how efficiently they operate. The window to make that move is narrowing.
What's at stake: competitive positioning, investment ambition, the pace of change the organization is willing to absorb.
Session 2 | Tech Sovereignty & Concentration Risk
The core question: What does a resilient AI strategy look like when the infrastructure it depends on is geopolitically contested — and when regulatory and supply chain constraints are not hypothetical?
Discussion Prompts
- Where does technology concentration risk feel most material — compute, models, platforms, or data infrastructure?
- How is the regulatory environment shaping architecture decisions — and where is it a constraint vs. a forcing function toward better design?
- What design choices are being made today to preserve strategic optionality — and what is the real cost of those choices?
Tension 3 | Concentration Risk: Strategic Standardization vs. Designed Optionality
Point: Standardizing on a small set of strategic providers accelerates delivery, lowers integration complexity, and enables depth of capability. Optionality is a cost, not a benefit, and should be treated as such.
Counterpoint: Provider concentration creates leverage, resilience, and regulatory risks that compound over time. Portability and exit paths must be designed in from the start — they cannot be retrofitted once dependencies have hardened.
What's at stake: bargaining power, continuity, regulatory posture, long-term cost structure.
Tension 4 | Sovereignty: Genuine Strategic Constraint vs. Compliance Exercise
Point: Technology sovereignty is a first-order strategic consideration. Infrastructure dependency on a small number of non-domestic providers creates risks that are not fully visible until they materialize — geopolitically, regulatorily, or operationally.
Counterpoint: Sovereignty concerns, while legitimate, are frequently overstated or used to justify protectionist choices that reduce capability and increase cost. The right response is risk-informed architecture — not reflexive localization.
What's at stake: infrastructure resilience, regulatory exposure, competitive capability, capital allocation.
Session 3 | Beyond Process — AI at the Physical & Product Layer
The core question: What does the shift from AI as an operational tool to AI as a product capability and physical systems enabler actually require?
Discussion Prompts
- Where is AI beginning to reshape the product itself — not just the process behind it?
- What new capabilities — technical, organizational, or governance — does the move from process AI to product AI actually require?
- What does "AI at the physical layer" signal for industries beyond manufacturing — relevant frontier or useful distraction?
Tension 5 | AI Ambition: Process Optimization vs. Product & Physical Transformation
Point: The near-term value of AI is overwhelmingly in process — automating workflows, reducing costs, accelerating decisions. Enterprises that chase product and physical AI before mastering process AI are overreaching their current capability.
Counterpoint: Process AI is necessary but not sufficient. The enterprises that will lead are already moving AI into the product itself and into physical systems — and the capability gap between them and process-only organizations is widening faster than it appears.
What's at stake: competitive differentiation, investment sequencing, organizational readiness for a qualitatively different kind of AI deployment.
© Executive Technology Board