Discussion Brief

Innovating at the Pace of AI

Executive Technology Board — CIO Roundtable | Toronto, March 31, 2026

Pre-Work Reflection

This brief is a preparation guide, designed to help members arrive with sharp, specific thinking — and to make the peer exchange as substantive as possible. The strongest discussions happen when members can offer a specific example, not a general position. The quality of the conversation is proportional to the candor in the room.

Come prepared with a concrete response to each of the following:

  • An AI initiative that has generated real, measurable business value — and specifically how that is known.
  • An AI initiative that underperformed or stalled — and the honest root cause.
  • One architectural or organizational decision actively being wrestled with right now.
  • An honest answer to: "What is the most dangerous assumption currently embedded in the AI strategy?"

The Context

The pace of change today is the slowest it will ever be. Competitive advantage increasingly depends on scaling AI-driven innovation and leveraging the broader ecosystem — yet execution remains the hard part: integrating models into real workflows, data, platforms, and controls at enterprise speed.

The hype cycle has matured into something more complicated: real deployments, real disappointments, and open questions about what sustainable value actually looks like at scale. The three closed sessions are designed to surface what peers are genuinely experiencing — not the polished version.

Two outcomes are the focus of the day:

  • Enterprise lessons learned — repeatable patterns that increase AI innovation velocity inside large organizations: what is working, what is not, and which operating-model changes enable safe scale.
  • External signals — outside-in perspectives on what is emerging in real deployments, informing where to partner ahead.

How the Sessions Are Structured

Each closed session is anchored by one or two strategic tensions, framed as point vs. counterpoint. Both sides have serious defenders in this room. The goal is not consensus — it is collective intelligence.

Opening | Pulse Round

"What is the most dangerous assumption currently embedded in our AI strategy?"

One thought per member. No slides. 30–45 seconds.

Session 1 | What's Actually Working

The core question: What AI programs have generated real, measurable outcomes — and what separated them from everything else that was tried?

Discussion Prompts

  • What does the most successful AI deployment in the portfolio look like — and how is value being measured in terms the CFO and board will accept?
  • What made the difference between pilots that scaled and pilots that stalled?
  • Where has AI genuinely surprised — delivering more value than anticipated — and what structural conditions made that possible?
  • What metrics are being used to measure AI value — and does the organization actually believe them?
  • What is the single biggest internal constraint to moving faster: governance, talent, integration complexity, or executive alignment?

Tension 1 | Innovation Model: Central Platform vs. Federated Execution

Point: Centralizing the AI platform and standards creates leverage, reduces risk, and drives reuse across the enterprise. Without it, every team reinvents the wheel and risk accumulates invisibly.

Counterpoint: Federated execution is required for speed, business context, and adoption. Centralization becomes a bottleneck. The platform team becomes the queue.

What's at stake: cycle time, reuse, risk posture, accountability.

Tension 2 | Value Strategy: Concentrated Bets vs. Portfolio Experimentation

Point: Concentrating on a small number of AI programs tied directly to measurable outcomes — P&L, risk reduction, customer impact — is the only path to credibility and sustained funding.

Counterpoint: Portfolio learning requires breadth. The biggest breakthroughs emerge from many small probes, not top-down prioritization. A narrow bet portfolio misses the step-changes.

What's at stake: credibility, funding continuity, opportunity cost, organizational learning rate.

Session 2 | Architecture, Stack & Concentration Risk

The core question: Where is AI stack investment concentrating over the next 12–18 months — and what strategic risks are being designed around?

Discussion Prompts

  • What does the current AI architecture look like — and what would be built differently from scratch?
  • How is the build vs. buy vs. configure decision being made across different layers of the stack?
  • What concentration risks are considered material — and what concrete design choices have been made in response?
  • Is portability a real design requirement or aspirational — and how are abstraction layers and exit paths being thought about?
  • When does under-investment in foundations become a ceiling on scale — and how close is that ceiling?

Tension 3 | Stack Focus: AI Apps & Workflows vs. Orchestration & Foundations

Point: The advantage is shifting up the stack into AI-enabled workflows and product experiences. Business integration is the differentiator — the infrastructure layer is increasingly commoditized.

Counterpoint: Without strong orchestration, data foundations, and controls, workflow progress collapses under reliability and risk at scale. Sustainable velocity requires the foundations to be right first.

What's at stake: delivery speed vs. scalability; sustainable velocity vs. fragile demos.

Tension 4 | Concentration Risk: Standardize on Few Providers vs. Design for Optionality

Point: Standardizing on a small set of strategic providers accelerates delivery and lowers integration complexity. Maintaining optionality across providers adds cost and slows execution.

Counterpoint: Provider concentration creates resilience and leverage risks that compound over time. Optionality and portability must be designed in from the start — they cannot be retrofitted.

What's at stake: bargaining power, continuity, regulatory posture, long-term cost structure.

Session 3 | Ecosystem, Partnerships & What's Next

The core question: Where does ecosystem leverage accelerate the strategy — and where does it create dependencies that cannot be afforded?

Discussion Prompts

  • What partnership models are scaling reliably — and what consistently breaks?
  • Where has an ecosystem relationship created genuine competitive advantage — and where has it created a dependency that is now difficult to unwind?
  • What categories of AI capability are genuinely differentiating to own internally — and where is ecosystem leverage clearly the right answer?
  • How is partnership risk being tracked and governed — and at what level does it become a board conversation?
  • As the afternoon's external guests are considered: what question most needs an outside-in perspective today?

Tension 5 | Build vs. Partner: Internal Capability vs. Ecosystem Advantage

Point: Core AI capabilities should be built internally to protect differentiation and avoid black-box dependencies. Outsourcing critical capability is a long-term strategic risk.

Counterpoint: Time-to-advantage increasingly requires partnering. Ecosystem leverage is part of the competitive game — not a concession. Building everything internally is neither feasible nor wise.

What's at stake: speed, differentiation, dependency risk, talent strategy.

Decision frame: build where differentiation is real; partner where scale and pace matter — the hard question is who decides which is which, and how often that gets revisited.

Session 4 | External Perspectives: What's Coming Next

(In the works)

Executive Technology Board ©