Innovating at the Pace of AI
Executive Technology Board | Toronto, March 31, 2026
The pace of change today is the slowest it will ever be. Competitive advantage increasingly depends on scaling AI-driven innovation and leveraging the broader ecosystem — yet execution remains the hard part: integrating models into real workflows, data, platforms, and controls at enterprise speed.
The hype cycle has matured into something more complicated: real deployments, real disappointments, and open questions about what sustainable value actually looks like at scale. The three closed sessions are designed to surface what peers are genuinely experiencing — not the polished version.
Two outcomes are the focus of the day:
- Enterprise lessons learned — repeatable patterns that increase AI innovation velocity inside large organizations, including what is working, what is not, and what operating model changes enable scale safely.
- External signals — outside-in perspectives on what is emerging in real deployments, informing where to partner next.
The goal is not consensus — it is collective intelligence.
Pre-Work Reflection
This brief is designed to help members arrive with sharp, specific thinking — and to make the peer exchange as substantive as possible. The quality of the conversation is proportional to the candor in the room. Please come prepared with:
- An AI initiative that has generated real, measurable business value — and specifically how that is known.
- An AI initiative that underperformed or stalled — and the honest root cause.
- One architectural or organizational decision actively being wrestled with right now.
- An honest answer to: "What is the most dangerous assumption currently embedded in the AI strategy?"
Session 1 | What's Actually Working
The core question: What AI programs have generated real, measurable outcomes — and what separated them from everything else that was tried?
Discussion Prompts
- What does the most successful AI deployment in the portfolio look like — and how is value being measured in terms the board will accept?
- Where has AI genuinely surprised — delivering more value than anticipated — and what structural conditions made that possible?
- What is the single biggest internal constraint to moving faster: governance, talent, integration complexity, or executive alignment?
Tension 1 | Innovation Model: Central Platform vs. Federated Execution
Point: Centralizing the AI platform and standards creates leverage, reduces risk, and drives reuse across the enterprise. Without it, every team reinvents the wheel and risk accumulates invisibly.
Counterpoint: Federated execution is required for speed, business context, and adoption. Centralization becomes a bottleneck. The platform team becomes the queue.
What's at stake: cycle time, reuse, risk posture, accountability.
Tension 2 | Value Strategy: Concentrated Bets vs. Portfolio Experimentation
Point: Concentrating on a small number of AI programs tied directly to measurable outcomes — P&L, risk reduction, customer impact — is the only path to credibility and sustained funding.
Counterpoint: Portfolio learning requires breadth. The biggest breakthroughs emerge from many small probes, not top-down prioritization. A narrow bet portfolio misses the step-changes.
What's at stake: credibility, funding continuity, opportunity cost, organizational learning rate.
Session 2 | Architecture, Stack & Concentration Risk
The core question: Where is AI stack investment concentrating over the next 12–18 months — and what strategic risks are being designed around?
Discussion Prompts
- What does the current AI architecture look like — and what would be built differently if starting from scratch?
- How is the build vs. buy vs. configure decision being made across different layers of the stack?
- Is portability a real design requirement or aspirational — and how are abstraction layers and exit paths being thought about?
Tension 3 | Stack Focus: AI Apps & Workflows vs. Orchestration & Foundations
Point: The advantage is shifting up the stack into AI-enabled workflows and product experiences. Business integration is the differentiator — the infrastructure layer is increasingly commoditized.
Counterpoint: Without strong orchestration, data foundations, and controls, workflow progress collapses under reliability and risk at scale. Sustainable velocity requires the foundations to be right first.
What's at stake: delivery speed vs. scalability; sustainable velocity vs. fragile demos.
Tension 4 | Concentration Risk: Standardize on Few Providers vs. Design for Optionality
Point: Standardizing on a small set of strategic providers accelerates delivery and lowers integration complexity. Maintaining optionality across providers adds cost and slows execution.
Counterpoint: Provider concentration creates risks to resilience and bargaining leverage that compound over time. Optionality and portability must be designed in from the start — they cannot be retrofitted.
What's at stake: bargaining power, continuity, regulatory posture, long-term cost structure.
Session 3 | Ecosystem, Partnerships & What's Next
The core question: Where does ecosystem leverage accelerate the strategy — and where does it create dependencies the organization cannot afford?
Discussion Prompts
- What partnership models are scaling reliably — and what consistently breaks?
- Where has an ecosystem relationship created genuine competitive advantage — and where has it created a dependency that is now difficult to unwind?
- What categories of AI capability are genuinely differentiating to own internally — and where is ecosystem leverage clearly the right answer?
Tension 5 | Build vs. Partner: Internal Capability vs. Ecosystem Advantage
Point: Core AI capabilities should be built internally to protect differentiation and avoid black-box dependencies. Outsourcing critical capability is a long-term strategic risk.
Counterpoint: Time-to-advantage increasingly requires partnering. Ecosystem leverage is part of the competitive game — not a concession. Building everything internally is neither feasible nor wise.
What's at stake: speed, differentiation, dependency risk, talent strategy.
Session 4 | External Perspectives: What's Coming Next
(In the works)
A curated conversation on frontier signals and emerging approaches — leveraging Toronto’s globally recognized AI research and startup ecosystem. We’ll bring in innovation and research leadership, venture investors, and startup founders to discuss:
- Where the frontier is moving and what will matter next
- What they’re building now — and what they’re seeing in real deployments
- What effective enterprise–ecosystem collaboration looks like in practice: where to partner, what to build, and how to avoid brittle dependencies
© Executive Technology Board