Building AI Strategy Part 3: Run to Win.
Sep 26, 2025
Everything looks brilliant on a whiteboard.
Until you build it.
In the Imagine phase, you built the courage to explore possibilities. In the Design phase, you cut through the noise by translating vision into portfolios and roadmaps.
Now you must decide — with brutal honesty — how to make these ideas real. This is the third and final act in a series that began with time travel and continued with design clarity. If you haven't read those, catch up first.
You know that AI is no longer a novelty. By the time you read this, 71% of companies have already piloted AI, yet only 30% feel ready to scale (Kore.ai, 2025).
That gap isn't a technology problem. It's an execution problem.
The Run phase exists to close it.
The essence of Run — brutal clarity and control
The Run phase is not an endless cycle of experiments. It is a disciplined progression from readiness to pilots to scale. I divide it into four steps. Each step culminates in a concrete deliverable and a well-defined operating rhythm. There is one list in this article. Use it as your scaffolding.
- Assess & Align — Face your readiness gap.
- Pilot & Measure — Run lean experiments before you bet big.
- Build & Integrate — Establish data, platforms, and governance that enable scale.
- Scale & Embed — Expand what works, embed responsible AI, and transform your workforce.
Let's unpack each step with evidence, examples, and tools.
Step 1 — Assess & Align: Face your readiness gap
You can't run if you're not fit.
This step asks: Are your leaders, data, systems, and culture ready for AI? Many executives assume they are because they already have a chatbot.
Don't assume. Assess.
Begin by evaluating your AI readiness across four dimensions: leadership, data, technology, and culture. Harvard's Corporate Governance forum warns that only 45% of leaders feel confident in their ability to transform. A readiness assessment forces you to confront whether your C‑suite has the drive, adaptability, systems thinking, and social intelligence needed. Use a simple survey or facilitated workshop. Score each dimension on a 1–5 scale. Where you score low, create explicit improvement actions — such as training, hiring, or process redesign.
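If you want to make the scoring concrete, here is a minimal sketch of how a readiness scorecard can be tallied. The dimensions, survey scores, and the 3.0 threshold are illustrative assumptions, not a standard instrument.

```python
# Minimal readiness scorecard sketch: average 1-5 survey scores per dimension
# and flag any dimension below an (assumed) threshold for corrective action.
from statistics import mean

# Illustrative survey responses (1-5) collected per dimension
responses = {
    "leadership": [3, 4, 2, 3],
    "data": [2, 2, 3, 2],
    "technology": [4, 4, 3, 4],
    "culture": [3, 2, 2, 3],
}

THRESHOLD = 3.0  # assumption: below this, the dimension needs explicit action

scorecard = {dim: round(mean(scores), 1) for dim, scores in responses.items()}
gaps = [dim for dim, score in scorecard.items() if score < THRESHOLD]

print("Readiness scorecard:", scorecard)
print("Dimensions needing improvement actions:", gaps)
```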
On the data side, perform a data and infrastructure audit. Identify high-volume interactions, classify data types, and assess the quality and availability of data. Without stable data, you can't deliver anything. Map data sources to your prioritized use cases. Highlight gaps. Plan instrumentation or contracts to fill them.
Don't forget culture and talent. Accenture's study reveals that leaders are investing in AI, but training lags. Address this explicitly. Identify roles needed (AI architect, AI engineer, AI ops, ethical lead). Determine whether to train existing staff or hire. Create an internal learning agenda: micro‑courses, hands‑on labs, job rotations. Align incentives: tie compensation to adoption metrics, not vanity pilots.
Deliverables:
- AI Readiness Report: a scorecard that shows your strengths and gaps across leadership, data, systems, and culture. It should call out critical weaknesses and propose corrective actions.
- Talent & Training Plan: a concise plan for closing skill gaps, including training programmes and hiring priorities.
- Data Blueprint: an overview of your data landscape, highlighting clean datasets ready for pilots and the instrumentation needed for others.
Assessing readiness isn't glamorous. It is necessary. Without it, you will sink money into pilots that die.
Step 2 — Pilot & Measure: Run lean experiments, measure ruthlessly
This is where you move from analysis to action.
Pilots are your proving ground. But not all pilots are created equal.
The best pilot programmes follow a simple yet powerful cycle: Test → Measure → Expand → Amplify. Start small, validate ideas against defined metrics, scale what works, and amplify the value.
Test with focus. Choose two or three important use cases from your prioritized portfolio. Each should have a clear objective, a bounded scope, and a hypothesis tied to a business metric. At Deloitte, our pilots set guardrails — a responsible use policy — and established clear benchmarks for each use case. We measured the time employees spent on tasks before the introduction of AI, allowing us to quantify the improvements.
Follow that model: define baseline metrics (cycle time, error rate, satisfaction), set success thresholds, and document them.
Measure ruthlessly. Evaluate pilots based on the metrics you defined. Metrics decide whether to continue or stop. This is where many organisations fail. They let enthusiasm override evidence. If adoption is low or quality gains are marginal, kill the pilot or pivot quickly. Mercy kills save quarters and morale. Measure adoption, impact, cost, and user sentiment.
Don't confuse usage with value. Usage is a signal, not a KPI.
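To show what "metrics decide" looks like in practice, here is a minimal sketch of a post-pilot decision rule. The metric names, thresholds, and numbers are assumptions for illustration; set your own in each pilot charter.

```python
# Illustrative pilot review: compare measured results to baseline and thresholds.
# Metric names and cut-offs are assumptions; agree yours before the pilot starts.

def review_pilot(baseline: dict, measured: dict, thresholds: dict) -> str:
    """Return 'continue', 'pivot', or 'kill' based on pre-agreed thresholds."""
    cycle_time_gain = (baseline["cycle_time"] - measured["cycle_time"]) / baseline["cycle_time"]
    adoption = measured["weekly_active_users"] / measured["target_users"]

    if adoption < thresholds["min_adoption"]:
        return "kill"          # nobody uses it: enthusiasm can't rescue it
    if cycle_time_gain < thresholds["min_cycle_time_gain"]:
        return "pivot"         # used, but the value isn't there yet
    return "continue"

decision = review_pilot(
    baseline={"cycle_time": 42.0},
    measured={"cycle_time": 31.0, "weekly_active_users": 64, "target_users": 80},
    thresholds={"min_adoption": 0.4, "min_cycle_time_gain": 0.15},
)
print(decision)  # -> "continue"
```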
Expand and Amplify selectively. Once a pilot proves its worth, expand carefully. Look for use cases with significant improvements. Sometimes promising pilots fail to scale because employees aren't comfortable adopting AI. UX is also a metric.
When expanding, ask: Can we deliver the same results at 10× scale? Do we have the data and capacity? If so, integrate new features into core systems, invest in enterprise platforms, and track value (e.g., hours saved or quality uplift). Use net present value and practicality criteria to decide which expansions to prioritise.
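For the NPV side of that decision, a minimal sketch, assuming annual net cash flows per expansion candidate and a discount rate you choose yourselves. The candidate names and figures are invented for illustration.

```python
# Illustrative NPV ranking of expansion candidates.
# Cash flows, discount rate, and candidate names are assumptions for the example.

def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value; cash_flows[0] is the upfront (usually negative) cost."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

candidates = {
    "claims_triage_rollout": [-500_000, 260_000, 310_000, 340_000],
    "contact_centre_copilot": [-350_000, 120_000, 180_000, 210_000],
}

ranked = sorted(candidates.items(), key=lambda kv: npv(0.10, kv[1]), reverse=True)
for name, flows in ranked:
    print(f"{name}: NPV = {npv(0.10, flows):,.0f}")
```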
Deliverables:
- Pilot Roadmap Dashboard: a real‑time tracker of metrics, adoption, and spend, updated weekly.
- Pilot Charters: short documents that state hypotheses, metrics, guardrails, and teams for each pilot.
- Post‑Pilot Review: a short report for each pilot that includes results, decision (continue, pivot, kill), and lessons learned.
Running lean pilots is how you de‑risk big bets. It's also how you keep the board engaged. When they ask: "What moved this quarter?" you show them evidence, not hope.
Step 3 — Build & Integrate: Lay the foundation for scale
Pilots are cheap if you treat them like experiments. But when you decide to scale, your architecture, data, and governance must be solid.
Think of this step as building the runway while the plane is still taxiing.
Data & Platform. You don't need a data palace, but you do need enough infrastructure to feed your pilots and future products. The Design phase recommended an API‑first platform, a secure retrieval layer, and a lightweight evaluation suite. Now you must build or tune those components. Create a central knowledge retrieval layer so your models can access enterprise content safely.
Resist the urge to over‑engineer. Build just enough to support your prioritized use cases, and iterate.
Integration & AIOps. Nothing kills AI faster than integration headaches. Prioritize use cases with simple paths to production. That means selecting cases that can be implemented as APIs against well‑defined interfaces and existing workflows. For more complex cases, you'll need robust AIOps pipelines for versioning, testing, deployment, and monitoring. Use feature stores, model registries, and CI/CD to automate as much as possible. Document data lineage, consent, fairness checks, and audit trails. This is your Responsible AI skeleton.
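As a sketch of what a gate to production can look like, here is a minimal readiness check. The checklist items are assumptions aligned with the deliverables below, not an exhaustive standard.

```python
# Illustrative production-readiness gate for a use case before it leaves pilot.
# Checklist items are assumptions; align them with your own integration checklist.

READINESS_CHECKLIST = [
    "api_endpoint_documented",
    "monitoring_hooks_in_place",
    "rollback_owner_assigned",
    "data_lineage_recorded",
    "fairness_check_passed",
]

def production_ready(status: dict[str, bool]) -> tuple[bool, list[str]]:
    """Return overall readiness and the list of unmet checklist items."""
    missing = [item for item in READINESS_CHECKLIST if not status.get(item, False)]
    return (len(missing) == 0, missing)

ok, missing = production_ready({
    "api_endpoint_documented": True,
    "monitoring_hooks_in_place": True,
    "rollback_owner_assigned": False,
    "data_lineage_recorded": True,
    "fairness_check_passed": True,
})
print(ok, missing)  # -> False ['rollback_owner_assigned']
```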
Governance & Risk. Building systems without governance invites disaster. The Design article emphasised standing up a small AI Control Tower that defines standards, approvals, and shared services. In the Run phase, operationalise that. Define a RACI for the model lifecycle: who selects models, who approves training data, and who can roll back in an hour. Establish a risk register: track data privacy issues, fairness risks, and regulatory triggers. For regulated industries, link your AI governance to compliance frameworks (GDPR, HIPAA). Tie risk management to your scoring models: high‑risk use cases need stricter review and slower rollout.
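To make the register and RACI tangible, here is a minimal sketch of a risk register entry that ties review strictness to risk level. The fields, example risks, and review cadences are assumptions you would adapt to your own governance.

```python
# Illustrative risk register entry: high-risk use cases trigger stricter review.
from dataclasses import dataclass

@dataclass
class RiskEntry:
    use_case: str
    risk: str               # e.g. data privacy, fairness, regulatory trigger
    level: str              # "low" | "medium" | "high"
    mitigation: str
    owner: str              # the accountable role in your RACI for this risk

    def review_cadence_days(self) -> int:
        # Assumption: high-risk items are reviewed monthly, others quarterly.
        return 30 if self.level == "high" else 90

register = [
    RiskEntry("claims triage", "fairness drift across customer segments", "high",
              "quarterly bias audit plus human review of declines", "Chief Risk Officer"),
    RiskEntry("internal search", "stale or leaked documents in retrieval", "medium",
              "access-controlled retrieval layer plus lineage logging", "Data Platform Lead"),
]

for entry in register:
    print(entry.use_case, "->", f"review every {entry.review_cadence_days()} days")
```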
Deliverables:
- Platform & Data Blueprint: architecture diagrams and owner assignments for data sources, retrieval layers, model pipelines, and evaluation suites.
- Integration Checklist: a step‑by‑step list ensuring each use case meets production readiness: API endpoints, documentation, and monitoring hooks.
- Risk & Governance Register: a living document capturing risks, mitigations, compliance checks, and responsible parties.
Step 4 — Scale & Embed: Expand what works, embed responsible AI, and transform your workforce
When pilots and infrastructure are proven ready, you transition from experiments to operations. But scaling is not just replication. It's integration into everyday business.
Scale deliberately. Only 30% of organizations feel ready to scale. The difference? People, process, and prioritization. Choose one or two proven pilots to expand to new lines or geographies. Use A/B testing and feature flags to control rollout. Tie scale decisions to KPIs: cost per transaction, margin improvement, error rate.
Fund expansion in 90‑day tranches, just as you did in the Design phase. If results slip, pause and adjust.
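One way to picture a flag-controlled, tranche-by-tranche rollout is the sketch below. The rollout stages and the error-rate guardrail are assumptions, not a prescription.

```python
# Illustrative staged rollout: widen the feature-flag audience tranche by tranche,
# and pause if the KPI guardrail slips. Stages and guardrail values are assumptions.

rollout_stages = [0.05, 0.20, 0.50, 1.00]   # share of users behind the flag per tranche
GUARDRAIL_ERROR_RATE = 0.02                 # assumed maximum acceptable error rate

def next_stage(current: float, measured_error_rate: float) -> float:
    """Advance to the next rollout stage only if the guardrail KPI holds."""
    if measured_error_rate > GUARDRAIL_ERROR_RATE:
        return current                       # pause and adjust, as the tranche plan demands
    later = [s for s in rollout_stages if s > current]
    return later[0] if later else current

print(next_stage(0.20, measured_error_rate=0.012))  # -> 0.5 (expand)
print(next_stage(0.20, measured_error_rate=0.031))  # -> 0.2 (pause)
```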
Embed Responsible AI from Day One. The Human Agency Scale from the Imagine phase reminds us that we must choose the right level of human–AI collaboration. As you scale, design interactions where humans remain editors and decision makers for high‑stakes tasks. Document explainability requirements and ensure your models pass fairness checks. If an AI system fails, have a clear override process and established accountability.
Transform your workforce. Technology adoption without people adoption is a mirage. Accenture's 2025 survey reveals that 86% of leaders are preparing their workforce for agentic AI, yet 75% admit the pace of change is outpacing training capabilities. If you want adoption, you must invest in training and change management.
Provide role‑based training (e.g., underwriters learn how to interpret AI recommendations; call centre staff practice using AI prompts). Appoint change champions in each department. Align incentives: reward adoption and learning, not just outputs. Communicate transparently about job impacts; people prefer clarity to speculation.
Deliverables:
- Scaled Rollout Plan: a timeline, with milestones and success metrics, for each use case moving to production.
- Responsible AI Checklist: a set of required checks (bias, explainability, human-in-the-loop, data privacy) that every product must complete before launch.
- Workforce Transformation Plan: training curricula, change champions, communication plan, and metrics for adoption and proficiency.
Scaling is where your strategy becomes your operating system. Do it with respect for risk, people, and evidence.
One final thought
The Run phase is neither glamorous nor optional.
Executives who skip it end up in pilot purgatory.
Executives who embrace it turn AI from a curiosity into a competitive advantage.
The steps above — assess, pilot, build, scale — are your moves on the chessboard.
You don't need a revolution. You need relentless clarity. Then deliver.
People are watching, markets are moving, and AI isn't waiting.
Your move.