You're Dating Your AI Strategy Wrong

May 11, 2026

You have a Chief AI Officer. A roadmap. A pilot running somewhere in the organization.

You've been to the conferences. You've read the reports. You're doing the thing.

And yet nothing has fundamentally changed.

No new revenue. No meaningful cost reduction. No operational capability you didn't have two years ago. Just a longer list of initiatives and a shorter runway of board patience.

Here's what I think is actually happening.

Your organization has developed an avoidant attachment style toward AI.

Not hostility. Not ignorance. Avoidance. You engage just enough to appear committed. You pull back the moment things get serious. You mistake distance for strategic patience and optionality for sophistication. The result looks like an AI strategy. It isn't.

BCG surveyed 1,250 executives in September 2025. Only 5% of companies are generating substantial AI value at scale. Another 60% report minimal gains despite continued investment. Not because of technology. Technology is widely available, well-documented, and increasingly affordable. The gap is behavioral. Most organizations are running from the very thing they claim to be building toward.

Avoidant attachment in relationships follows a recognizable pattern. Attraction at the start. Enthusiasm, even. Then, as closeness becomes real and commitment requires something, a slow retreat. More options. Less presence. A preference for the idea of the relationship over the work of it.

Watch your AI portfolio. Tell me you don't recognize it.

Here's how the pattern shows up. Five ways your organization is dating its AI strategy wrong.

 

1. You're selecting for attraction, not compatibility

You pick AI initiatives based on surface signals.

A flashy vendor demo. A use case that sounds impressive in a board presentation. A technology your competitor mentioned in their annual report. You're optimizing for novelty, not fit.

Real AI strategy starts with an uncomfortable audit. What does your data actually look like, not in the slide deck, in the systems? What is your organization's real tolerance for process change? How many times have you successfully operationalized something new at scale in the last three years?

Most companies skip this entirely. It's uncomfortable. It requires honesty about capability gaps you'd rather not name publicly.

So instead, you fall for the pitch.

Attraction without compatibility is how you end up with a €2M computer vision project in a business that doesn't have clean inventory data. I've seen it more than once. The initiative looked great on paper. It was never going to work in practice. Nobody asked the right questions before signing.

Avoidants keep things surface-level because depth implies commitment. Your vendor selection process does exactly the same thing.

 

2. You're avoiding the hard conversations

Every serious relationship has a moment where you stop performing and start being honest.

Your AI strategy needs that moment. Most organizations never have it.

Take "kill criteria" as an example. Who owns this after go-live? What happens to the team that built it if it fails? How will you measure success in ways that connect to actual business outcomes, not AI outputs? These are the conversations that determine whether an initiative survives contact with reality.

They're also deeply uncomfortable. They require people to put their names next to predictions. To say out loud that this might not work. To define failure before experiencing it.

Avoidants don't do vulnerability. They intellectualize. They run another workshop. They commission a framework that defers the moment of honest reckoning by another quarter.

A strategy of withdrawal. Pulling back precisely when closeness is required.

Kill criteria are the discipline of deciding in advance what would cause you to stop. Most organizations treat that tool as pessimism. It's the opposite. It's the only thing that protects you from sunk-cost thinking two years into an initiative that stopped making sense six months ago.

Skip the hard conversations, and you're not building a strategy. You're building a performance of one. A monologue.

 

3. You're collecting pilots instead of committing

Running your fifth concurrent AI proof-of-concept isn't a strategy. It's avoidant behavior with a budget attached.

The logic sounds reasonable. Test before you scale. Reduce risk. Learn fast. All true when applied to one or two focused experiments with clear success criteria and a real decision point at the end.

That's not what most organizations are doing.

McKinsey surveyed 1,993 executives across 105 countries in late 2025. Only about one-third have begun scaling AI across the enterprise. And "begun" can mean almost anything. The remaining two-thirds are still experimenting. Still piloting. Still deciding.

They're accumulating pilots the way avoidants accumulate almost-relationships. Each one feels like progress. None of them goes anywhere.

Keeping options open is the avoidant's core operating mode. A portfolio of perpetual pilots is how that instinct expresses itself in a strategy document.

Commitment in AI means making structural changes that are hard to undo. How do you govern data? Which processes do you redesign around AI capability? How do you hire and develop people? These changes are expensive, visible, and hard to reverse. That's exactly what makes them credible.

Stay in pilot mode and you haven't lost much. You haven't gained much either.

 

4. You're watching everyone else's relationship

Your competitor announced an AI transformation. A peer mentioned their GenAI deployment at a conference. A consulting firm published a benchmark showing your industry's AI maturity score.

Now you're second-guessing your roadmap.

You're measuring your AI strategy against someone else's Instagram feed. Reacting to the performance of the strategy, not to evidence of results.

Most public AI announcements are marketing. The numbers that get cited (cost savings, efficiency gains, productivity improvements) are almost never audited, rarely replicated, and almost always measured over timeframes and under conditions that don't transfer.

Using external reference as an exit ramp from internal clarity is another avoidant pattern. If you're always adjusting to what others are doing, you never have to fully commit to a direction of your own.

I've met executives who pivoted their entire AI agenda because a competitor made a splash at Davos. Six months later, the competitor had quietly shelved the initiative. The executive was still chasing it.

The right AI strategy looks like yours. Not theirs.

 

5. You don't know what you actually want

The root of everything above.

Bad daters don't fail because they lack options. They fail because they lack clarity. The same is true here. Every mistake I've described (the wrong selection criteria, the avoided conversations, the pilot accumulation, the competitive mimicry) becomes inevitable when you start without knowing what you actually need.

McKinsey's 2025 data makes this gap visible. 64% of organizations say AI is enabling their innovation. Only 39% report any EBIT impact at the enterprise level. That 25-point gap is the distance between sensing that something is working and knowing what you actually want from it. And remember: even that 39% is self-reported.

Not defining your needs is a protection strategy. If you don't say what the business requires, nobody can hold you to it. Ambiguity feels like flexibility. In practice, it's paralysis with better branding.

Before you select a use case, before you talk to a vendor, before you open a discovery workshop, answer one question: what problem, if solved, would materially change the trajectory of this business?

Not "how can we use AI?" That question always generates a list. Lists are not strategies.

The right question forces a choice. Choices require the kind of clarity most organizations haven't done the work to develop. Know what you're building toward before you start building.

 

What secure attachment actually looks like

Secure attachment in relationships isn't the absence of difficulty. It's the capacity to stay present through it. To have the hard conversations. To choose deliberately rather than react to fear.

Secure attachment to the AI strategy looks like this:

  • You know what you want.

  • You've said it out loud in a room where people can disagree with you.

  • You've picked one or two initiatives that fit your capability and your strategic position.

  • You've defined what failure looks like before you've experienced it.

  • You're not adjusting your posture every time a competitor makes a press release.

The 5% who get this right show a consistent pattern. They're not smarter. They don't have better models. The difference isn't intelligence.

It's commitment.

They're building something they're not planning to leave.

It's the operational discipline that separates the organizations generating real AI value from the ones still running their fifth pilot.

Your move.

 


Sources:

BCG "The Widening AI Value Gap: Build for the Future 2025" (September 2025, n=1,250)

McKinsey "The State of AI in 2025: Agents, Innovation, and Transformation" (November 2025, n=1,993)
