2028: AI's Reasoning Leap Is Coming. Your Strategy Isn't Ready.
Apr 23, 2026
AI scored 9% on the hardest exam ever built.
Fifteen months later — 46%.
Not a trivia quiz. Not a coding challenge. Questions designed by the world's top minds across more than 100 disciplines. Questions written specifically to stump frontier AI models. Questions that real PhDs disagree on.
Most leaders read that and nod. Then go back to their AI pilot programs. Their vendor evaluations. Their quarterly roadmaps.
2028 will change that.
Not because the technology will force you. Because your competitors who started imagining earlier will have already moved.
What Is Humanity's Last Exam — And Why Should You Care?
In late 2024, a group of AI safety researchers and Scale AI built a benchmark called Humanity's Last Exam.
The name was not marketing. It was the thesis.
They wanted a test that would stay ahead of AI indefinitely. A test so hard, so broad, so expert-level that no model could touch it for years. Questions from pure mathematics to ancient languages. From molecular biology to philosophy of mind. Over 2,500 questions. More than 100 subjects.
At launch, the best models barely scraped past random guessing. Today, internal previews from frontier labs already report 56% to 64%.
Current trajectory? 80-90% accuracy before 2030.
Let that settle.
A benchmark designed to be humanity's last — the test AI could not pass — is on track to be half-solved this year and mostly solved before the end of the decade.
This is not a story about a benchmark. This is a story about the speed of capability growth that your strategy is not prepared for.
The Pace Is the Point.
9% to 46% in 15 months.
That is not linear progress. That is not the pace of software updates or product cycles. That is a capability curve bending faster than almost every forecast predicted.
And the public leaderboard is the slowest number. What labs are running privately is already ahead of what gets published.
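Don't take my word for the trajectory. Here is a minimal back-of-envelope sketch in Python. The 9% and 46% scores are the public figures above; the flat monthly rate is my assumption, and a conservative one if the curve really is bending upward.

```python
# Back-of-envelope math on the Humanity's Last Exam trajectory.
# The 9% and 46% figures come from the article; the flat linear
# rate is an assumption, and a conservative one if the curve is
# in fact bending upward rather than running straight.

start_score = 9.0     # % at launch
later_score = 46.0    # % fifteen months later
months_elapsed = 15

points_per_month = (later_score - start_score) / months_elapsed  # ~2.5

def months_until(target_score: float) -> float:
    """Months from today until target_score, holding the rate flat."""
    return (target_score - later_score) / points_per_month

for target in (60, 80, 90):
    print(f"{target}% in roughly {months_until(target):.0f} more months")

# Output: 60% in ~6 months, 80% in ~14, 90% in ~18.
# Even the flat-line read lands "mostly solved" well before 2030.
```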
Now apply that to your own assumptions.
The Three AI Waves I wrote about last year assumed we had years between automation and augmentation. We don't. We are crossing that threshold now. Wave 2 is not two years out. It's here, behind closed doors, in preview.
So answer honestly.
What were your assumptions about AI capability when you built your 2025 strategy?
Are they still valid?
If your plan depends on AI being a faster autocomplete — you are building for a world that already ended.
What "Reasoning" Actually Changes?
There is a difference between AI that retrieves and AI that reasons.
Retrieval is a search engine with better manners. It finds information, summarizes it, returns it. Useful. Limited. Your analysts already do this. AI just does it faster.
Reasoning is different. Reasoning is what an expert does when the textbook ends. When the problem is new. When two authoritative sources disagree, and someone has to decide.
That is what Humanity's Last Exam measures. Not knowledge. Judgment under ambiguity.
And AI is now scoring closer to the expert than to the novice.
What happens in your business when judgment — the thing you pay your senior people for — stops being scarce?
The expert does not disappear. The scarcity of expertise does.
This is the shift most leaders are missing. They are preparing for AI to replace tasks. The real disruption is AI replacing the moat around expertise itself.
Leaders who built authority on knowing will need to rebuild it on judging. On taste. On accountability. On the kind of decisions no model will ever be allowed to make alone.
Think of freediving. At depth, lung capacity is not what keeps you alive. Calm judgment under pressure does. The ability to read signals your body is sending and decide — in one second — whether to continue or turn back. I wrote about this parallel in Lessons from the Abyss.
The same shift is coming for executives. The technical edge flattens. The decision-making edge becomes everything.
Are you training for that? Or still optimizing for the old moat?
Why Most Strategies Won't Survive 2028
Most AI strategies I see are built on 2023 assumptions: AI as a tool. Humans as decision-makers. AI as a cost-saver.
That is a strategy for a world where AI helps. Not a world where AI reasons.
The problem is not that you added AI. The problem is that you did not subtract what AI made redundant. Meetings that exist to align on information AI could synthesize in seconds. Roles built around the knowledge AI now holds. Approval chains that protect against mistakes AI no longer makes.
Subtractive Strategy. That is where value gets created. Not in stacking tools. In removing the work that should no longer exist.
Three blind spots I see in almost every C-suite:
One. Treating AI capability as linear when it's an S-curve. You cannot plan for exponential change with quarterly assumptions. The sketch after this list shows how quickly that gap compounds.
Two. Measuring AI ROI on savings instead of strategic positioning. If your AI investment is justified by headcount reduction, you are optimizing the old business. Not building the new one. I made the full case in Stop Measuring AI on Savings.
Three. Delegating AI strategy to a CAIO instead of owning it at the top. I wrote a full piece on why you should fire your Chief AI Officer. Short version — AI strategy is a business strategy. It cannot be delegated to a function.
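To make blind spot one concrete: a minimal sketch, with entirely hypothetical numbers, of what a straight-line roadmap predicts when the underlying capability actually follows an S-curve.

```python
# Blind spot one, made concrete: fit a straight line to the first
# few quarters of an S-curve and watch the projection fall apart.
# All numbers here are hypothetical, chosen only for illustration.
import math

def s_curve(quarter: float) -> float:
    """Illustrative logistic capability curve: score (%) by quarter."""
    return 100.0 / (1.0 + math.exp(-0.5 * (quarter - 8.0)))

# A quarterly roadmap implicitly draws a line through recent history.
slope = (s_curve(3) - s_curve(0)) / 3

def linear_plan(quarter: float) -> float:
    """What the straight-line extrapolation predicts."""
    return s_curve(0) + slope * quarter

for q in (0, 4, 8, 12):
    print(f"Q{q:>2}: plan says {linear_plan(q):5.1f}%, curve says {s_curve(q):5.1f}%")

# By Q12 the plan expects ~25%; the curve is at ~88%. The miss is
# largest through the steep middle of the curve, the stretch this
# article argues we are in right now.
```

The straight line looks fine for the first year. Then it is off by more than threefold.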
None of these survives 2028.
What to Do Before 2028?
This is where most C-suites get it wrong.
My framework is simple: Imagine. Design. Run.
Most leadership teams skip Imagine entirely. They jump straight to roadmaps, vendor selections, and technical architecture. They call it an AI strategy.
It's not. It's a procurement plan.
The Imagine step is where strategy lives or dies. And it is a step only the C-suite can take. Not your CTO. Not your CAIO. Not your consulting firm.
Real AI strategy starts with one hard question at the top: What does our business look like if AI reasons at expert level across our core functions?
Not a technology question. A business question. An identity question.
Without that answer, Design has no direction. Run has no north star. You end up with a portfolio of pilots that prove nothing and change less. If you are about to launch your first agent without that clarity, read Before You Build Your First AI Agent before anything else.
So ask yourself honestly.
Has your leadership team spent even one session on this? Not a vendor demo. Not a proof of concept review. A real strategic conversation about what your business is for in a world where expertise is no longer scarce.
Most haven't.
Once Imagine is done, Design becomes obvious. Which parts of your business model depend on human expertise as a moat? Which don't? Where does judgment still create value? Where does it just create cost? That is where your roadmap starts. Not with tools. With answers.
Then Run. One real strategic probe. Not a pilot. Not a proof of concept. A test of your core assumptions about where you create value. Small enough to move fast. Serious enough to learn from. This is how market leaders outpace the competition with agentic AI — not with more tools, but with sharper questions.
The leaders winning in 2028 are not the ones with the best AI tools. They are the ones who asked the right questions in 2025.
And most of their competitors are still in procurement.
Close.
Humanity's Last Exam was not supposed to be solvable this fast.
Neither is the disruption coming for your industry.
The question is not whether AI will pass. It is already passing.
It's whether your leadership team has sat down to imagine what that means.
Not your IT department.
Not your innovation lab.
You.
The people who own the strategy.
Most haven't.
That is not a technology problem.
That is a strategy problem.
Your move.