
When Software Becomes Cheap, Strategy Becomes Everything

Most companies are applying AI at the wrong level. When software gets cheap, the moat isn't technology — it's strategic clarity.

I gave a talk last week about whether AI is overhyped or underhyped. The short answer: it's both. The longer answer — and the one that actually matters for product, engineering, and design leaders — is that we're asking the wrong question entirely.

The panic is real. A single viral LinkedIn post about "Something Big is Happening" helped wipe billions off SaaS valuations. A macro research shop most people had never heard of published a blog post about a hypothetical AI-driven recession and hundreds of billions disappeared from the stock market. The hype machine is working in reverse: panic, not based on fundamentals, but on vibes.

But here's the thing — both the hype and the panic are focused on the wrong timeframe. They're obsessing over what AI does now, when the real story is about what happens when the cost of intelligence approaches zero.

Let me explain why that changes everything.

Hinton Gave Radiologists Five Years. They Got Abundance.

In 2016, Geoffrey Hinton — the Godfather of Deep Learning, future Nobel Prize winner — told an audience that we should stop training radiologists. Within five years, he said, deep learning would do better than any human at reading medical images.

He wasn't wrong about the technology. Today there are over 1,000 FDA-cleared AI radiology devices. AI mammography screening detects 29% more cancers while cutting radiologist workload by 44%.

So what happened to all those radiologists?

They're doing better than ever. Record salaries — over $525,000 on average, up 44% from when Hinton made his prediction. The Mayo Clinic alone has doubled the number of radiologists on staff. And the US is facing its largest radiologist shortage in history.

AI didn't replace radiologists. It made imaging so much faster and cheaper that we started doing vastly more of it. More scans mean more findings. More findings mean more follow-up, more treatment planning, and more complex interpretation. More demand for radiologists, not less.

This has a name. In 1865, William Stanley Jevons noticed that James Watt's steam engine made coal dramatically more efficient — and coal consumption went up, not down. The Jevons Paradox: When you make something cheaper and more efficient, you don't just do the same amount of it for less money. You do vastly more of it.

And there's a parallel from futurist Roy Amara: we tend to overestimate the short-term impact of new technologies while underestimating their long-term effects.

That's the tension. AI is simultaneously overhyped in the short term and massively underhyped in the long term. The difference is where you're looking.

Follow the Cost Curve

Google made the same bet in 2004. When Gmail launched with 1GB of free storage, competitors were offering 2-4 megabytes. People literally thought it was an April Fools' joke. But Google wasn't betting on the storage costs of the day — they were betting on tomorrow's. Storage was around $12 per gigabyte in 2000. By the time Gmail launched, it was closer to a dollar. Today it's about a penny.

They bet on the cost curve. The cost curve won.

Now ask yourself the same question about AI. What if models become commodities? What if the cost of a token goes to zero? What if the cost of software itself goes to near-zero?

These aren't hypotheticals — this is the trajectory we're on. Sam Altman has said that the cost of a given level of AI capability drops roughly 10x every 12 months. And we already know what happens when costs collapse: the Jevons Paradox. Demand explodes. New markets open. Value gets created in places nobody expected.
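A 10x annual decline compounds faster than intuition suggests. As a back-of-the-envelope sketch (the $1.00 starting cost and the constant decline rate are illustrative assumptions, not figures from this post):

```python
# Illustrative only: compound a hypothetical 10x-per-year cost decline
# for a fixed level of AI capability, starting from an arbitrary $1.00.
def projected_cost(cost_today: float, years: int, drop_per_year: float = 10.0) -> float:
    """Cost of the same capability after `years`, assuming a constant annual decline."""
    return cost_today / (drop_per_year ** years)

for year in range(4):
    print(f"year {year}: ${projected_cost(1.00, year):.4f}")
```

Three years in, the same capability costs a thousandth of what it does today — the kind of collapse that, per Jevons, tends to expand total spending rather than shrink it.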

The demand for solving human problems is essentially infinite. When you reduce the cost of solving one problem, you don't run out of problems — you just unlock the next layer.

That's the underhyped story. Not replacement. Not less. Abundance.

Stop Sprinkling AI on Everything

So if the long-term story is abundance, what should product leaders actually do about it? This is where I see most companies getting it catastrophically wrong.

Too many product teams right now are flailing around at the bottom of the stack — chasing AI features, bolting on chatbots, optimising prompts. They're treating AI as a feature when they should be treating it as a force that reshapes their strategy.

If you know the Decision Stack, you know that every organisation makes decisions at five levels: Vision, Strategy, Objectives, Opportunities, and Principles. Each level answers a question, and each answer should flow coherently from the level above.

Here's what I'm seeing: most companies are applying AI only at the Opportunities layer. New feature here, AI integration there. That's fine — but it's playing at the edges. It's the equivalent of the media businesses scrambling to replace their SEO strategy with "GEO" (Generative Engine Optimisation) — literally trading one platform dependency for another. That's what short-term thinking looks like.

The companies that will win are the ones rethinking strategy. Going further up the stack and asking: given what AI makes possible, does our business model still make sense? Can we solve customer problems in a fundamentally different way?

Intercom Went Up the Stack

Intercom is my favourite case study for this. When they realised what generative AI would do for customer service, they could have done what everyone else did — bolt a chatbot onto their existing product, keep the per-seat pricing model, and hope for the best.

Instead, they reinvented.

They built Fin — an AI agent that now resolves over a million customer tickets a week. But the real move wasn't the technology. It was the business model. They moved from per-seat pricing to $0.99 per resolution — customers pay only when the AI actually solves their problem. Outcome-based pricing, backed by a $1M performance guarantee.

Think about what that means strategically. Their old model: more customer problems equals selling more seats. Misaligned incentives. Their new model: better resolution equals more revenue. The product has to work. Fin went to $100M+ ARR in less than a year.

That's not an AI feature. That's a strategy-level rethink. They went up the Decision Stack and asked: What would our business look like if we built it from scratch today?

Kive Killed Their Own Product

I saw the same thing play out with Kive, a startup we invested in back in 2020 at EQT Ventures. The founder, Olof — a former film director — started Kive as a smarter way for creatives to organise asset libraries from big production shoots. They were using machine learning (remember AI before AI was cool?) to speed up asset management during post-production.

Then ChatGPT landed. Olof spent the weekend playing with it, gathered his whole team, and threw out the product they'd built. He'd realised that generative AI meant the shoot didn't have to happen at all. The whole workflow — planning, shoot, post-production, asset management, delivery — could collapse. Customers could just generate what they needed. Today Kive helps brands like Polestar generate campaign assets from their desk — imagine the cost savings compared to flying a prototype car to South Africa for a two-week location shoot. The product, the market, and the business model all changed. But the vision — helping creatives and brands produce better work — stayed the same.

In both cases, the founders didn't ask "How do we add AI to what we already do?" They asked, "What becomes possible that wasn't before?" That's the difference between playing at the Opportunities layer and rethinking at the Strategy layer.

What Becomes the Moat When Software Gets Cheap?

This brings us to the question that's keeping SaaS founders up at night: if AI can build a "good enough" version of my product in a weekend, what's my moat?

I explored this in depth in my post on Fast Fashion SaaS — the era where software gets built fast, breaks fast, and gets abandoned fast. The parallel with fast fashion is striking: barriers to entry evaporate, competition explodes, and distribution channels get flooded with AI-optimised noise.

But here's what the "SaaS is dead" crowd keeps missing. We're in a tech bubble — and I mean that literally. Out of 8.1 billion people on this planet, 84% have never used AI. Only 0.3% have paid for it. Only 0.04% have tried AI coding. We are a tiny sliver of humanity talking to ourselves about how everything is about to change.

Most of the world doesn't want to build software. They want to do the thing that software enables. A restaurant manager isn't going to build their own point-of-sale system, payment stack, and inventory management — no matter how good AI coding tools get. They want something easy to use, customisable, and out of the way so they can focus on serving customers. The same goes for the accountant, the estate agent, the logistics coordinator, and the vast majority of people who use software every day.

There's also a crucial distinction between probabilistic and deterministic software. AI agents are amazing, but they're not right for every use case, and they're still expensive at scale. A lot of the time, boring deterministic software is the right call.

So SaaS isn't dead. I believe we're about to see an explosion of SaaS — we just don't know who the winners and losers are yet. Internal tooling for tech companies is probably at risk. Core value drivers, business-critical systems, and "consumer" SaaS probably aren't. Competition will intensify everywhere, and every niche is suddenly competitive. But that's abundance, not death.

In fact, I think we might be heading toward a genuine software renaissance. Think about what becomes possible when software is cheap to build: probabilistic/deterministic hybrids that blend the reliability of traditional software with the intelligence of AI. Hyperniche software that serves tiny, specific markets that were never economically viable before. Malleable software that reshapes itself around how each customer actually works. Self-evolving software that improves continuously from usage patterns. Agents, obviously — but also software for agents, which is an entirely new category nobody is really talking about yet.

It's all software. And it all needs people who understand what to build, for whom, and why.

The moat isn't AI capability — that's becoming table stakes. The moat is knowing which problems are worth solving and having the strategic clarity to solve them better than anyone else. As Fujifilm showed when they survived what killed Kodak, the companies that navigate disruption aren't the ones with the best technology. They're the ones with the clearest strategy.

Speed Was Never the Problem

One more thing, because this keeps coming up. A lot of the AI conversation right now is about speed. Ship faster, iterate faster, build faster. AI will make everything faster.

Hot take: speed was never the problem.

Speed without direction is just burning jet fuel on a runway. Spectacular, but wasteful. Velocity — speed with direction — is putting that fuel in an engine and using it to get somewhere.

AI gives you speed. The Decision Stack gives you direction. You need both.

And direction comes from the market — from user research and insights feeding into every level of the stack. Speed plus direction plus discovery means moving up the stack. Not just shipping faster at the bottom, but making better strategic decisions at the top.

If you want empowered teams — and you should — then AI raises the cost of getting direction wrong. When teams can build and ship in hours instead of weeks, a bad strategy doesn't just waste a quarter. It wastes dozens of failed experiments running at full speed in the wrong direction. The premium on strategic clarity goes up, not down.

This is what I mean when I say the bottleneck was never execution — it was always clarity. Most organisations don't fail because their teams can't build. They fail because their teams don't know what to build, or why. AI makes the building faster, which is wonderful — but it also means the gap between a clear strategy and a muddled one shows up faster, compounds faster, and costs more.

And here's something counter-intuitive that one of the early reviewers of my book pointed out: the strategic clarity provided by the Decision Stack isn't just important for the humans as AI makes everything faster — it's crucial for the AI agents themselves. If you think humans can run in the wrong direction quickly with AI, watch how fast it all falls apart when agents do. Agents need context, constraints, and clear objectives to operate effectively. Without strategic clarity at every level of the stack, you're not just dealing with misaligned teams — you're dealing with misaligned teams plus misaligned agents, all compounding each other's mistakes at machine speed. A clear Decision Stack becomes the operating system for your entire organisation, human and AI alike.

Build for Antifragility

Nassim Taleb has this concept of antifragility — systems that don't just survive shocks but actually get stronger from them. Wind extinguishes a candle but energises a fire. The question is whether your organisation is the candle or the fire.

We don't know exactly how AI plays out. Nobody does — even Nobel Prize winners get it wrong. So don't bet everything on one prediction. Especially not mine. Build for optionality. Protect your core, but make small bets on radically different approaches. That's Taleb's barbell strategy, and it maps directly onto the Decision Stack: the top of the stack gives you direction, the bottom gives you room to experiment.

Intercom didn't predict the future of customer support. They built an organisation that could benefit from whichever future arrived. That's antifragile.

The Right Questions

So here's my challenge: stop asking "how do we add AI to what we already do?" and start asking:

Can we solve the customer problem in a completely new way? Not incrementally better. Fundamentally different. The way AI-powered imaging didn't just make radiology faster — it made preventive screening possible at scale.

What wasn't feasible before but now is? Product has always been good at managing feasibility risk. But that calculus has changed. The ideas on your "technically impossible" shelf — revisit them.

Can our business model change completely? Like Intercom moving from seats to resolutions. What does outcome-based look like in your world?

What demand is currently locked up that AI could unlock? The Jevons Paradox question. What would your customers do with your product if it were 10x cheaper, 10x faster, 10x more accessible?

Every major technology shift follows the same pattern: predictions of doom, short-term disruption, long-term abundance. We're somewhere between steps one and two right now. The temptation is to stay focused on the disruption — to panic about what AI takes away.

But the product leaders who will define the next decade are the ones who focus on the abundance. Not "how do we survive AI?" but "what can we build that was impossible before?" Not "how do I add AI to what we already do?" but "how do I go up the Decision Stack and rethink what we do entirely?"

We can't predict the future. There will be change. There will be disruption. It will happen faster than any technology change ever has. I don't want to minimise the potential impact on real jobs and lives. But there's a wide gap between "one founder and an army of agents" and needing 10,000 employees to run a mid-sized SaaS company. The reality for most of us will land somewhere in between — and that's where the opportunity lives.

Hinton gave radiologists five years. They got abundance instead. What will you do with yours?