
Whether You're Organising Humans or AI Agents, Strategic Clarity is Essential

Strategic clarity isn't just a human team problem anymore. At the speed of AI, we need it for humans and agents alike.

In my last post, I argued that as the cost of software approaches zero, strategy becomes the only durable advantage. But here's what I didn't fully explore: it's not just your teams that need strategic clarity. It's your AI agents too.

We're living through a moment where every major tech company (Anthropic, OpenAI, Google, Microsoft) is racing to build AI agents that can act autonomously: research, write, decide, execute. Not chatbots. Not autocomplete. Agents that decompose complex goals into subtasks, delegate to specialised sub-agents, use tools, and deliver results. The architecture is real. The capability is real. And the failure rate is extraordinary.

Harvard Business Review recently reported that more than 40% of agentic AI projects will be cancelled by 2027. Forrester's research is blunter: the real AI bottleneck isn't computing power; it's organisational reinvention. And study after study points to the same root cause: not a technology problem, but a clarity problem.

Sound familiar?

The alignment problem isn't new. It's just louder now.

For years, I've been making the case that most organisations don't have an execution problem; they have an alignment problem. Smart people, working hard, pulling in different directions because they lack sufficient shared context. Every team making strategic choices themselves, repeatedly. Every priority debate landing on individual contributors. Every tradeoff becoming a negotiation.

This is exactly what's happening with AI agents, just faster and at greater scale.

When you deploy an AI agent without clear strategic context (without defining what success looks like, what tradeoffs to make, what boundaries to respect), you don't get a productivity breakthrough. You get an automated mess. The agent executes perfectly. It just executes the wrong things. Or the right things in the wrong way. Or the right things for the wrong reasons.

It turns out that the same mental model that helps human teams make better decisions also works remarkably well for AI agents. Because the problem isn't intelligence, human or artificial. The problem is context.

What AI agents actually need (and it's not more compute)

Here's what's striking about the latest research on multi-agent systems: the patterns that make agents effective are almost identical to the patterns that make human teams effective.

They need purpose. An AI agent without a clear objective is just an expensive random walk. The most successful agent deployments start not with "what can AI do?" but with "what decision are we trying to improve?" That is exactly the question organisations should be asking about their human teams.

They need strategy. Anthropic's research on multi-agent systems shows that orchestrator agents, the ones coordinating other agents, need explicit strategic context: what matters, what doesn't, how to allocate effort, and when to go deep versus wide. Strip that away, and agents flail, just like teams without a coherent strategy.

They need principles. In The Decision Stack, I argue that principles are the most overlooked layer of strategic alignment. They crystallise the inherent tradeoffs in your strategy and push decision-making to the edges. For human teams, that looks like "jobseekers even over recruiters" (as we said at Monster, the job board) or "self-service even over white-glove support." For AI agents, it looks like guardrails, decision boundaries, and escalation rules. Different format. Same function. Both answer the question: when you face a tradeoff, which way do you lean?

They need constraints. This might be the most counterintuitive parallel. We tend to think of constraints as limitations. But for both humans and AI agents, well-defined constraints are what enable autonomy. A team that knows exactly what they will not do is a team that moves fast on everything they do. An agent with clear boundaries doesn't need to escalate every decision or wait for permission.
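
To make those four ingredients concrete, here is a minimal sketch of what handing an agent that context might look like. The structure, field names, and example values (AgentBrief, to_system_prompt, the sample purpose and constraints) are hypothetical, not a schema from any vendor; the point is simply that purpose, strategy, principles, and constraints become explicit inputs rather than assumptions the agent has to guess at.

```python
from dataclasses import dataclass, field

@dataclass
class AgentBrief:
    """Hypothetical container for the strategic context an agent is given before it acts."""
    purpose: str                  # the decision we are trying to improve
    strategy: str                 # where to focus effort, and what to ignore
    principles: list[str] = field(default_factory=list)     # "X even over Y" tradeoff leanings
    constraints: list[str] = field(default_factory=list)    # hard boundaries the agent never crosses
    escalate_when: list[str] = field(default_factory=list)  # situations handed back to a human

    def to_system_prompt(self) -> str:
        """Render the brief as plain text an agent can be primed with."""
        sections = [
            f"Purpose: {self.purpose}",
            f"Strategy: {self.strategy}",
            "Principles (when you face a tradeoff, lean this way):",
            *(f"- {p}" for p in self.principles),
            "Constraints (never do these, whatever the goal):",
            *(f"- {c}" for c in self.constraints),
            "Escalate to a human when:",
            *(f"- {e}" for e in self.escalate_when),
        ]
        return "\n".join(sections)

brief = AgentBrief(
    purpose="Reduce time-to-first-response on jobseeker support tickets",
    strategy="Automate the high-volume, low-ambiguity tickets first; leave edge cases to humans",
    principles=["jobseekers even over recruiters", "self-service even over white-glove support"],
    constraints=["Never change account or billing data", "Never promise refunds"],
    escalate_when=["Legal or safety concerns", "Low confidence in the answer"],
)

print(brief.to_system_prompt())
```

The exact format matters far less than the discipline: if you can't fill in those fields for an agent, you couldn't have filled them in for a team either.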

Empowerment without direction isn't empowerment; it's abandonment. That's true whether you're managing a product team or a fleet of AI agents (is it a fleet? a federation? or maybe a babble of agents?).

The Decision Stack as operating system

When Grisha Pavlotsky, Chief Transformation Officer at Miro, read an early version of The Decision Stack, he saw this connection immediately: "The Decision Stack is a definitive operating system for human alignment, but it is truly the 'missing manual' for the AI-native future. As we introduce more and more agentic thinking and execution into how we run our companies, the clarity that the stack provides, from vision and strategy to principles, will be the difference between a high-performing organisation and an automated mess."

Here's why:

The Decision Stack isn't a framework you bolt on. It's a mental model for how decisions connect: from vision to strategy to objectives to opportunities to principles. Each layer answers "how?" when read top-down and "why?" when read bottom-up. When they connect, decisions become easier, and answers become more obvious. When they don't, you get noise.

For human teams, that noise looks like endless alignment meetings, relitigated debates, and the growing sense that you're busy but not actually moving.

For AI agents, it looks like hallucinated priorities, misallocated effort, and confident execution in the wrong direction.

Same disease. Same cure.
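
As a rough sketch of that top-down/bottom-up reading: the layer names below come from the stack itself, while the example content and the two helper functions are made up purely for illustration.

```python
# The five layers of the Decision Stack, top to bottom.
LAYERS = ["vision", "strategy", "objectives", "opportunities", "principles"]

# Hypothetical content; a real organisation fills these in for itself.
stack = {
    "vision": "A world where every jobseeker finds meaningful work",
    "strategy": "Win the entry-level market before expanding upmarket",
    "objectives": "Grow weekly active jobseekers 20% this quarter",
    "opportunities": "Improve first-session matching quality",
    "principles": "Jobseekers even over recruiters",
}

def read_down(layer: str) -> str:
    """Top-down: the layer below answers 'how?' for this one."""
    i = LAYERS.index(layer)
    return "(bottom of the stack)" if i == len(LAYERS) - 1 else f"How? {stack[LAYERS[i + 1]]}"

def read_up(layer: str) -> str:
    """Bottom-up: the layer above answers 'why?' for this one."""
    i = LAYERS.index(layer)
    return "(top of the stack)" if i == 0 else f"Why? {stack[LAYERS[i - 1]]}"

print(read_down("strategy"))     # How? Grow weekly active jobseekers 20% this quarter
print(read_up("opportunities"))  # Why? Grow weekly active jobseekers 20% this quarter
```

If a team (or an agent) can walk up and down those layers without hitting a gap, the stack is doing its job.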

The real question isn't "what's our AI strategy?"

The organisations getting this right aren't asking "what's our AI strategy?" They're asking "how can AI enable our strategy?" That's not a semantic difference; it's a fundamental reorientation. It puts strategic clarity first and treats AI as an accelerant, not a destination.

The ones getting it wrong are doing what organisations have always done when new technology arrives: bolting it onto broken processes, hoping the technology will compensate for the clarity they lack. As one researcher put it, it's like putting a Ferrari engine on shopping cart wheels.

Dell's CTO John Roese made it even simpler: "AI is a process improvement technology, so if you don't have solid processes, you should not proceed."

If you don't have a clear Decision Stack, if your teams can't trace their daily work back to your company's reason for existing, then adding AI agents will amplify your confusion, not resolve it.

Building for a human + agent future

The emerging organisational models aren't "humans or agents." They're hybrid teams: humans and agents working together, each doing what they do best. Humans setting direction, making judgment calls, and handling ambiguity. Agents handling execution, pattern matching, and scale.

But hybrid teams need shared context even more than purely human ones. A human team can muddle through misalignment with hallway conversations and good intentions. A fleet of AI agents operating on misaligned instructions will scale that misalignment at machine speed.

The organisations that will thrive in this future are the ones building clarity now. Not AI capability. Clarity. Because when you have a coherent Decision Stack, deploying AI agents becomes a question of "where in this stack can agents add the most value?" rather than "what should we do with AI?"

That's a much better question. And it's a much easier one to answer.

Here's my challenge to you

Next time you're evaluating an AI agent deployment (or honestly, any new initiative), run it against your Decision Stack: What decision are we trying to improve, and how does it connect to our vision? Which strategy does it serve? Which principles tell it how to lean when it hits a tradeoff? What constraints and escalation rules let it act without asking permission every time?

If you can't answer those questions clearly for your human teams, you definitely can't answer them for your AI agents. And if you can answer them, for both, you're building the kind of organisation that doesn't just survive the AI transition. You'll lead it.