
2026: The Cambrian Explosion of AI Agents

AI Agents · Leadership · Strategy

I used Claude (Anthropic) to help research and write this piece. The analysis and perspective are mine, but Claude did the heavy lifting on synthesizing industry data and making my draft readable.

Introduction

The cost of building software is collapsing toward zero, and with it, every assumption about how work gets organized. In 2026, AI agents aren’t a forecast. They’re a fact: a $10 billion market, 400+ startups, code written by machines at a pace no human team could match. The Cambrian explosion isn’t a metaphor anymore. It’s the operating environment.

But explosions are indiscriminate. Most organizations are still piloting what they should be deploying, optimizing steps when they should be questioning the staircase. The gap between experimenting with AI and transforming through it is not technical. It’s structural, and it runs on trust. The companies that win this moment won’t be the ones with the most agents. They’ll be the ones that had the discipline to scope tightly, the honesty to redesign workflows from scratch, and the courage to align their people’s interests with the change instead of against it.

1. The Explosion Is Here

What we mean by “agents”

I use the word “agents” loosely and on purpose. I’m not just talking about the textbook definition of autonomous AI systems that perceive, reason, and act. I’m talking about the entire ecosystem of GenAI derivatives that are reshaping how work gets done: agents proper, custom GPTs, advanced prompts, copilots, and now vibe-coded solutions where someone with an idea and no engineering background can ship a working app in an afternoon.

That last one matters more than people realize. When the barrier to building software drops to “describe what you want,” the volume of software in the world doesn’t grow linearly. It explodes.

The numbers back it up

This isn’t a debate between optimists and skeptics anymore. The data all points in one direction. The global AI agent market crossed roughly $7.8 billion in 2025 and is projected to blow past $10.9 billion this year. CB Insights mapped over 400 AI agent startups across 16 categories. Gartner forecasts that 40% of enterprise applications will embed task-specific AI agents by year-end, up from less than 5% in 2025.

On the developer side, the shift is just as measurable. GitHub reported 43 million pull requests merged per month in 2025, a 23% year-over-year jump. About 85% of developers now use AI coding tools regularly. Microsoft and Google each report that AI generates 25 to 30% of their codebases. MIT Technology Review named generative coding one of its 10 Breakthrough Technologies for 2026. The only real question left isn’t whether the explosion is happening. It’s how fast and how far it goes.

Code is becoming a commodity

Software engineer John Codes argues that AI coding agents have driven the cost of building software toward zero, predicting a 10x to 100x increase in the volume of custom software in the world. A Cummulative analysis on Substack draws a parallel to the music industry’s streaming transformation: infinite supply, near-zero marginal cost, a flood of new creators. The winners won’t be the individual app builders. They’ll be the platforms that aggregate and distribute.

Anthropic’s 2026 Agentic Coding Report paints a similar picture. The shift isn’t from bad developers to good developers. It’s from engineers writing code to engineers orchestrating agents that write code. TELUS created over 13,000 custom AI solutions while shipping code 30% faster. Zapier hit 89% AI adoption across their org with 800+ agents deployed internally.

When building software becomes that accessible, the scarce resource is no longer code. It’s distribution, data, and trust.


2. The Risks Are Real

Adoption is slower than it looks

There’s a gap between “exploring” and “operating.” Deloitte’s numbers tell the story: 30% of organizations are exploring agentic options, 38% are piloting, but only 14% have deployment-ready solutions, and just 11% are in active production. That’s a lot of PowerPoint and not a lot of production traffic.

Gartner’s own predictions capture this tension perfectly. They simultaneously forecast 40% enterprise agent adoption by the end of 2026 and 40%+ agentic AI project cancellations by 2027 due to escalating costs, unclear business value, or inadequate risk controls. They call it the Peak of Inflated Expectations. The Trough of Disillusionment is next. And they estimate that only about 130 of the thousands of agentic AI vendors offer genuine agentic capabilities. The rest is “agent washing”: rebranding existing chatbots and RPA tools with a fresh coat of AI paint.

Stack Overflow’s developer surveys reveal another paradox: AI tool usage is growing rapidly while trust in the technology is declining. Developers are pragmatic. They’ll use the tools. That doesn’t mean they believe the marketing.

Bottom-up change isn’t enough

Most organizations are approaching AI adoption from the bottom. Individual contributors pick up tools, see incremental productivity gains, and leadership counts it as progress. But incremental gains from associates adopting AI are not the same as rethinking workflows. That’s not rearchitecting value chains. That’s not looking at the full picture and asking: what does this process look like if we designed it from scratch today?

Are we optimizing the steps instead of questioning the staircase? The organizations pulling ahead aren’t the ones with the highest tool adoption rates. They’re the ones asking harder questions about how work itself needs to change.

PwC’s 2026 predictions make this point explicitly: winning organizations will adopt top-down, centralized AI strategies rather than crowdsourcing adoption from the edges. The transformation has to be intentional. It has to be designed.

The “someone using AI” fallacy

You’ve seen the line everywhere. “AI won’t take your job, but someone using AI will.” It shows up on LinkedIn, at conferences, in thought leadership pieces. And it gets immediate nods of agreement.

As the Platforms newsletter put it, the problem is that it’s useless. It feels empowering because it gives you just enough conceptual clarity to stop asking harder questions. You conclude that if you “use AI,” you’ll be safe. But it doesn’t tell you which jobs, which AI, or what “using” even means in practice.

The real questions are structural. How does AI change the way work is organized? How does it restructure workflows, not just speed them up? What do jobs look like in the reconfigured system? These aren’t questions you answer by giving everyone a ChatGPT login. They require rethinking how organizations function. And that’s uncomfortable, because the people essential to making the change are the same people whose roles are most at stake. Trust isn’t optional here. It’s the foundation everything else depends on.


3. The Way Forward

Execute with discipline

Whether you read the bullish analysts or the cautious ones, nearly everyone agrees that 2026 is an inflection point. But an inflection point doesn’t mean “everywhere at once.” The projects that survive will be task-specific, well-scoped, and tied to concrete KPIs. Not open-ended experiments with vague “efficiency” goals.

The vendor landscape makes discipline critical. With the market saturated by rebranded legacy tools, evaluation rigor is survival. If your “agent” can’t act autonomously on a well-defined task, it’s a chatbot with better branding. Start narrow, measure ruthlessly, and scale what works.
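To make that evaluation test concrete, here is a minimal Python sketch of the structural difference. It’s an illustration under stated assumptions, not any vendor’s API: the model call is stubbed and the tool names are hypothetical. A chatbot returns a single answer; an agent loops over a scoped task with a whitelist of tools, a step budget, and an explicit stopping condition.

```python
# A minimal sketch of the distinction being tested, not any vendor's API.
# The "model" is stubbed and the tool names (lookup_invoice, send_reminder)
# are hypothetical, chosen only to show the shape of an agent loop.

def call_model(task: str, history: list) -> dict:
    """Stand-in for an LLM call that decides the next action.
    Here it simply replays a fixed two-step plan for illustration."""
    plan = [
        {"action": "lookup_invoice", "args": {"invoice_id": "INV-1"}},
        {"action": "send_reminder", "args": {"invoice_id": "INV-1"}},
        {"action": "finish", "args": {}},
    ]
    return plan[min(len(history), len(plan) - 1)]

def chatbot(question: str) -> str:
    # One prompt in, one answer out: no tools, no state, no check that
    # anything in the world actually changed.
    return f"A plausible-sounding answer to: {question}"

# Whitelisted tools: the agent can act only through these.
TOOLS = {
    "lookup_invoice": lambda invoice_id: {"invoice_id": invoice_id, "status": "unpaid"},
    "send_reminder": lambda invoice_id: {"invoice_id": invoice_id, "reminder_sent": True},
}

def agent(task: str, max_steps: int = 5) -> list:
    """Bounded loop over a well-defined task: decide, act, record, stop."""
    history = []
    for _ in range(max_steps):              # a step budget, not open-ended autonomy
        decision = call_model(task, history)
        if decision["action"] == "finish":  # explicit stopping condition
            break
        tool = TOOLS[decision["action"]]
        result = tool(**decision["args"])
        history.append({"action": decision["action"], "result": result})
    return history

if __name__ == "__main__":
    print(chatbot("Chase unpaid invoice INV-1"))
    print(agent("Chase unpaid invoice INV-1"))
```

The point of the sketch is the shape, not the code: if a product can’t be expressed as some version of that loop, deciding and acting on its own against a defined goal, it sits on the chatbot side of the line no matter what the branding says.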

And don’t treat this as a technology project. Technology delivers roughly 20% of an AI initiative’s value. The other 80% comes from redesigning workflows, upskilling teams, and building governance. IBM and Deloitte both predict the competitive differentiator has shifted from individual models to orchestration: combining models, tools, and workflows into cohesive systems. Governance isn’t overhead. It’s what lets you move faster. Organizations with mature frameworks deploy agents in higher-value scenarios more confidently, creating a virtuous cycle of trust and capability expansion. Think systems, not models.

Build trust, or nothing else matters

Here’s the tension at the center of all of this: the people most essential to making AI transformation work are the same people whose roles are most affected by it. You can’t ask someone to enthusiastically redesign their own job if they believe the end goal is to eliminate it. And you can’t fake your way through this. People can tell when “upskilling” is just a polite word for “we’re phasing you out.”

So how do you build trust in the middle of that kind of change?

Training is the obvious starting point, but it has to be real. Not a lunch-and-learn on prompt engineering. Real investment in helping people develop skills that make them more valuable in the new landscape, not less. Developers shifting from writing code to orchestrating agents. Analysts shifting from pulling data to validating AI outputs and designing the questions. Managers shifting from directing tasks to designing systems. These aren’t lateral moves. They’re genuine skill expansions, and they need to be treated that way: with time, resources, and recognition.

But training alone isn’t enough. People need to see that their interests and the organization’s interests are aligned. That means equity in outcomes. If AI-driven productivity gains translate entirely into headcount reduction, the message is clear regardless of what leadership says. If instead those gains translate into higher-value work, better compensation, and career growth for the people who made it happen, you’ve got alignment. You’ve got people who want the transformation to succeed because they benefit from it.

The organizations that get this right won’t just have better AI. They’ll have better teams, because trust compounds the same way distrust does.

The explosion goes physical

Everything we’ve discussed so far lives in the digital world: software agents, coding copilots, knowledge work automation. But the Cambrian metaphor doesn’t stop at the screen.

ROBO Global, ETF Trends, and UC Berkeley’s Sutardja Center all argue that 2026 is the year AI escapes the data center and enters physical spaces. We’re seeing predictions of a potential U.S. National Robotics Strategy recognizing automation leadership as strategically critical. Drones are moving beyond monitoring into operational tasks. Consumer-facing AI agents are beginning to manage calendars, digital clutter, and repetitive tasks in ways that blur the line between software and physical assistant.

This isn’t just a technology expansion. It’s a category change. When agents gain physical form, they encounter a world that is messier, more variable, and more consequential than any codebase. The governance questions that feel manageable for software agents become urgent when those agents operate machinery, navigate public spaces, or interact with people face to face. UC Berkeley’s analysis raises a provocative question: what happens when autonomous systems possess qualities “just human enough” to complicate purely utilitarian treatment, even without sentience?

We’re not there yet. But the trajectory is clear, and the organizations thinking about this now will be better positioned than the ones caught off guard. The Cambrian explosion started with code. It won’t end there.

Conclusion

The Cambrian explosion didn’t just produce more organisms. It produced entirely new ways of being alive. That’s what’s happening now. Not more software, but a fundamentally different relationship between people and what they can build. The barriers that once separated “having an idea” from “shipping a solution” are dissolving, and on the other side is a world where more people get to create, not fewer.

This is the part that gets lost in the market projections and the hype cycle charts. The real promise of this moment isn’t efficiency. It’s expansion. More people thinking bigger, building faster, solving problems that used to require a department and a budget they’d never get. Yes, the risks are real. Yes, discipline matters. But underneath all the noise, something genuinely beautiful is taking shape: a future where the limiting factor on what you can accomplish is no longer access to technical skill. It’s the quality of your ideas and the depth of your conviction. That future is worth building well.
