Introduction
Clayton Christensen’s The Innovator’s Dilemma starts with an uncomfortable truth: great companies often fail not because they’re poorly managed, but because they’re well managed. They listen carefully to their best customers, optimize for predictable returns, and allocate resources to initiatives that align with existing performance metrics and margin expectations. Over time, those very “good management” habits create a structural bias toward sustaining improvements (making today’s model better) and away from disruptive bets (new models that look smaller, messier, and less profitable at the start). Christensen’s point is not that leaders are irrational; it’s that the organization’s resource-allocation logic is rational for exploitation, and therefore systematically hostile to exploration.
That tension is not unique to product innovation. James March famously described organizational learning as a balancing act between exploration (search, experimentation, variation) and exploitation (refinement, efficiency, execution). Exploration is uncertain and often “inefficient” in the short run; exploitation pays off quickly but can lock you into yesterday’s assumptions. In other words, if you over-invest in exploitation, you become very good at a world that is already disappearing; if you over-invest in exploration, you may never operationalize the basics. Peter Hinssen talks about the pitfall of managing the “shit of yesterday”. This is why the ambidexterity literature argues that long-term survival requires doing both at once, sometimes via separate structures, sometimes via carefully designed governance that protects exploration from being suffocated by the metrics of exploitation.
Cybersecurity in 2026 is living inside this exact dilemma. The market signals say “spend more”. SecureWorld summarizes KPMG's findings that 99% of leaders plan to increase cybersecurity budgets and that many expect material growth, driven by AI-powered threats. Yet those same sources highlight a confidence gap: leaders worry most about AI-driven social engineering and targeted attacks, and a relatively small share rate their defenses as effective against them. This is Christensen’s pattern in a new costume: boards and executives will fund what they can see and govern, but the threat has shifted in ways that make the “old performance measures” (tool counts, compliance checkmarks, activity metrics) increasingly misleading.
“There are decades where nothing happens. And then there are weeks where decades happen.” – Peter Hinssen
That’s where this article’s triangle (CIO, CFO, CISO) becomes the practical answer to the innovator’s dilemma in cyber. Each role is structurally rewarded for exploitation: the CIO for uptime, delivery, and architectural continuity; the CFO for predictability, controllable spend, and demonstrable ROI; the CISO for reducing incidents and satisfying assurance requirements. Those incentives are not wrong; they keep the enterprise running. But the same incentives can also make it hard to fund and operationalize the “new game” (identity fog from non-human identities, AI-enabled fraud/social engineering, post-quantum preparation), because those initiatives start out uncertain and cross-domain, and rarely show immediate payback in traditional terms.
So the CIO–CFO–CISO relationship is an ambidexterity design problem. If the triad governs cyber only through the lens of exploitation, organizations will keep accumulating complexity (tool sprawl, overlapping controls, fragmented data) and will struggle to convert rising budgets into real resilience. If the triad creates protected space for exploration, while insisting that exploration must still become engineered, validated capability, then strategies like Zero Trust, cyber risk quantification (as ranges and scenarios, not fake precision), and landscape rationalization can actually deliver what boards want: lower exposure, lower operational drag, and clearer decision logic.
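To make “ranges and scenarios, not fake precision” concrete, the sketch below shows the basic mechanics of scenario-based cyber risk quantification: simulate one threat scenario many times and report percentiles instead of a single expected-loss number. It is a minimal illustration in Python; the frequency weights and loss-magnitude parameters are invented assumptions for demonstration, not figures from this article or any benchmark.

```python
# Minimal Monte Carlo sketch of cyber risk quantification for ONE threat scenario.
# Every numeric parameter here is an illustrative assumption, not a benchmark.
import random
import statistics

def simulate_annual_loss(n_trials: int = 10_000) -> list[float]:
    """Simulate many possible years and return the total loss in each."""
    annual_losses = []
    for _ in range(n_trials):
        # Assumed event frequency: 0-3 successful incidents per year, skewed toward 0.
        incidents = random.choices([0, 1, 2, 3], weights=[55, 30, 10, 5])[0]
        total = 0.0
        for _ in range(incidents):
            # Assumed severity: lognormal, median around 270k per incident, heavy upper tail.
            total += random.lognormvariate(mu=12.5, sigma=1.0)
        annual_losses.append(total)
    return annual_losses

losses = sorted(simulate_annual_loss())
print(f"Median annual loss:        {statistics.median(losses):>12,.0f}")
print(f"90th-percentile loss:      {losses[int(0.90 * len(losses))]:>12,.0f}")
print(f"Chance of any loss a year: {sum(l > 0 for l in losses) / len(losses):.0%}")
```

The value for the triad is the spread, not the point estimate: a median, a tail percentile, and a probability of any loss give the board a decision range, where a single annualized number would imply a precision the underlying data cannot support.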