Faster Into Irrelevance
We do not lack tools, talent, or frameworks. Over the past decade, engineering matured, platforms stabilised, and access to technology became almost frictionless. Yet outcomes refuse to move in proportion.
~90% of startups fail (CB Insights; Startup Genome). ~70% of transformations fail (McKinsey; BCG reports similar ranges). ~35–45% of features never get used (Standish Group; Pendo; Productboard). A large share of scale-ups stall after initial traction (ScaleUp Institute; OECD).
These numbers barely move. If the problem sat in technology or execution capability, progress would already show. It does not. So the question shifts: what layer consistently fails across all these contexts?
Sources indicative; ranges vary by study and definition. See: CB Insights “Top Reasons Startups Fail”; Startup Genome “Global Startup Ecosystem Report”; McKinsey “Why Transformations Fail”; BCG transformation reports; Standish CHAOS Report; Pendo/Productboard analyses; ScaleUp Institute and OECD research.
A Light Friday Observation
If everything improved, why did nothing change?
If you step back, a strange pattern appears.
We built better tools, hired better talent, and invented better processes. Then we added AI on top, just in case the first three were not enough. The system should have accelerated.
Instead, it learned how to stay busy more efficiently, and to call it progress.
Nothing dramatic. No collapse. Just a quiet plateau where everything moves and very little changes. Like a treadmill with a quarterly business review.
The Pattern Hiding in Plain Sight
Failure concentrates in the same places: product–market fit, execution, transformation. None of these originate in the codebase. They emerge in direction, prioritisation, and decision-making: layers that sit above engineering.
Yet most responses still target the lower layers. New tools, new rituals, new dashboards. Occasionally, a bold announcement about AI fixing productivity.
It feels like upgrading the engine while politely ignoring the fact that the map might be wrong. Faster in the wrong direction still counts as progress on some dashboards.
The Real Constraint
If stronger tooling and better talent do not shift outcomes, one layer remains structurally constant: the executive layer.
Not as individuals, but as a system. The kind that produces very convincing PowerPoint decks and, occasionally, accidental results.
Organisations tend to reward executives who maintain confidence, reduce perceived risk, and keep the story coherent. These skills matter. They become problematic only when they dominate everything else.
Because at that point, the system starts optimising for how things look rather than how they work.
This creates a fractal disconnection.
At the top, strategy drifts away from operational reality. One level down, priorities lose contact with constraints. Further down, teams optimise locally without a clear link to customer value.
The pattern repeats at every layer: distance from the work increases, confidence remains high, and signal degrades.
Not through intent, but through structure.
Incentives reinforce it. Senior compensation and evaluation often tie to narrative stability, short-term optics, and risk management more than to long-term system performance.
When the rewards align with looking right rather than being right, the drift sustains itself.
When Narrative Becomes the Product
Over time, a subtle shift occurs. Truth competes with narrative. Trade-offs soften into positioning. Constraints get replaced by ambition.
Roadmaps begin to reflect negotiation more than strategy. Priorities move just enough to avoid tension. Initiatives launch with enthusiasm and retire quietly.
Nothing looks obviously wrong. Everything even sounds reasonable. That is usually when it becomes expensive.
That is precisely the problem.
Why Nothing Really Changes
If outcomes drift, why does the system not correct itself?
Because it measures the wrong things.
Activity looks healthy. Velocity looks strong. Adoption of tools looks promising. The narrative remains coherent.
Hard outcomes arrive later, diluted across time and teams, conveniently after the next reorganisation, where we fix the structure without touching the problem.
This is Goodhart’s law with a modern accent: once a measure becomes a target, it ceases to be a good measure.
- “AI productivity”
- “100% code generation”
- “Feature velocity”
The classic feature factory mindset.
All useful in context, but disconnected from reality when promoted to strategy. They optimise output, not outcomes. What actually matters is a deep, professional focus on solving real customer problems, not on producing more features.
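The proxy-versus-outcome dynamic can be made concrete with a toy model. Everything here is invented for illustration: the "complexity tax" parameter, the effort split, and the quarter counts are assumptions, not measurements. The sketch only shows the shape of the failure mode: the more effort an organisation routes toward the proxy (features shipped), the higher the proxy climbs while the outcome (problems solved) stalls, because each unvalidated feature adds drag on future effort.

```python
def run(quarters: int, proxy_weight: float) -> tuple[float, float]:
    """Toy model of a team splitting effort between a proxy metric
    (features shipped) and the real outcome (customer problems solved).

    proxy_weight: fraction of effort spent chasing the proxy (0..1).
    Each shipped-but-unvalidated feature adds complexity, modelled as
    a small tax on the next quarter's effort. All numbers are made up.
    """
    effort = 100.0
    shipped, solved = 0.0, 0.0
    for _ in range(quarters):
        shipped += effort * proxy_weight
        solved += effort * (1 - proxy_weight)
        effort *= 1 - 0.05 * proxy_weight  # hypothetical complexity tax
    return shipped, solved

feature_factory = run(8, proxy_weight=0.9)   # optimises the proxy
problem_focused = run(8, proxy_weight=0.3)   # optimises the outcome

# The feature factory ships far more, yet solves far less:
# its dashboard looks better while its customers are worse off.
```

The point of the sketch is not the numbers but the divergence: both teams are equally busy, and only one of them would look impressive in a quarterly review.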
Concrete Patterns You Already Recognise
A few examples, not as anecdotes but as recurring patterns:
A company invests heavily in AI to increase engineering productivity. Delivery accelerates, yet customer outcomes barely move because the underlying roadmap lacks clarity and signal. Several public cases hint at this pattern: companies like Duolingo have aggressively pushed AI into content generation and learning flows, increasing output and experimentation velocity, while still needing to prove sustained impact on learning outcomes and retention.
More broadly, multiple large tech companies have reported rapidly increasing AI spend (not always matched by proportional business impact), illustrating how easy it is to scale tooling investment ahead of validated constraints.
A scale-up doubles its teams to go faster. Dependencies multiply, coordination overhead grows, and time-to-value stretches instead of shrinking.
A product organisation ships dozens of features per quarter. Usage data later shows that a large portion never gets adopted, while core user friction remains unresolved.
A transformation programme introduces new rituals, dashboards, and governance layers. Reporting improves, but decision latency increases and teams lose autonomy.
Different contexts. Same structure. Different logos, same meeting notes.
The system optimises for visible progress while the real constraint remains untouched.
The Quiet Cost
When the constraint sits in the executive layer, improvements below it bring diminishing returns.
Engineering delivers faster, often on the wrong problems. Product ships more, sometimes with weak signal. AI increases output without shifting outcomes. In the end, we just deliver irrelevance more efficiently, at scale.
Costs rise. Complexity accumulates. The gap between effort and impact widens.
Most organisations do not fail loudly. They stabilise into a state of permanent motion.
Busy, capable, and slightly underwhelming. High energy, low consequence.
At some point, the system stops trying to solve problems. It just gets very good at explaining why they persist.
What Would Genuinely Help
No dramatic revolution required.
Just a shift in what gets optimised.
Make constraints explicit. Accept trade-offs instead of smoothing them out. Anchor strategy in measurable problems. Move decisions closer to where context lives.
And, occasionally, prefer an uncomfortable truth over a well-crafted narrative. It rarely trends, but it tends to work.
A Gentle Conclusion for a Friday
These failure rates do not come from bad luck or lack of effort. They reflect a system that quietly optimises for coherence and stability over correctness.
The system works.
Just not for the outcome it claims.