When Intelligence Scales Stupidity
- Lars Nordenlund
- Mar 6
- 8 min read
Updated: Mar 18

We built systems to think faster. We forgot to keep thinking.
I've spent two decades inside strategic transformations. Boardrooms across industries, geographies, and company stages. And a pattern I keep seeing concerns me more than the market shift or competitive threat — it is the defining leadership challenge of the next decade.
We are living through the most significant expansion of intelligence in human history. Perception, prediction, and decision support — continuously available, at scale, to every organization on the planet. The bottleneck that once defined competitive advantage — access to information, speed of analysis, quality of synthesis — has effectively disappeared.
Picture a war room. Twelve senior leaders around a dashboard. The model is green. The forecast is confident. The recommendation is clear. Everyone has reviewed the output. Not one person owns the call.
That is not a hypothetical. I have seen that scene — in boardrooms, trading floors, and strategy sessions across three continents. More intelligence. Fewer decisions that hold.
AI makes first-pass thinking abundant. But abundance diffuses responsibility. "I decided" quietly becomes "the model says." That is where leadership decision-making breaks. As intelligence becomes abundant, judgment and ownership become scarce — and stupidity grows in the gaps.
More data. Less meaning. Not ignorance. Not incompetence. Something more precise and more dangerous: hollow decision-making without real ownership. Commitment stays weak because each new wave of information reopens the call. It reads like insight on a slide and breaks the moment it meets reality.
This is the Intelligence Paradox, and the data support it. In 2018 — even before generative AI existed — McKinsey found that only one in five organizations excel at decision-making, costing a typical Fortune 500 $250 million in wasted labor every year. Then AI arrived. HBR studied 300 executives making a high-stakes forecast — half used ChatGPT, half consulted peers. The ChatGPT group got more confident. And made worse decisions. Not because AI gave wrong answers — but because it gave authoritative ones. The detail and certainty shut down the skepticism that good decisions require. Peers push back. AI doesn't. And that deference is quietly eroding strategic leadership and the coherence needed to execute.
When coherence breaks, the organization loses its ability to lead disruption. Not because it can’t analyze, but because it can’t agree on what matters, decide what to do, and move together. The organization gets lost in its own decision maze — measuring, monitoring, reporting — while the gap to market reality compounds. Continuous reinvention degrades into continuous debate.
In this loop, there is high adoption but low conversion. The gap between intelligence deployed and intelligence that actually changes anything is widening, not closing.
Most organizations respond to this gap by deploying more intelligence. More models. More dashboards. More synthesis. More recommendations. What they rarely ask is whether the problem is the intelligence or what happens to judgment when intelligence becomes cheap.
The Age of Intelligence breeds stupidity at scale
AI should make you faster and smarter. In practice, it often makes you slower — unless you change how decisions get made and owned. The stupidity is structural.
Organizational stupidity is the condition where an enterprise produces more answers and less change. More synthesis, more insight, more recommendations — and fewer decisions that hold, fewer trade-offs that bite, fewer moves that alter the curve.
Scholars call a version of this "functional stupidity": organizations that continue to operate, even thrive in the short term, while suppressing critical reflection. The process becomes the strategy. The theater of intelligence conceals its absence.
Then confusion sets in: information becomes 'truth,' consensus becomes 'strategy,' and motion becomes 'progress.'
When AI inundates the enterprise with predictions and outputs, attention fragments. Accountability fades. Decisions get pushed into process, committees, or models — until no one truly owns the call. And the system, optimized for throughput and plausible answers, keeps generating more of both.
What makes this failure mode specifically modern is that AI doesn't just increase the amount of information. It increases plausible answers. When answers are cheap, abundant, and well-phrased, the enterprise becomes vulnerable to a new kind of drift: rigor without commitment. People start confusing a good explanation with a decision, and a clean narrative with a funded bet.
Three Mechanisms That Make It Worse
Cognitive science gives us precise language for why this happens. Three compounding mechanisms — and I have watched all three play out in real leadership teams.
1. Cognitive Offloading
When the environment offers a shortcut, we take it. That is not a character flaw — it is how human cognition works under pressure. The problem is that generative AI has moved beyond retrieval into reasoning. The model doesn't just fetch information anymore. It interprets. It synthesizes. It recommends.
What I see in practice: leadership teams that once debated trade-offs now debate outputs. The hard work of compressing complexity into a single owned choice gets quietly handed to the system. The team gets better at generating options — and progressively worse at eliminating them.
2. Automation Bias
People systematically overweight machine recommendations. They accept outputs they would have challenged if a human had said the same thing. They miss what the system doesn't flag — because if the model didn't surface it, it must not matter.
Research confirms this consistently across decision-support contexts. But you don't need a study. Watch what happens in the next meeting when the AI recommendation conflicts with someone's gut read. Nine times out of ten, the room defers to the model. Not because the model is right, but because it sounds confident, and nobody wants to own the counter-argument.
3. Reduced Critical Engagement Under Trust
Microsoft researchers surveyed 319 knowledge workers across nearly 1,000 real-world GenAI interactions. The finding was consistent: as confidence in AI rises, critical thinking and independent problem-solving drop. The user shifts from doing the work to supervising and editing the work.
In isolation, that shift is manageable. In a leadership system already prone to consensus governance and deferred trade-offs, it is an accelerant. I have sat in rooms where no one pushed back on a recommendation — not because they agreed, but because the model had already done the thinking and nobody wanted to redo it. The organization gets faster at producing answers. And slower at making calls.
Put those three together, and you get the core failure: intelligence abundance amplifies decision avoidance. When everyone can generate a compelling case in minutes, the enterprise fills the room with options — and still can't say no. That is organizational stupidity: not the absence of knowledge, but the inability to convert knowledge into committed direction.
The Two Paradoxes That Compound
The stupidity effect doesn't land evenly. It concentrates in two places where most organizations are already most vulnerable.
The Strategic Planning Paradox
AI shows you what's changing in real time. Your budget cycle decides what you can do about it — once a year if you are following the usual planning cycle. The result is structural lag: the company sees faster and moves at yesterday's speed. It executes a plan the market has already invalidated — with higher confidence than ever, because the dashboard says so.
The Innovator's Paradox
AI makes experimentation cheap. The organization can run more pilots than ever. And still ship very little that changes the business model. The adoption-to-impact gap described by MIT-affiliated researchers is exactly this: deployment theater without measurable value creation. Lots of activity. Very little change.
Both paradoxes share the same root: when intelligence expands, imagination and accountability often contract — and the enterprise mistakes motion for progress.
Watering the Crops with Gatorade
Mike Judge's film Idiocracy depicts a future society that has literally started irrigating its crops with a sports drink — because it's got electrolytes, and electrolytes are what plants crave. The joke is that no one stops to ask whether it actually works. The ritual is so established, the belief so widespread, that the question itself doesn't get asked.
Many organizations are doing the corporate equivalent.
They feed the system what feels good in the moment: vanity metrics, consensus narratives, executive theater, slideware — rather than the plain water of evidence, accountability, and hard trade-offs.
The incentive structures select for it. When the fastest path to advancement is keeping your head down, people keep their heads down. If you promote political safety over problem-solving, and reward explaining over delivering, you get better stories, not better outcomes. If mistakes are punished, signals get buried, and the organization loses its ability to self-correct.
Individual talent can be high while organizational intelligence declines — because the operating logic suppresses dissent, confuses confidence for correctness, and increasingly outsources judgment to templates and model outputs.
More data. Weaker decisions. Slower execution.
What Genuine Intelligence Actually Requires
In an age where fake cognition is cheap and abundant, real intelligence and human judgment become rare and priceless assets.
Here is what I keep coming back to after twenty years in this. The companies pulling ahead are not the ones with the most models or the biggest data teams. They are the ones where someone still owns the call.
They are the ones who can tell the difference between data and judgment, between synthesis and decision, between mimicry and meaning. The ones who have built Strategic Fitness: the system to sense change early, select decisively, and execute with coherence.
Three disciplines define that difference.
1. Ownership Over Insight
Every recommendation produced by an intelligent system must have a human owner — someone who accepts accountability for what happens next. Not a committee. Not a process. A person. "The model recommended it" is not a decision. It is the starting point for one.
2. Imagination Over Precedent
AI systems are exceptionally good at identifying patterns in historical data. Every AI system I have worked with has been exceptional at detecting the past. What it cannot do is imagine what hasn't happened yet — and bet on it. That is still a human job. The executives who will navigate the next decade well are not those who can extrapolate the past with greater precision. They are those who can imagine what hasn't been seen before, stress-test it through scenarios, and make explicit bets on it.
3. Judgment Over Dashboards
A dashboard shows you what has happened. It cannot tell you what it means, what to trade off, or whether the answer the model generated is actually true. That requires judgment — the capacity to weigh incomplete evidence, make a call, and hold it accountable to outcomes.
Judgment is not opposed to data. It is what data requires in order to become a decision.
The winners won't be the companies with the best dashboards. They'll be the ones who never let a dashboard make the call.
The Intelligence Paradox is not just a warning. It is a filter.
It separates the organizations that will mistake noise for knowledge from those that will translate intelligence into real advantage. It separates leaders who confuse answers with decisions from those who understand that the hardest work — the selection, the trade-off, the commitment — has not been automated and will not be.
Right now, in most organizations, a meeting is happening. A model has produced a clean recommendation. The slides look good. The committee is nodding. And somewhere in the room, no one is asking the only question that matters:
Who in this room actually owns this call? If no one raises their hand — that is the moment stupidity scales.
Is your company fit for the Age of Intelligence?
Scaling intelligence without Strategic Fitness doesn't accelerate advantage. It accelerates the wrong decisions.
Take the 3-minute Strategic Fitness Diagnostic. Six questions. Instant read on where your company stands — and where the gaps are before they become liabilities.
→ Take the Diagnostic — nordenlund.com/#strategic-fitness-diagnostic
I built this diagnostic because I kept having the same conversation in boardrooms: leaders who could see the disruption, but had no framework to act on it. If that resonates — take the diagnostic first. The score usually speaks for itself.
— Lars
About This Series
This article is the first in a series on intelligence, judgment, and leadership strategy in the Age of AI. Future pieces will explore the Innovator's Paradox, the anatomy of a real strategic decision, and how organizations can build the muscle for genuine judgment at scale.
If this resonated, follow for the next piece in the series.
Also published on Medium, Intelligence & Strategy https://medium.com/survival-of-the-strategic-fittest