
The efficient market hypothesis is a theodicy — an argument that the world should be perfect because smart optimizers are exploiting every opportunity. If a $20 bill is sitting on the sidewalk in Grand Central Station, a thousand greedy people should have picked it up already. If a trillion-dollar bill in the form of better monetary policy is sitting on the Bank of Japan’s sidewalk, economists should have picked it up already. And yet the bill sat there for a decade.
Simple Picture
Everyone hates Facebook. It records your data, manipulates your timeline, maximizes addiction. So why does everyone keep using it? Because all your friends are on it. You want to be where your friends are. Nobody expects their friends to leave, so nobody leaves. Even if every single user hated it, nobody would have the common knowledge needed to coordinate a mass exodus. Everyone knows. Everyone stays. This is a bad Nash equilibrium — individually rational, collectively catastrophic.
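To make the equilibrium concrete, here is a minimal Python sketch with made-up payoff numbers: a two-player stay-or-leave game in which "everyone stays" and "everyone leaves" are both Nash equilibria, but only the bad one is reachable without coordination.

```python
# A toy model of the Facebook coordination game. Payoff numbers are
# illustrative, not from the source. Strategies: 0 = stay, 1 = leave.

# payoff[(my_move, their_move)] = my utility
payoff = {
    (0, 0): 1,   # both stay: you dislike the platform but keep your friends
    (0, 1): 0,   # you stay, they leave: bad platform AND no friends
    (1, 0): -1,  # you leave alone: better platform, nobody to talk to
    (1, 1): 3,   # both leave: better platform, friends intact
}

def is_nash(profile):
    """A profile is a Nash equilibrium if no player gains by deviating
    unilaterally while the other player's move stays fixed."""
    for player in (0, 1):
        my_move, their_move = profile[player], profile[1 - player]
        current = payoff[(my_move, their_move)]
        for alt in (0, 1):
            if payoff[(alt, their_move)] > current:
                return False
    return True

print(is_nash((0, 0)))  # True:  everyone stays, and nobody can fix it alone
print(is_nash((1, 1)))  # True:  the better equilibrium, if you could reach it
print(is_nash((1, 0)))  # False: the lone defector regrets leaving
```

Both (0, 0) and (1, 1) pass the Nash check; the trap is that moving from the first to the second requires everyone to switch at once.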
The Three Ways Evil Enters the World
1. No Incentive to Correct
The people who notice a mistake have no way to benefit from correcting it. Japanese stocks were priced in ways that suggested most investors knew the Bank of Japan's monetary policy was too tight. The smart money had already figured it out. But central bankers' incentives reward prestige, not outcomes. Tight money (the wrong policy for a deflating economy) is considered virtuous and responsible. Loose money (the right policy) gets you laughed at by other central bankers. So even as evidence accumulated, the bankers looked at their payoff matrix and chose wrong, because wrong was safe.
It should be horrifying that this system weighs a small change in the reputation of a few people more heavily than adding trillions of dollars to the economy, but that is how the system is structured.
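A toy version of that payoff matrix (the numbers are invented for illustration, not from the source): the banker optimizes a reputational payoff that is decoupled from the economic one.

```python
# Illustrative numbers only. "Tight" means keeping money scarce during a
# deflation (the conventional, wrong call); "loose" means easing (the
# unconventional, right call).

# policy -> (reputational payoff to the banker, payoff to the economy)
outcomes = {
    "tight (conventional, wrong)":  (+1, -1_000_000),  # peers nod; economy loses trillions
    "loose (unconventional, right)": (-1, +1_000_000),  # peers scoff; economy gains trillions
}

# The banker maximizes the first coordinate; society needs the second.
banker_choice = max(outcomes, key=lambda k: outcomes[k][0])
social_choice = max(outcomes, key=lambda k: outcomes[k][1])
print(banker_choice)  # tight (conventional, wrong)
print(social_choice)  # loose (unconventional, right)
```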
The Gervais Principle explains the social dynamics: the Sociopaths who understand the system cannot profit from fixing it (no way to short central bank policy). The Clueless sincerely believe the current approach is correct. The Losers know the system is broken but lack power to change anything. The feedback pipe from knowledge to action is completely severed.
2. Expert Knowledge Cannot Trickle Down
Key decision-makers lack the information they need. The people who have the information cannot credibly convey it. This is the Lemon Market problem: if used car sellers know the quality of their cars but buyers do not, buyers discount all cars. Honest sellers leave the market. The average quality drops. Buyers discount further. The market degrades even though the information to prevent degradation exists — it just cannot be credibly transmitted.
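The unraveling is mechanical enough to simulate. A minimal sketch of Akerlof's lemons dynamic, with made-up numbers: buyers offer the average quality of the cars still on the market, sellers holding better cars exit, and the offer ratchets down each round.

```python
# Adverse selection spiral. Sellers know their car's quality; buyers only
# know the average of whoever is still selling, so that is what they offer.
# Anyone whose car is worth more than the offer refuses to sell at a loss.

qualities = [q / 100 for q in range(1, 101)]  # cars worth 0.01 .. 1.00

for step in range(5):
    if not qualities:
        print(f"step {step}: market is gone")
        break
    offer = sum(qualities) / len(qualities)           # buyers pay the average
    qualities = [q for q in qualities if q <= offer]  # better cars exit
    print(f"step {step}: offer={offer:.3f}, sellers left={len(qualities)}")
```

Each round the average falls, the offer falls with it, and another tier of honest sellers leaves; the market collapses toward the worst lemons even though every seller knows exactly what their car is worth.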
The expert-novice impasse is a special case: the expert’s knowledge is structurally incommunicable to the person who needs it. The priesthood dynamic adds institutional inertia: the experts who get things right are often the ones the system is designed to ignore, because the system’s own criteria for credibility filter out the very insights it needs.
Doctors prescribed standard light boxes for seasonal depression when vastly brighter jury-rigged lamps worked better. The FDA-approved parenteral nutrition for babies uses the wrong lipids while the correct, cheap, widely known formula is blocked by regulatory inertia. Medicine left a 20-quality-adjusted-life-year bill on the sidewalk. The knowledge existed. The pathway from knowledge to action did not. The replication crisis is the same structure applied to the research pipeline itself: underpowered studies pass the publication filter, null results vanish, and the incentive to produce surprising findings overwhelms the incentive to produce true ones.
3. Bad Nash Equilibria
A bad Nash equilibrium is a local optimum that nobody can escape unilaterally. Education is the clearest example:
Imagine a magical tower that only people above a certain capability can enter, and entering costs four years of life. Employers prefer tower-entrants, so everyone wants to enter the tower. Somebody builds a fence around it and charges hundreds of thousands of dollars at the gate. The smartest people go to Tower One (Harvard), so employers pay Tower One graduates more, so the smartest people keep going to Tower One. The system is stable because each individual's best move locks in the collective worst outcome.
This is premium mediocrity at the civilizational scale: the signal (the degree) is more expensive than the skill it supposedly certifies, but unilateral defection (skipping college) is punished more than collective compliance (everyone going). The Hated Equilibrium is the named version: nearly everyone is unhappy, but no one can improve things unilaterally, and coordinating to change simultaneously is too hard.
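Back-of-the-envelope arithmetic (all numbers invented for illustration) shows why defection loses even when the signal is pure waste:

```python
# Lifetime payoff of entering the tower vs. skipping it, given that
# employers currently filter on the credential. Every figure is made up.

degree_cost  = 200_000 + 4 * 50_000  # tuition plus four years of forgone wages
wage_with    = 90_000                # employers pay the credential premium
wage_without = 55_000                # same skills, no signal
career_years = 40

go   = career_years * wage_with - degree_cost
skip = career_years * wage_without
print(go, skip)  # 3200000 2200000 -> skipping costs you a million dollars
# If NOBODY had degrees, employers could not filter on them, everyone
# would earn the skill-based wage, and society would save the tuition.
```

The individual calculation says go; the collective calculation says the whole tower is a deadweight loss. That gap is the equilibrium.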
Inside View vs Outside View
The outside view looks at your situation statistically — what happened to the reference class of similar cases? If you are starting a restaurant, most restaurants fail. The outside view says you will probably fail too.
The inside view focuses on the specifics — your location, your chef, your strategy. Maybe your case is an exception.
The will to think demands the inside view: refuse to accept aggregate answers without understanding the specific mechanism. But the outside view checks against the greed-fear cycle: every founder believes they are the exception, and most of them are wrong. The skill is using the outside view as a check on overconfidence while using the inside view to identify the specific mechanism that makes your case different — if it actually is.
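One way to make the combination mechanical, sketched here with assumed numbers: treat the outside view's base rate as a prior and the inside view's specifics as a likelihood ratio, then apply Bayes' rule in odds form.

```python
# Assumed figures for the restaurant example above; nothing here is from
# the source. The base rate anchors you; inside-view evidence shifts you.

base_rate = 0.20        # outside view: fraction of similar restaurants surviving
likelihood_ratio = 2.0  # inside view: your specifics (chef, location, capital)
                        # are twice as likely among survivors as among failures

prior_odds     = base_rate / (1 - base_rate)    # 0.25
posterior_odds = prior_odds * likelihood_ratio  # 0.50
posterior      = posterior_odds / (1 + posterior_odds)
print(f"{posterior:.2f}")  # 0.33 -- better than the base rate, still likely to fail
```

Even genuinely strong inside-view evidence only moves you from 20% to 33%; believing you are the exception requires a likelihood ratio most founders cannot actually justify.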
Dimwit / Midwit / Better Take
The dimwit take is “the world is run by idiots — if smart people were in charge things would be better.”
The midwit take is “markets are efficient and institutions are rational — if something looks broken, you must be missing something.”
The better take is that inadequate equilibria are the default state of complex systems, not the exception. The world is full of $20 bills on the sidewalk — not because nobody is smart enough to pick them up, but because the incentive structures make picking them up costly, the knowledge needed is stuck behind credibility barriers, or the equilibrium is self-reinforcing in ways that no individual can break. The question is never “why hasn’t someone fixed this?” It is “what specific structural feature prevents the fix from happening?” — and the answer is almost always one of the three ways.
Main Payoff
The deepest implication: knowing the right answer is not the bottleneck. In most inadequate equilibria, the right answer is already known. The Bank of Japan knew. The stock market knew. The doctors knew. The bottleneck is the pathway from knowing to doing — which is blocked by incentives, credibility barriers, or coordination failures. The person who wants to improve the world should not start by figuring out the right answer. They should start by mapping the specific inadequacy that prevents the already-known right answer from being implemented. The fix is rarely more knowledge. It is almost always a structural intervention in the incentive landscape.
References:
- Eliezer Yudkowsky, Inadequate Equilibria: Where and How Civilizations Get Stuck
- Scott Alexander, "Book Review: Inadequate Equilibria," Slate Star Codex