
When a measure becomes a target, it ceases to be a good measure. This is Goodhart’s Law — the observation that tying decisions and incentives to a metric changes the behavior the metric was supposed to passively observe. The metric and the underlying reality it was meant to represent begin to diverge, and the divergence is invisible to anyone managing by the numbers.
Simple Picture
A school measures teacher quality by student test scores. Teachers stop teaching understanding and start teaching the test. Test scores go up. Actual learning goes down. The metric says the school is improving while the school gets worse. The metric is not wrong — it is measuring what it measures. The problem is that what it measures is no longer what it was supposed to represent.
The Reification Trap
Metrics are simplified representations of complex systems. They allow decisions about systems too complex to manage through intuition alone. This is genuinely useful — metrics can improve on gut feeling and add transparency where none existed.
The trap is reification: treating the metric as the real objective rather than an imperfect proxy. Once a metric is reified, people optimize for the metric rather than for the underlying goal. The metric was created as a window into reality; reification turns the window into a mirror that reflects only what people want to see.
This is legibility at its most dangerous. Every metric makes some aspect of the system visible. Making it visible makes it controllable. Making it controllable makes it a target. Making it a target changes what people do. What they do changes what the metric measures. The cycle is self-reinforcing: the more you rely on the metric, the more people optimize for it, the less it represents reality, the more you need to rely on it because you have no other source of information.
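The cycle is easier to see in miniature. A minimal sketch, with invented numbers: agents split a fixed effort budget between real work, which moves both the metric and the underlying value, and gaming, which moves only the metric but pays off three times as fast. The 3x payoff and the function are assumptions for illustration, not data from any study.

```python
# Toy model of a metric becoming a target. All numbers are invented.
def outcomes(gaming_fraction: float, effort: float = 1.0):
    """Return (metric, value) for a given split of one unit of effort."""
    real = effort * (1 - gaming_fraction)   # raises the metric AND the value
    gamed = effort * gaming_fraction        # raises only the metric, 3x cheaper
    metric = 1.0 * real + 3.0 * gamed
    value = 1.0 * real
    return metric, value

# Before targeting: nobody games, and the metric passively tracks value.
print("untargeted:", outcomes(gaming_fraction=0.0))   # (1.0, 1.0)

# After targeting: agents pick whatever split maximizes the metric.
best = max((outcomes(g / 100) for g in range(101)), key=lambda mv: mv[0])
print("targeted:  ", best)                            # (3.0, 0.0)
```

The metric triples while the value it was supposed to represent goes to zero. Nothing about the measurement changed; only its role did.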
The Systems Bible names the structural version: people in systems do not do what the system says they are doing. Goodhart’s Law explains why — the stated purpose (measured by the metric) and the actual behavior (shaped by incentives) are always diverging, and the divergence accelerates when the metric gains power.
Why the Divergence Is Invisible
Three forces hide the damage:
1. The metric improves. If test scores go up, how do you argue the school is getting worse? The metric says you are winning. Arguing against it requires either a different metric (which faces the same problem) or direct observation of reality (which the metric was supposed to replace).
2. The causal structure is lost. Metrics flatten the complex causal web of the system into a single number. Anyone managing by the number cannot see which causal pathways are being warped by the incentive. The Theory of Constraints reveals a specific version: local improvements everywhere can make the system worse, but the metrics for each local unit look great. A toy sketch after this list makes the arithmetic concrete.
3. The metric holders benefit. The people whose performance is measured by the metric have every incentive to optimize for it and no incentive to report its divergence from reality. The Gervais Principle explains the social dynamics: the Clueless sincerely believe the metric reflects reality, the Losers know it does not but lack the power to change anything, and the Sociopaths designed the metric to produce exactly this outcome.
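The Theory of Constraints case from point 2 reduces to arithmetic. A minimal sketch, with invented rates, of a three-station line where the upstream station keeps its local utilization metric perfect:

```python
# Toy three-station line; the rates are invented for illustration.
# Under local metrics, each station starts as much work as it can,
# because each is measured on its own output rather than system flow.
rates = {"cutting": 10, "assembly": 4, "packing": 8}  # max units/hour

throughput = min(rates.values())             # the bottleneck caps the line
wip_growth = rates["cutting"] - throughput   # surplus queues before assembly

print(f"cutting's local metric: {rates['cutting']} units/hour, fully utilized")
print(f"system throughput: {throughput} units/hour, set by assembly")
print(f"inventory growth in front of assembly: {wip_growth} units/hour")
```

Cutting’s dashboard looks better the faster it runs, while lead time and inventory get worse for the whole line. Every local number is honest; the system they describe is degrading.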
Metrics as Substitutes for Trust
Metrics exist because someone does not trust someone else’s judgment. The hospital measures outcomes because the administrators do not trust the doctors. The school measures test scores because the district does not trust the teachers. The corporation measures KPIs because the board does not trust the executives.
The fog of work is the architectural result: Big Projects make individual contribution invisible, so metrics become the only way to evaluate people — and the metrics immediately become targets that diverge from actual value. This is the same dynamic that drives violent transparency: one-sided legibility that makes the observed visible to the observer without reciprocity. The feedback pipe narrows: the metric replaces the conversation that would have surfaced the nuance the metric cannot capture. Over time, the metric is the relationship — and the relationship is hollow.
The deeper you rely on a metric, the more you are admitting that you have substituted measurement for understanding. This is sometimes necessary; you cannot understand every system you manage. But the admission should produce humility, not confidence. The person who manages by numbers and sleeps well has mistaken the map for the territory.
Dimwit / Midwit / Better Take
The dimwit take is “metrics are bad — just trust people.”
The midwit take is “we need better metrics — more sophisticated measurement will solve the problem.”
The better take is that the problem is not in the quality of the metric but in the act of targeting it. Better metrics face the same law. The moment a sophisticated metric becomes a target, people will optimize for it just as eagerly as they optimized for the crude one — they will just do it more cleverly. Metric design is crucial and some misalignment is unavoidable, but the most important defense is never forgetting that the metric is a proxy, not the thing itself. The engineering algorithm applies: before optimizing a metric, ask whether the metric should exist at all. Before improving measurement, ask whether understanding is available through a cheaper channel — like talking to the people doing the work.
Main Payoff
Goodhart’s Law is the meta-principle beneath half the organizational dysfunction in the garden. The Expert Beginner optimizes for the metric that validates their plateau. Sabotage works by following to the letter the procedures that were designed as metrics of diligence. Scrum velocity replaces engineering judgment with ticket throughput. The stay-busy imperative treats utilization as a metric of productivity when it is actually a metric of waste.
Every metric is a story about what matters. Goodhart’s Law says that the story changes the reality it describes, and the change is always in the direction of making the story look true while the reality underneath drifts somewhere the story cannot see. The personal version: bad decisions arise from long-unexamined proxy optimizations. You chose a proxy for what you wanted years ago, never revisited it, and now the proxy is running your life while the thing it was supposed to represent has quietly changed. If you do not address the sources of your problems, you will end up working for someone who did.
References:
- David Manheim, Goodhart’s Law and Why Measurement Is Hard, Ribbonfarm