
Most bad software is not malicious. It is software shaped by the preferences of the people building it rather than the needs of the people using it. When implementation becomes the de facto product strategy, the inmates are running the asylum.
Simple Picture
A violin is difficult, but its difficulty is in the music. The instrument remains coherent. A badly designed microwave is difficult in a different way: it forces you to manage modes, remember sequences, and think like the machine. The first kind of difficulty builds skill. The second kind makes the user feel stupid.
That distinction is Cooper’s core insight: the problem with software is not that it is powerful, but that it often imposes unnecessary cognitive friction on the user.
Cognitive Friction Creates False Elites
High-friction software splits users into two camps: the people humiliated by it and the people proud of mastering it. Cooper calls them survivors and apologists. The apologist mistake is especially dangerous because it turns endurance into a status marker. Once mastery of a bad system becomes a source of identity, criticism sounds like an attack on competence.
This is the Expert Beginner problem in product form. The people most comfortable with a system are often the least qualified to judge whether it should work that way. Their fluency hides the tax being imposed on everyone else. And because feeling stupid is socially costly, many users will blame themselves before they blame the product, which corrupts the feedback channel the same way threatened self-esteem corrupts feedback inside organizations.
Design for Goals, Not Tasks
Cooper’s alternative is goal-directed design. Do not design for an elastic abstraction called “the user.” Design for specific people trying to achieve specific things without violating their personal goals. A user does not just want to complete a task. They also want not to feel stupid, trapped, blamed, or micromanaged by the machine.
This sharpens problem definition. The task a user performs will change as tools change. The goal usually remains stable. If you design around today’s task list, you produce interfaces that fossilize current mechanics. If you design around the goal, you can change the mechanics without breaking the experience.
Personas matter here not as corporate ritual but as a way of collapsing ambiguity. A product aimed at everyone will usually satisfy no one. The same logic shows up in constraint thinking: once the target is vague, every local optimization looks defensible, and the system fills with features that nobody can honestly justify.
Feature Lists Are Not Product Strategy
One of the book’s most durable arguments is that software teams mistake a bag of features for a product description. That is how projects drift into deadline theater: the release date becomes real because it is on the calendar, while the standard for what counts as finished remains undefined. Whatever exists by the date ships.
This is the wrong-order problem applied to product work. Teams accelerate, iterate, and implement before anyone has established what should exist in the first place. Once code is poured, design becomes advisory at best. The organization tells designers they can improve the interface after engineering is done, which is equivalent to offering architecture after the concrete has set.
The same failure mode sits underneath Peopleware and Becoming a Technical Leader: management treats quality as something downstream from schedule and implementation, then acts surprised when builders optimize for what the institution actually rewards. If nobody owns a coherent product vision with enough authority to constrain engineering, engineering will define the product by default.
Humane Systems Trust the User
Cooper’s idea of fudgability is deeper than mere convenience. Humans need room for recoverable mistakes, partial states, and out-of-order action. Systems that forbid this are not being rigorous; they are casting the human as an adversary. The result is perverse: when the system refuses small temporary errors, users stop caring about preventing large permanent ones.
That is why “painting the corpse” fails. Better visuals, style guides, and usability sanding cannot rescue an interaction model that is fundamentally hostile to the way humans think. A humane system is legible, forgiving, and respectful. It gives the user enough slack to stay oriented and enough structure to keep moving.
Dimwit / Midwit / Better Take
The dimwit take is “users just need more training.”
The midwit take is “we can iterate toward a good product by shipping features and sandpapering the rough edges with usability testing.”
The better take is that software becomes dehumanizing when the implementers are allowed to define the product from the inside out. You do not fix that with more polish or more documentation. You fix it by starting from a precise model of the user’s goals, giving design real authority before code is written, and treating cognitive friction as a product failure rather than a user failure.
Main Payoff
The title sounds rhetorical, but it names a real organizational pattern: the people closest to implementation accumulate veto power over everyone upstream, until what can be built comfortably becomes indistinguishable from what should be built. Scaled across a VC-backed hardware ecosystem rather than a single team, this exact mechanism produces Accidental Chindogu: when nobody with authority over a coherent user intent is in the room, whatever can be funded and shipped (internet-connected salt shakers, $400 juice machines, subscription firmware) passes for what should exist. The cure is not anti-engineering. It is giving product and interaction design enough clarity and authority that engineering can serve a coherent intent instead of improvising one.
References:
- Alan Cooper, The Inmates Are Running the Asylum