Essay 2

The Four Problems Your Brain Is Always Solving

A short tour of why the brain takes shortcuts — and what it costs you.

There is a Wikipedia article called List of cognitive biases. If you have a few hours, click on it. It contains around two hundred entries, alphabetized, in plain serif type, with the diligent flatness of a phone book. The hindsight bias is two clicks from the IKEA effect. The Dunning-Kruger effect is sandwiched between dread aversion and emotional reasoning. Reading the page is like meeting a stranger and being handed their entire family tree before they say their own name.

There is also a famous infographic, possibly tattooed on the inside of every behavioral-economics consultant’s eyelids, called the Cognitive Bias Codex. The same biases, this time arranged in a four-quadrant wheel, color-coded, dense, beautiful, completely unreadable in print. It hangs on office walls as decoration. I have a copy somewhere.

Both presentations have the same flaw. They present a long list of symptoms without telling you what the body is trying to do.

The brain is doing four things all the time. They are the real backbone of the long list, and once you see them, the biases on the wall stop looking like a parade of design flaws and start looking like what they actually are: predictable failures of four genuinely hard jobs the brain is doing every minute of every day — jobs it cannot opt out of, and would not want to.


The Codex you’ve seen on the wall was put together in 2016 by a writer named Buster Benson and illustrated by John Manoogian III. It catalogues 188 biases — not all 200+, just the well-documented ones — and groups them under four headings that name what the brain is up against.

Here are the four. Short labels first, then what each one actually means.

Too much information. Your senses are running constantly. Cars passing. Conversations on the next table. Phone notifications. The weight of your own clothes. If you actually paid attention to all of it, you would never finish a thought. So the brain filters, aggressively, by default, without asking permission. It keeps what’s new, what’s emotionally loud, what’s visually distinct, what confirms what you already believe. It throws the rest away. This filtering is the price of being able to function, and most of the time it’s a bargain. But it also means that the version of the world you experience has been pre-edited by your brain in ways you cannot see. When the filter rules misfire (when “loud” gets confused with “important,” or when “consistent with what I already think” gets confused with “true”), you get biases like the availability heuristic, confirmation bias, and the negativity bias. Same machinery. Different luck.

Not enough meaning. Even after the filter, what reaches you is incomplete: fragments, glimpses, partial conversations, raw numbers, somebody’s facial expression for half a second. The brain hates fragments. It will smoothly, automatically, irresistibly fill in the gaps with a story. It will assemble cause and effect, intention and personality, pattern and trend, often from a sample that wouldn’t justify any of them. The story-making is so fast and so frictionless that it doesn’t feel like a guess; it feels like seeing. Without it you would never make a decision in time. With it, you mistake your inference for the data, and you get biases like the halo effect, the clustering illusion, the gambler’s fallacy, and the famous one where you imagine someone’s whole personality from a one-sentence description.

Need to act fast. Even when the information is partial and the story is provisional, life doesn’t wait. You have to choose, often within seconds, often with bigger stakes than you’d like. The brain does this with a small toolbox of defaults: lean toward the option you’ve taken before; lean toward the option that doesn’t require justification if it goes wrong; lean toward staying put. These defaults are usually fine. They are sometimes catastrophic. Out of this category come overconfidence, optimism bias, the planning fallacy (why your renovation will take three times as long as you think), the sunk cost fallacy (why you keep watching the bad movie because you already paid for the ticket), the status quo bias, the action bias (the urge to do something when doing nothing would have been better), and its mirror, the omission bias (preferring harm caused by your inaction to equivalent harm caused by your action).

What to remember. And finally, the brain has to decide what to store. Memory is expensive. The brain compresses, prioritizes, summarizes, occasionally invents. It is much better at the gist of events than the details. It is much better at remembering what you cared about than what was true. And then, when you go back to consult the memory later, the brain does not faithfully retrieve a recording; it reconstructs the event from the gist, often with the present’s preferences sprinkled in. Out of this category come hindsight bias (why you “knew it all along” about events you didn’t actually predict), rosy retrospection (why every old job and every past relationship slowly comes to look better than it really was), and the misinformation effect (why eyewitness testimony is much less reliable than courts assume).

Filter, fill, choose, store. Those are the four jobs. Every named bias on the wall, every one of the entries on the phone-book list, is a failure mode of one of those four jobs, or sometimes of the interaction between two of them.


The same biases can also be cut a different way, and the second cut is the one most widely used in organizational training and decision-coaching work. It comes out of the NeuroLeadership Institute, founded by David Rock, who together with his collaborators wanted something simpler than a 188-entry catalogue for the kinds of decisions companies make every week. They built a model called SEEDS, and it organizes biases not by the problem they’re solving (filter, fill, choose, store) but by the cause that’s activating them.

Five families, each with a one-line motto worth remembering:

  • Similarity. “What’s like me is better.” You favor people who look like you, talk like you, went to your school. You evaluate ideas more leniently when they come from someone you identify with. You read the same news sources because they sound like you. The evolutionary logic is clear; the consequences in a modern diverse organization are also clear.
  • Expedience. “If it feels easy, it must be true.” The brain treats fluency (how easily an answer comes to mind) as a proxy for accuracy. So familiar things feel right. Things you’ve heard a few times feel true. The first answer you reach for is more believable than a better answer that took more work to think of.
  • Experience. “My view is the view.” You assume your perception of a situation is roughly what’s actually happening, that other reasonable people would see it the same way, and that disagreement is mostly about not having the same information. None of these is reliably true.
  • Distance. “Closer is more important.” In time: the near future weighs more than the far future, even when it shouldn’t. In space: the local outweighs the distant. In identity: people you know personally weigh more than strangers in the same situation. This is why retirement saving is hard and why we underreact to crises that are real but slow.
  • Safety. “Bad is stronger than good.” A loss hurts about twice as much as an equivalent gain feels good. Bad news goes deeper than good news. Threats command attention faster than opportunities. This is the evolutionary inheritance from environments where missing a threat killed you and missing an opportunity didn’t.
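
That “about twice” is a measured quantity, not a figure of speech. A minimal sketch, using Tversky and Kahneman’s 1992 prospect-theory estimates (the exact constants vary across studies; the direction does not): the value function they fit weighs losses by a factor λ of roughly 2.25,

    v(x) =
      \begin{cases}
        x^{\alpha} & \text{for gains } (x \ge 0) \\
        -\lambda\,(-x)^{\alpha} & \text{for losses } (x < 0)
      \end{cases}
    \qquad \alpha \approx 0.88,\ \lambda \approx 2.25

Read plainly: a loss of a given size is weighted about 2.25 times as heavily as a gain of the same size, which is where the Safety family’s “about twice” comes from.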

The SEEDS view doesn’t replace the four-problem view. It complements it. The four-problem view is best when you want to understand why the brain takes a shortcut; the SEEDS view is best when you want to know what to do about it in a specific moment. If you’re aware of feeling pressed for time and your snap judgment feels suspiciously easy, you’re probably in Expedience territory, and there’s a specific corrective for that: slow down, defer the decision, do not let fluency carry the day. If you notice yourself disagreeing more sharply with someone you don’t identify with, you’re probably in Similarity territory, and the corrective is different: look for the strongest version of their argument before you respond.

We’ll come back to the SEEDS-specific correctives in the next essay, on what actually helps you think better. For now the point is that the long list of biases is not chaos. It has at least two coherent structures inside it.
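
If a concrete picture helps, those two structures are just two independent labels attached to the same entries. Here is a hypothetical sketch in TypeScript: the shape is illustrative, the field names are made up, and it is not the actual data model behind the catalogue embedded below.

    // Two independent classifications of the same entry:
    // which job failed (the four problems) and what activated the failure (SEEDS).
    type Problem = "filter" | "fill" | "choose" | "store";
    type SeedsFamily =
      "similarity" | "expedience" | "experience" | "distance" | "safety";

    interface BiasEntry {
      name: string;          // e.g. "planning fallacy"
      problem: Problem;      // which of the four jobs it is a failure mode of
      family: SeedsFamily;   // which cause tends to activate it
      related: string[];     // nearby biases, usually in the same family
    }

    // Cutting the list either way is then a one-line filter:
    const byProblem = (entries: BiasEntry[], p: Problem): BiasEntry[] =>
      entries.filter((e) => e.problem === p);
    const byFamily = (entries: BiasEntry[], f: SeedsFamily): BiasEntry[] =>
      entries.filter((e) => e.family === f);

The only point of the sketch is that neither axis replaces the other: the same entry carries both labels, so the list can be cut either way without contradiction.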


A practical aside on counting. You will see different numbers in different places when biases come up. The discrepancy can look like sloppiness, but it is consistent enough to be a question worth answering.

The number 188 comes from Buster Benson’s 2016 Cognitive Bias Codex specifically: the four-problem chart you may have seen on a wall, and the same set used in the catalogue embedded below. That is a specific framework counting a specific set of well-documented biases.

The figure of roughly 200 comes from the broader psychology literature: somewhere between 180 and 220 entries, depending on how strictly you define “well-documented.” The fuzz around the edges is real. Some biases are robust findings replicated in dozens of studies, others rest on a single study that the field has not yet stress-tested, and still others appear under multiple names. Any honest list has to make some judgment calls.

The number 326 comes from a different framework altogether, called the CX Codex for Cognitive Bias and Heuristics, which is widely used in user-experience design. It is the most generous count of the three. Part of the difference is that the CX Codex lumps biases (systematic errors in output) and heuristics (the cognitive processes that produce those outputs) into one list. The two are conceptually distinct: a heuristic is a mechanism; a bias is the error mode that mechanism sometimes produces. The availability heuristic, for instance, is a mechanism for judging frequency by ease of recall; overestimating dramatic risks is the bias it sometimes produces. Counting them together is convenient for practical work, but it inflates the apparent number of separate phenomena: it counts the ingredients along with the dishes.

None of the three numbers is wrong. They are different things, counted under different rules, for different purposes.


Below is the catalogue itself, made browsable. Search by name, filter by SEEDS family or by which of the four problems the bias addresses, or click any entry to read its full mechanism, its landmark study, its real-world consequences, and the most closely related biases (which usually share a family). Don’t try to read the whole thing. The point of the tool is not memorization. It’s that the next time you find yourself wanting to ask what bias might be at work here, you have somewhere honest to look.

Click a bias to expand; click again to collapse. Open the microsite full-screen →

And if you want the five-family view in one piece, here it is — the SEEDS wheel with each family’s mitigations attached. This one rewards a slow read more than the catalogue does:

Open the microsite full-screen →

Maps don’t drive cars. The next essay is about what to actually do with all of this, which turns out to be a more honest and more interesting question than most popular writing makes it out to be. The short version: educating yourself about your biases helps less than you’d hope, structural fixes work better than educational ones, and the gap between knowing you have a bias and actually being rid of it is one of the most replicated findings in this whole field. We’ll meet that finding head-on.