
How People Actually Make Decisions (And Why We Almost Always Get It Wrong)

Most of us think we decide with logic. The research says otherwise — and the gap between how you think you decide and how you actually decide is where most bad choices hide. Here's the playbook for closing it, plus a decision journal template you can start using today.

TWEGS
Notes on mental models, systems, and decisions

Ask anyone how they made an important decision — choosing a job, ending a relationship, buying a house, picking a school — and you'll get a clean story. I weighed the pros and cons. I thought about it for weeks. I made a list. I trusted my gut, but only after running the numbers.

The research says almost none of that is true. Or rather: the deliberation was real, but it was mostly theatre. The decision had already been made.

The gap between how we think we decide and how we actually decide is the single largest source of preventable error in human life. Bad hires, doomed relationships, money lost on confident bets, careers chosen for reasons we can't quite articulate — most of these weren't information failures. They were decision-process failures. The data was usually there. The thinking around the data wasn't.

This post has two parts. First, a clear-eyed look at how human decision-making actually works — drawing from Kahneman, Klein, Duke, and Munger. Second, a structured decision journal you can start using today to catch your own bad calls before they ship.

The two systems running your brain

The cleanest model of how people decide comes from Daniel Kahneman's Thinking, Fast and Slow. Two systems run in parallel:

System 1 is fast, automatic, and emotional. It pattern-matches against everything you've ever experienced and produces a decision before you're consciously aware you're deciding.

System 2 is slow, deliberate, and effortful. It's what you think of as "thinking." It evaluates evidence, considers tradeoffs, and reaches conclusions you can defend.

Here's the uncomfortable part: System 1 makes the vast majority of your decisions, including ones you swear you reasoned through. System 2 mostly shows up after the fact to justify what System 1 already chose.

This isn't a flaw to eliminate. System 1 is fast for a reason — deliberating on every micro-choice would be paralysis. But it does mean that your most important decisions are often System 1 pattern-matches dressed in System 2 language. The point isn't to override your gut. It's to know when to trust it and when to second-guess it.

The four ways decisions actually go wrong

Across the decision-making literature — Kahneman on biases, Gary Klein on expert intuition, Annie Duke on probabilistic thinking, Charlie Munger on mental models — the failure modes cluster into four categories.

| Failure mode | What it looks like | Common example |
| --- | --- | --- |
| Outcome bias | Judging the decision by the result, not the process | "It worked, so it was right" — even when you got lucky |
| Confirmation bias | Seeking evidence that supports what you already believe | Researching only the data that backs your choice |
| Availability bias | Overweighting what's vivid, recent, or easy to recall | Fearing plane crashes more than car accidents |
| Resulting | Confusing decision quality with outcome quality | Beating yourself up for a good decision that went bad |

The biggest one — and the one most worth understanding deeply — is resulting, a term popularised by former poker champion Annie Duke. You make a smart, well-reasoned bet under uncertainty. It doesn't work out. You conclude the decision was bad.

It wasn't. The decision was good. The outcome was bad. These are different things, and confusing them means you'll either learn the wrong lesson (and stop making good bets) or learn no lesson at all (and keep making bad ones because they happened to work). In a world where most important decisions involve uncertainty, separating decision quality from outcome quality is the single highest-leverage cognitive skill you can build.
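Resulting is easier to feel in numbers. Here's a toy simulation (the figures are invented for illustration, not from the post): a bet that wins $100 with 70% probability and loses $100 otherwise has a positive expected value of +$40, yet it still loses roughly three thousand times in ten thousand tries. Judging any single loss as "a bad decision" is resulting.

```python
import random

random.seed(42)

def good_bet():
    """A well-reasoned bet: win $100 with probability 0.7, lose $100 otherwise.
    Expected value is 0.7 * 100 + 0.3 * (-100) = +$40, so taking it is a good decision."""
    return 100 if random.random() < 0.7 else -100

outcomes = [good_bet() for _ in range(10_000)]
losses = sum(1 for o in outcomes if o < 0)
average = sum(outcomes) / len(outcomes)

# Any single outcome can still be a loss; that doesn't make the decision bad.
print(f"Lost {losses} of 10,000 bets")            # roughly 3,000 losses
print(f"Average payoff per bet: ${average:.0f}")  # close to the +$40 expected value
```

Run it a few times with different seeds: individual outcomes swing wildly, but the average converges on the expected value. That convergence is what "good decision" means under uncertainty.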

When intuition is actually right

The picture isn't entirely grim. Gary Klein's research on firefighters, military commanders, and emergency room doctors shows that expert intuition — what he calls recognition-primed decision-making — is remarkably accurate in the right conditions.

The conditions are specific:

  • Repetitive, high-feedback environments. Chess masters, paramedics, and experienced traders develop reliable intuition because they get fast, clear feedback on whether their pattern-matches were right.

  • Stable rules. The chess board doesn't change. Weather patterns don't suddenly invert. Domains with stable underlying logic build trustworthy intuition.

  • Many repetitions. You need thousands of reps for the pattern library to become reliable.

The trap is using intuition in domains where these conditions don't hold. Stock pickers, political pundits, and most strategic decision-makers operate in environments where feedback is slow, rules shift, and rep counts are low. Their intuition feels just as confident as the chess master's — and is dramatically less reliable.

The honest question to ask yourself before trusting your gut: Have I been in this exact situation, with fast feedback, hundreds of times? If yes, trust the gut. If no, slow down and reach for System 2.

The infrastructure of a good decision

Decisions made under uncertainty have a hidden architecture. Most people only see the surface — the choice itself — and miss the layers underneath that determine whether the choice was any good.

  1. FRAME: What problem am I actually solving? What does success look like?
  2. OPTIONS: What choices am I actually considering?
  3. INFORMATION: What do I know? What am I assuming?
  4. PROBABILITIES & PAYOFFS: What's the likelihood of each outcome? What's the cost of being wrong?
  5. DECISION: What I chose · why · how confident
  6. REVIEW: What happened · what I learned

Every layer is a place where decisions quietly fail. A good decision journal forces you to walk down the layers in order — and creates a record you can come back to, six months or six years later, to learn from.

The decision journal: a working template

This is the most useful tool I know for getting better at decisions over time. It's adapted from frameworks used by Annie Duke, Shane Parrish (Farnam Street), and Daniel Kahneman.

The point isn't to use it for every decision. Most decisions are small, reversible, and not worth the friction. The journal is for consequential, hard-to-reverse decisions made under uncertainty — the ones where you genuinely don't know what the right answer is, and where the cost of being wrong is real.

The Decision Journal Template

1. The Frame

  • What decision am I making? (One sentence.)
  • Why am I making it now?
  • What does success look like 6 months / 1 year / 5 years from now?

2. The Situation

  • What's happening right now that's prompting this?
  • What's the state of the world I'm deciding within?
  • What's my emotional state as I write this?

3. The Options

  • Option A:
  • Option B:
  • Option C: (Force yourself to find at least three. Two-option framing is a known trap.)
  • What would I do if none of these were available?

4. What I Know vs. What I'm Assuming

  • Facts I'm certain of:
  • Things I believe but haven't verified:
  • Things I'm explicitly assuming:

5. Probabilities & Payoffs

  • For my likely choice: how confident am I it's right? (Force a number, e.g., 70%.)
  • What's the upside if I'm right?
  • What's the downside if I'm wrong?
  • Is the downside reversible? (If yes, bias toward speed. If no, slow down.)
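Step 5 is just arithmetic once you force the numbers. A minimal sketch with hypothetical figures (the salary and relocation numbers are invented for illustration, not from the post):

```python
# Worked example of section 5: put numbers on confidence and payoffs.
p_right = 0.70        # stated confidence the choice is correct
upside = 50_000       # payoff if right (e.g., salary gain from a job move)
downside = -20_000    # cost if wrong (e.g., relocation plus a year of setback)

# Expected value weighs each payoff by its probability.
expected_value = p_right * upside + (1 - p_right) * downside
print(f"Expected value: ${expected_value:,.0f}")  # $29,000
```

The number itself matters less than the act of producing it: writing down 70% and a concrete downside makes you confront whether you actually believe either.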

6. The Decision

  • What I'm choosing:
  • The single most important reason:
  • The strongest argument against my choice:

7. The Pre-Mortem (Gary Klein's trick — imagine you're 12 months in the future and the decision was a disaster. What happened?)

  • The most likely failure mode:
  • What would I see early if it were going wrong?
  • What's my exit / pivot plan if it does?

8. The Review (Filled in 3, 6, or 12 months later)

  • What actually happened?
  • Was the decision good, separate from the outcome? (Resulting check.)
  • What would I do differently with the same information?
  • What did I learn that generalises?

The format itself isn't sacred. The discipline is. Writing this down forces you to surface assumptions you'd otherwise leave implicit, name the uncertainty you'd otherwise hide from yourself, and create a record you can hold yourself accountable to.
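If you keep the journal in a file rather than a notebook, the eight sections map onto a simple record. A sketch with hypothetical field names and an invented entry, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class DecisionEntry:
    # Hypothetical fields mirroring the eight template sections.
    frame: str              # 1. the decision, in one sentence
    situation: str          # 2. what's prompting it, emotional state
    options: list[str]      # 3. at least three options
    facts: list[str]        # 4a. verified facts
    assumptions: list[str]  # 4b. explicit assumptions
    confidence: float       # 5. forced probability, e.g. 0.70
    choice: str             # 6. what was chosen, and why
    premortem: str          # 7. most likely failure mode
    review: str = ""        # 8. filled in months later

entry = DecisionEntry(
    frame="Take the remote offer or stay?",
    situation="Current role has plateaued; offer expires Friday.",
    options=["Accept", "Decline", "Counter for two more weeks"],
    facts=["Offer is 20% above current salary"],
    assumptions=["Remote team culture will suit me"],
    confidence=0.65,
    choice="Accept; the downside is reversible within a year",
    premortem="Isolation erodes motivation by month six",
)
```

A structured record also makes the review habits below mechanical: once confidence is a number in a field, calibration is a query rather than an archaeology project.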

Three habits that make the journal work

Tools without habits are just files. Three practices turn the journal from a Notion page you abandon into a feedback loop that actually improves your thinking.

Calibrate your confidence. When you write "70% confident," go back six months later and check: of all the things you said you were 70% confident about, did 70% actually pan out? Most people are wildly overconfident. Tracking this is humbling and incredibly clarifying.
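The calibration check is mechanical once entries are recorded. A sketch with invented entries: group predictions by stated confidence and compare each group to its actual hit rate.

```python
from collections import defaultdict

# Hypothetical journal entries: (stated confidence, whether the call panned out).
entries = [
    (0.7, True), (0.7, True), (0.7, False), (0.7, True), (0.7, False),
    (0.9, True), (0.9, True), (0.9, False), (0.9, False), (0.9, False),
]

# Group outcomes by the confidence level that was claimed at decision time.
buckets = defaultdict(list)
for confidence, panned_out in entries:
    buckets[confidence].append(panned_out)

for confidence, results in sorted(buckets.items()):
    hit_rate = sum(results) / len(results)
    print(f"Said {confidence:.0%} confident, right {hit_rate:.0%} of the time")
# A 90% bucket hitting only 40% is the classic overconfidence signature.
```

Ten entries won't tell you much; a year of them will. The gap between the claimed column and the actual column is your personal overconfidence, measured.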

Separate decisions from outcomes — every single time. When reviewing, force yourself to grade the decision and the outcome on different scales. Decision quality: A. Outcome: C. This is the antidote to resulting, and it's the only way to keep making good bets when some of them inevitably go bad.

Look for patterns across entries. After 10 or 20 entries, you'll start seeing your own failure modes. Maybe you systematically underestimate timelines. Maybe you overweight the loudest voice in the room. Maybe you decide too fast when stressed and too slow when comfortable. The patterns are the prize.

The questions that reveal whether you're ready to decide

Before any consequential decision, three questions cut through most of the noise:

Is the framing right? Most bad decisions aren't bad answers — they're good answers to the wrong question. Ask: What am I actually trying to achieve here? Then ask it again. The first answer is usually too narrow.

Have I genuinely considered the alternative? Not "could I list it" — could you argue for the option you're rejecting, with the same intensity as the option you're choosing? If not, you haven't decided. You've rationalised.

Will I be able to tell, later, whether this was a good decision? If the answer is no — if there's no observable signal that would distinguish a good outcome from a bad one — the decision isn't well-defined yet. Go back to the frame.


The point of all this isn't to turn every choice into a 30-minute exercise. Most decisions don't deserve that. The point is to recognise which decisions do — and to give those ones the cognitive machinery they actually require, instead of the System 1 pattern-match they'll otherwise get by default.

Better decisions don't come from being smarter. They come from being more honest with yourself about what you know, what you're assuming, and what you'll do if you're wrong. The journal is the simplest tool I know for forcing that honesty. Start one this week. In six months you'll have a record of your own thinking — and a much clearer view of how it actually works.
