Mental Models · Systems Thinking

Second-Order Thinking

And then what happens? The discipline of seeing past the obvious solution to its actual consequences — and why almost no one does it under pressure.

14 min read · Topic: Decisions & systems · Level: Introductory

In 1902, the colonial government of French Indochina had a problem with rats in Hanoi. The newly built sewer system had become an unintentionally perfect rat habitat — warm, dark, fed by the city's waste, and connected to every neighborhood by a network of underground tunnels. Plague was a real concern. The rats had to go. So the administrators did what seemed like the obvious thing: they offered a bounty. One cent for every rat tail brought in. The plan worked beautifully at first. Tails arrived by the thousands. The government paid out, congratulated itself on the elegant alignment of incentives, and waited for the rat population to crash. It didn't. Soon, officials began to notice something disturbing: the city was full of tailless rats. Hunters, having figured out that a rat could only produce one tail, were cutting them off and releasing the rats back into the sewers to breed. Some entrepreneurial Hanoians had even started farming rats specifically to harvest tails. The bounty hadn't reduced the rat population. It had created an industry that increased it.

This story, sometimes called the Cobra Effect after a similar incident in colonial India, has become the canonical illustration of what happens when you stop thinking after the first step. The Hanoi administrators weren't stupid. Their bounty program would have worked perfectly if the only thing it did was incentivize rat-killing. But every action taken in a real-world system causes more than its immediate effect. It causes a second-order effect, as the system reacts to the first effect. And then a third-order effect, as the system reacts to that. Most failed plans, most botched policies, most companies that destroyed themselves by trying to fix a problem — almost all of these failures share a single cognitive structure. They were good first-order solutions. They didn't survive contact with the second order.

This is the discipline of second-order thinking. The phrase was sharpened by the investor Howard Marks, who calls it "second-level thinking" in his book The Most Important Thing, but the underlying idea is much older. The 19th-century French economist Frédéric Bastiat wrote a famous essay in 1850 distinguishing "what is seen and what is not seen" — the immediate, visible consequences of an action versus the cascading effects that play out over time and remain invisible to people who only look once. Charlie Munger spent decades arguing that the difference between good and bad investors was almost entirely the willingness to ask, after every plan, the simplest possible follow-up question: "And then what?"

This essay is about that question — what it does, why most people skip it, and how to actually train yourself to ask it under the conditions where it matters most.

What second-order thinking actually is

Most reasoning stops at the first order. You take an action; the action has an immediate effect; you compare that effect to your goal and call it good or bad. This is fast, intuitive, and works well for simple closed problems — the kind a hunter-gatherer brain evolved to handle. Throw a rock at the rabbit; rabbit dies; eat dinner. First-order reasoning is sufficient because the system in which the action takes place doesn't react in any complicated way.

But almost every problem worth solving in modern life happens inside a system: a market, a relationship, an organization, an ecosystem, a political body, a dynamic of supply and demand. Systems have the awkward property of responding to interventions. When you push on a system, the system pushes back, adjusts, routes around the push, finds new equilibria, and produces effects that are often opposite to the ones you intended. Second-order thinking is the discipline of asking, before you push, how the system will react to your push, and what that reaction will then cause.

The structure of the question is recursive. The first order is: what will my action immediately do? The second order is: given what that immediately does, how will the system then respond, and what will those responses cause? The third order is the same question applied to the results of the second order, and so on. Most people stop at order one. The good thinkers go to order two. A small minority — investors, generals, public-policy designers, founders who survive — push to orders three and four when the stakes warrant it.

The crucial point is that the second-order effects are usually the ones that determine whether the action was a good idea. The first-order effect of a price control is that prices stay low. The second-order effect is that supply contracts because no one wants to sell at the controlled price. The third-order effect is shortages, queues, and black markets. Anyone evaluating the policy by its first-order effect alone — "did the price stay low?" — would call it a success. Anyone looking past the first order would call it a disaster. The same data, the same world; different orders of analysis produce opposite verdicts.

Bastiat, Marks, and the long lineage

The intellectual roots of second-order thinking go back at least to the 19th century. Bastiat's 1850 essay Ce qu'on voit et ce qu'on ne voit pas ("That Which Is Seen and That Which Is Not Seen") remains, by some measures, the most concise statement of the principle ever published. Bastiat opens with the parable of the broken window: a shopkeeper's son breaks a window, and a passing observer offers the consoling thought that this is actually good for the economy — the glazier will earn money replacing the window, and that money will circulate. Bastiat's response is the cleanest example of second-order thinking on record: the glazier earns money, yes — but the shopkeeper, who would have spent that same money on a new pair of shoes or a book, is now poorer by exactly that amount, and the cobbler or bookseller earns nothing. The observer saw what was seen (the glazier's gain). The economist saw what was not seen (the cobbler's loss). The first-order analysis declared the broken window a stimulus; the second-order analysis recognized it as a pure destruction of wealth.

"There is only one difference between a bad economist and a good one: the bad economist confines himself to the visible effect; the good economist takes into account both the effect that can be seen and those effects that must be foreseen."

— Frédéric Bastiat, 1850

Bastiat's framework was passed down through generations of economists — Henry Hazlitt's 1946 book Economics in One Lesson is essentially a 200-page elaboration of Bastiat's principle applied to twentieth-century policy. From there, the idea moved into investing. Howard Marks, the co-founder of Oaktree Capital Management, made "second-level thinking" a centerpiece of his investment philosophy, arguing in The Most Important Thing (2011) that the difference between a good investor and an average one is almost entirely the willingness to think past the first level. The first-level investor, seeing a great company, says "buy." The second-level investor asks: "yes, it's a great company, but is that already priced in? What does the price assume? How will the price react if expectations are met versus exceeded versus missed?" Same observation; very different decisions.

Charlie Munger, Warren Buffett's longtime partner, was perhaps the most relentless evangelist for the discipline. His advice, repeated for decades in every form he could find, came down to a single instruction: when someone presents you with a plan, ask "and then what?" Then ask it again. Then ask it again. The plans that survive three rounds are usually fine. The plans that don't are usually disasters waiting to happen.

Why we skip the second order

If second-order thinking is so obviously valuable, why doesn't everyone do it? The answer is that the cognitive, emotional, and social environment we live in actively rewards first-order reasoning and punishes second-order reasoning, and most people respond accordingly without noticing.

First-order reasoning is faster. The immediate effect of an action is, almost by definition, the most visible and easiest to compute. Second-order effects require modeling the system, anticipating reactions, and tracing causal chains across time. That work takes more cognitive energy and more time. In a world that rewards speed — fast decisions, hot takes, "decisive leadership" — the cost of pausing to think two moves ahead can be high enough that people skip it as a rational response to their incentives.

First-order effects are vivid; second-order effects are abstract. When the rat-bounty program in Hanoi was paying out for tails, the bag of tails was concrete and motivating. The growing population of farmed rats was diffuse, slow, and harder to perceive until much later. Vividness drives attention; abstraction loses to it. Most people don't ignore second-order effects because they don't believe in them — they ignore them because the first-order effects are right there in front of their eyes, demanding to be acted on.

Second-order thinking makes you the buzzkill. In any group setting, the person who points out that the obvious-seeming plan has unattractive second-order consequences is, by social default, the killjoy. They're slowing things down, raising objections, refusing to celebrate. Even when they're right, the social cost of being the second-order skeptic in a room of first-order enthusiasts is non-trivial. Many people learn over time to keep their second-order analysis to themselves, which means it doesn't influence the decision and the bad plan goes through anyway.

Organizations select for first-order optimizers. Most performance-evaluation systems reward visible, near-term, attributable outcomes. A manager who hits this quarter's number gets promoted; the manager who avoided a slow-moving disaster two years out doesn't get credit, because the disaster never happened and the absence of a counterfactual catastrophe is invisible. This is why so many organizations are filled with first-order thinkers in senior positions: the second-order thinkers either left, got passed over, or learned to mimic the first-order style to survive performance reviews.

Hindsight makes first-order failures look like fluke events. When a first-order solution produces a second-order disaster, the natural narrative response is "no one could have seen that coming." This is almost always wrong — someone could have seen it coming, often someone did, often the warnings were dismissed at the time. But the framing protects everyone involved from accountability and lets the same pattern repeat next quarter on a different problem. The system doesn't learn, because the system has built-in defenses against learning.

The structural problem

Second-order thinkers do not get rewarded for the disasters they prevent, because prevented disasters are invisible. They do get punished for being slow, cautious, or contrarian in real time. This asymmetry quietly removes second-order thinkers from the rooms where decisions are made — and it is why so many organizations keep making the same kinds of avoidable mistakes for decades.

The classic failure patterns

Once you know what to look for, second-order failures cluster into a small number of recognizable patterns. Each one has been studied extensively under different names. Recognizing them in the wild is most of what second-order thinking, in practice, becomes.

Pattern 1

The Cobra Effect

You incentivize a proxy for what you want. People game the proxy without producing what you want. The Hanoi rat tails. Bug bounties paying per bug, leading to deliberately introduced bugs. Sales commissions creating customer-hostile behavior.

Pattern 2

Goodhart's Law

When a measure becomes a target, it ceases to be a good measure. Test scores, KPIs, citation counts — once people optimize them, the correlation with the underlying thing you cared about often breaks. (See: Goodhart's Law essay.)

Pattern 3

Principal-Agent Problems

The person acting on your behalf has different interests than yours. Brokers paid by commission push trades; doctors paid per procedure perform procedures; managers paid in stock manage to the stock. The first-order incentive doesn't survive the second-order optimization.

Pattern 4

Tragedy of the Commons

A shared resource gets depleted because each individual user, acting in their own first-order interest, ignores the second-order effect of cumulative depletion. Overfishing, antibiotic resistance, traffic, public discourse — same pattern at different scales.

Pattern 5

Streisand Effect

Trying to suppress information makes it more visible. The act of suppression is itself a signal that the information is important, drawing attention. Banning a book sells more copies; deleting an embarrassing tweet creates a screenshot industry; threatening lawsuits creates news cycles.

Pattern 6

Reactance & Backlash

Push too hard for a position and people instinctively push back, often adopting the opposite. Aggressive marketing breeds skepticism; heavy-handed regulation breeds non-compliance; relentless persuasion breeds the very resistance the persuader was trying to overcome.

Pattern 7

Adaptation & Hedonic Treadmill

Improvements in conditions cause people to recalibrate expectations, eliminating much of the benefit. Salary raise → new lifestyle → still feels tight. Faster commute → moving farther out → same daily travel time. The system absorbs the gain by shifting the baseline.

Pattern 8

The Law of Conservation of Risk

Make something safer and people behave more recklessly (a dynamic known in the research literature as risk compensation). Safer cars produce faster driving; helmets produce more aggressive cycling; safety nets produce more risk-taking. Total risk often returns to the same level via behavioral adjustment.

Most second-order disasters in the real world are some combination of these patterns. Recognizing the pattern is half the work — once you've seen the cobra effect once, you start seeing it everywhere it's about to happen. The other half is being willing to actually slow down and check, in a moment of decision, whether the plan in front of you is one of these patterns wearing a different costume.

A worked case: the Cobra Effect

The cobra effect deserves its own walkthrough because it's the cleanest case study of how second-order failure plays out in real time, and because the structure is the same whether you're a colonial administrator, a product manager, or a parent trying to incentivize chores.

What makes the Hanoi episode such a good teaching example is that each individual actor was behaving rationally at every step. The administrators rationally responded to a rat infestation by paying for rat-killing. The hunters rationally maximized their income by harvesting tails efficiently. The rat-farmers rationally noticed the price of tails and saw an entrepreneurial opportunity. Nobody was being stupid; everyone was optimizing in their own first-order interest. The disaster emerged from the composition of those individual rational responses, in a way that no single actor's first-order analysis could have detected.

This is the key insight: second-order failures usually aren't caused by stupid people doing stupid things. They're caused by reasonable people doing locally reasonable things in a system whose feedback loops they didn't model. The fix isn't to be smarter than the people in the system. It's to ask the simple question they didn't: given that everyone in the system will act in their own interest, what's the equilibrium that emerges, and is that the equilibrium I wanted?

Where second-order thinking does the most work

Some domains reward second-order thinking far more than others. The pattern is clear: the more interactive the system, the more delayed the feedback, and the more rational the other actors, the more second-order thinking pays off.

Investing

Markets are perhaps the purest second-order environment in modern life. Every other actor is also trying to anticipate price movements, which means the price of an asset already incorporates a great deal of first-order analysis. To beat the market, you have to think at a higher order than the marginal participant — not "is this a good company?" but "is this a better company than the price suggests?" Howard Marks built a career on the observation that most investors stop at level one, which means level-two thinking has a persistent edge.

Public policy

Every policy is an intervention in a system where millions of actors will respond. Rent control, minimum wages, drug prohibition, financial regulation, immigration policy — each has well-studied first-order effects and equally well-studied second-order effects, and the first-order effects are almost always what gets debated publicly while the second-order effects are what determine whether the policy succeeded. Bastiat's broken window has been re-litigated under new names every decade since 1850, and every decade some version of "the seen but not the unseen" wins the political argument despite losing the policy outcome.

Product and pricing

Product decisions are second-order rich because users are rational adapters. Lower the price → competitors lower theirs, and you're back where you started with less margin. Add a feature → users expect the next feature for free. Aggressive sales tactics → user trust degrades. Most product disasters are first-order solutions to user problems that ignore how the user will respond once the solution is deployed.

Personal relationships

Closer to home, every relationship is a system where the other person responds to your behavior. Criticism aimed at producing improvement often produces resentment instead. Demands for reassurance often produce withdrawal. Bringing up a complaint at the wrong moment can entrench the behavior you wanted to change. In every case, the first-order action seems reasonable; the second-order response is what determines whether the relationship gets better or worse.

Personal Example

The performance-review trap

A manager wants their team member to take more initiative. First-order solution: tell them so directly. "I need you to take more initiative." This sounds clear, fair, and constructive.

Second-order: the team member, hearing this, becomes self-conscious about every action — does this count as initiative? Will that be seen as overstepping? They become more cautious, not more proactive, because the explicit feedback has made every move legible to evaluation. The opposite of what the manager wanted, produced by exactly the action the manager thought would help.

The "and then what?" discipline

Second-order thinking has one practical, deployable form, and it's the question Munger spent decades evangelizing: "And then what?" Asked once after a proposed action, it forces you to see the second order. Asked recursively, it carries you to the third and fourth orders. The question is almost embarrassingly simple. The hard part is asking it before you've already committed to the plan.

Consider how it works applied to the Hanoi bounty. "We'll pay one cent per rat tail brought in." And then what? People will go out and kill rats to collect bounties. And then what? Some people will figure out it's easier to harvest tails than kill rats. And then what? The rat population will keep breeding, but we'll be paying out as if we were reducing it. And then what? We'll be subsidizing rat farming. Stop. Don't launch the program.

Three rounds of "and then what?" — under five minutes of thought — would have spared the Hanoi administration months of paid-out bounties, a measurable increase in the rat population, and a permanent place in the cautionary-tales literature. The discipline isn't intellectually demanding. It's temporally demanding: you have to do it before, not after. And that requires resisting the cultural pressure that says fast decisions are better than thoughtful ones.

For high-stakes decisions, the recursion can be made structural. Write down the proposed action. Ask "and then what?" once and write down the answer. Ask it again of that answer. Ask it again. Stop when the chain reaches something you can no longer plausibly predict. Look at the whole chain together. If anywhere on the chain you find a result you don't want, you've identified a problem with the plan that the first-order analysis missed entirely. Most of the time, the plan needs modification — sometimes complete redesign — to avoid the predictable cascade.
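
For readers who want the stopping rule made mechanical, here is a minimal sketch in Python. It is an illustration only, not a tool the essay prescribes: the `predict` function and the hard-coded Hanoi table are hypothetical stand-ins for the human judgment that supplies each answer.

```python
def and_then_what(action, predict, max_depth=3):
    """Walk the consequence chain of a proposed action.

    `predict` maps the latest consequence to the next predicted one, or
    returns None once the chain is no longer plausibly predictable.
    `max_depth` is the stopping rule: two or three orders for routine
    decisions, deeper when the stakes warrant it.
    """
    chain = [action]
    for _ in range(max_depth):
        nxt = predict(chain[-1])
        if nxt is None:  # no longer plausibly predictable, so stop
            break
        chain.append(nxt)
    return chain


# The Hanoi bounty replayed as a hard-coded prediction table (illustrative only).
hanoi = {
    "pay one cent per rat tail": "people kill rats to collect bounties",
    "people kill rats to collect bounties": "hunters harvest tails and release the rats to breed",
    "hunters harvest tails and release the rats to breed": "farming rats for tails becomes profitable",
}

for order, step in enumerate(and_then_what("pay one cent per rat tail", hanoi.get)):
    print(f"order {order}: {step}")
# If any step in the printed chain is an outcome you don't want, the
# first-order analysis missed something: modify or redesign the plan.
```

The point is the shape, not the code: a bounded loop with an explicit exit condition, which is exactly what ad-hoc deliberation tends to lack.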

The good thinker doesn't ask "what is my action going to do?" — they ask "what is my action going to do, and then what is everyone going to do about that, and then where will we end up?" Most decisions look completely different by the third question.

Where second-order thinking fails

Second-order thinking is one of the most useful mental disciplines available, but, like any tool, it has clear limits. Pretending otherwise leads to its own kinds of error.

Analysis paralysis

Taken too far, second-order thinking becomes an excuse for inaction. "And then what?" applied recursively without termination conditions can stretch any decision into a multi-week analysis exercise. Most decisions don't justify that level of scrutiny — most decisions are recoverable, and the cost of moving slowly often outweighs the cost of a moderate second-order miss. The discipline needs a stopping rule: think two or three orders deep on important decisions, then act. The point of second-order thinking is to make better decisions, not to avoid making them.

Sophisticated rationalization for inaction

A close cousin of paralysis is the use of second-order analysis to justify doing nothing while feeling intellectually superior to the people who actually did something. "Yes, but have you considered the second-order effects?" can be a real contribution to the conversation or a sophisticated way of refusing to engage. Status-quo bias is comfortable for second-order thinkers, because the status quo's second-order effects are already absorbed into reality, while the proposed change's effects are speculative. This asymmetry can quietly turn second-order thinkers into perpetual skeptics who never propose anything themselves.

Compounding uncertainty

Each order of analysis multiplies the uncertainty. If you're 80% confident in your first-order prediction and 70% confident in your second-order analysis, your third-order conclusion carries a compounded confidence of roughly 56% — barely better than guessing. By the fourth order, most analysis is fiction wearing the costume of rigor. The discipline has diminishing returns, and the people who claim to be reasoning at the seventh or eighth order are usually telling a story rather than analyzing one. Two or three orders is often where the real value lives.
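
To make the arithmetic explicit (a minimal formalization using the numbers above, assuming each order's prediction must hold for the next to matter): if $p_k$ is your confidence that the order-$k$ prediction is right, then a conclusion at order $n$ inherits at most the product of the confidences beneath it:

$$P(\text{order-}n \text{ conclusion}) \;\le\; \prod_{k=1}^{n-1} p_k, \qquad \text{here } 0.8 \times 0.7 = 0.56.$$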

Some systems are genuinely unpredictable

Some domains — markets in the very short term, complex social dynamics, technological transitions — are genuinely chaotic, in the technical sense that small differences in initial conditions produce large differences in outcomes. In these domains, second-order thinking can produce confident-sounding analyses that are no better than the first-order intuition they replaced. The honest answer for chaotic systems is often "we can't predict this; let's design for resilience to many possible outcomes" rather than "let me trace the third-order consequences carefully." Knowing which systems reward analytical second-order thinking and which require humility instead is itself part of the discipline.

Connection to other models

Second-order thinking sits naturally next to several other essays in this collection. Goodhart's Law is a specific second-order pattern (when a measure becomes a target, it ceases to be a good measure). The Pre-Mortem is a structured way to surface second-order failure modes before they happen. Inversion is the discipline of asking "what's the second-order effect of NOT doing this?" — which is just as important as the second-order effect of acting. Confirmation Bias is what stops us from honestly seeing second-order effects we don't want to see. The cluster of mental models is, in a real sense, all aimed at the same target from different angles: the world is more complex than your first instinct suggests, and the disciplines of probabilistic and systems thinking are the corrective.

How to actually use it

Second-order thinking, like calibration, becomes a habit through practice rather than reading. Here's the working version of the discipline.

The second-order discipline

1
Notice when you're at the first order

The cue is a sense of certainty about what an action will do. "This will solve it" or "this is obviously the right move" should trigger the question: am I considering the immediate effect only, or the system's response to the immediate effect? Most overconfident plans live at level one.

2
Ask "and then what?"

Apply the question literally to your proposed action. Don't accept "and then it will work" — that's just restating the first-order claim. Force yourself to predict what people, markets, systems, or institutions will do in response to your action's immediate effect. Write down the answer.

3
Apply it recursively, but with a stopping rule

Ask "and then what?" of your answer, and again of that answer. Two to three iterations is usually the right depth — enough to surface major second-order effects, not so much that you collapse into analysis paralysis. For high-stakes decisions, go deeper. For routine ones, two iterations is enough.

4
Look for the classic patterns

Match the situation against the catalog: cobra effect, Goodhart, principal-agent, tragedy of the commons, Streisand, reactance, adaptation, conservation of risk. If your plan structurally resembles any of these, the second-order failure is not speculative — it's predictable, and the plan needs redesign before launch.

5
Imagine the system from the other actors' point of view

What will my employees do in response to this policy? What will my customers do in response to this change? What will the market do in response to this product? Each actor in the system has their own first-order analysis, and the equilibrium that emerges from all those first-order analyses is the actual second-order effect.

6
Then act, with eyes open

The point of second-order thinking is not to avoid action — it's to choose actions whose downstream consequences are acceptable. Once you've worked through the chain, decide. Move forward. Be ready to update if the system surprises you. The discipline is in the looking, not in the not-deciding.

The principle, restated

Every action you take in a system causes more than its immediate, intended effect. The system reacts; people adapt; equilibria shift; second-order consequences emerge. Most decision failures aren't first-order errors but second-order blindness — perfectly reasonable solutions to first-order problems that didn't survive contact with how the world actually responded. The cure is one simple recursive question, asked before the plan is committed: and then what?

Frédéric Bastiat wrote his "seen and unseen" essay in 1850, in a country recovering from revolution and increasingly tempted by economic policies that sounded compassionate at the first order and produced misery at the second. The essay is older now than most countries. It still gets reread because the temptation it described has not gone away — and in some ways, the modern information environment, with its premium on confident hot takes and instant judgments, has made first-order reasoning more rewarded and second-order reasoning more costly than ever.

That asymmetry will not change. The reward for fast confident answers will keep going up; the reward for slow careful ones will keep being diffuse and delayed. But that is precisely what makes the discipline worth developing. In an environment where most people stop at level one, the small minority who can routinely think to level two — without falling into paralysis, sophistication-as-inaction, or false confidence in their predictions — has a quiet, persistent edge in essentially every domain that involves systems and humans. Which is, give or take, every domain that matters.

The good news is the discipline doesn't require unusual intelligence or special training. It requires the willingness to ask one question — and then what? — before you commit. Most people will never form the habit, because the cost of asking it is paid up front and the benefit is delivered in the form of disasters that didn't happen. But that's the deal, and once you take it, you start to notice something: the world is more legible than it looked at the first order. The patterns repeat. The cobra effect is everywhere, wearing different costumes. And every time you spot it before launching the program, you save yourself the slow, expensive lesson the colonial administrators of Hanoi paid for in cash.