
Inadequate Equilibria by Eliezer Yudkowsky

Highlights

loc: 10 Inadequate Equilibria is a book about a generalized notion of efficient markets, and how we can use this notion to guess where society will or won’t be effective at pursuing some widely desired goal. An efficient market is one where smart individuals should generally doubt that they can spot overpriced or underpriced assets. We can ask an analogous question, however, about the “efficiency” of other human endeavors.

loc: 20 generalized notions of efficiency and inefficiency take priority in explanation; they are the topic of the first half of this book. […] The second half of this book will then delve into background questions of mindset and methodology going into inadequacy analysis, and ways that this analysis can go wrong, particularly for the underconfident.

Page: 1 This is a book about two incompatible views on the age-old question: “When should I think that I may be able to do something unusually well?” These two viewpoints tend to give wildly different, nearly cognitively nonoverlapping analyses of questions like:

- My doctor says I need to eat less and exercise, but a lot of educated-sounding economics bloggers are talking about this thing called the “Shangri-La Diet.” They’re saying that in order to lose weight, all you need to do is consume large quantities of flavorless, high-calorie foods at particular times of day; and they claim some amazing results with this diet. Could they really know better than my doctor? Would I be able to tell if they did?
- My day job is in artificial intelligence and decision theory. And I recall the dark days before 2015, when there was plenty of effort and attention going into advancing the state of the art in AI capabilities, but almost none going into AI alignment: better understanding AI designs and goals that can safely scale with capabilities. Though interest in the alignment problem has since increased quite a bit, it still makes sense to ask whether at the time I should have inferred from the lack of academic activity that there was no productive work to be done here; since if there were reachable fruits, wouldn’t academics be taking them?
- Should I try my hand at becoming an entrepreneur? Whether or not it should be difficult to spot promising ideas in a scientific field, it certainly can’t be easy to think up a profitable idea for a new startup. Will I be able to find any good ideas that aren’t already taken?
- The effective altruism community is a network of philanthropists and researchers that try to find the very best ways to benefit others per dollar, in full generality. Where should effective altruism organizations like GiveWell expect to find low-hanging fruit—neglected interventions ripe with potential? Where should they look to find things that our civilization isn’t already doing about as well as can be done?

Page: 2 When I think about problems like these, I use what feels to me like a natural generalization of the economic idea of efficient markets. The goal is to predict what kinds of efficiency we should expect to exist in realms beyond the marketplace, and what we can deduce from simple observations. For lack of a better term, I will call this kind of thinking inadequacy analysis.

Page: 2 Toward the end of this book, I’ll try to refute an alternative viewpoint that is increasingly popular among some of my friends, one that I think is ill-founded. This viewpoint is the one I’ve previously termed “modesty,” and the message of modesty tends to be: “You can’t expect to be able to do X that isn’t usually done, since you could just be deluding yourself into thinking you’re better than other people.”

Page: 3 It’s also easy to imagine reasons an observer might have been skeptical. I wasn’t making up my critique of Japan myself; I was reading other economists and deciding that I trusted the ones who were saying that the Bank of Japan was doing it wrong… Yet one would expect the governing board of the Bank of Japan to be composed of experienced economists with specialized monetary expertise. How likely is it that any outsider would be able to spot an obvious flaw in their policy? How likely is it that someone who isn’t a professional economist (e.g., me) would be able to judge which economic critiques of the Bank of Japan were correct, or which critics were wise?

Page: 8 If I had to name the single epistemic feat at which modern human civilization is most adequate, the peak of all human power of estimation, I would unhesitatingly reply, “Short-term relative pricing of liquid financial assets, like the price of S&P 500 stocks relative to other S&P 500 stocks over the next three months.” This is something into which human civilization puts an actual effort.

Page: 9 I can’t predict a 5% move in Microsoft stock in the next two months, and neither can you. If your uncle tells an anecdote about how he tripled his investment in NetBet.com last year and he attributes this to his skill rather than luck, we know immediately and out of hand that he is wrong. Warren Buffett at the peak of his form couldn’t reliably triple his money every year. If there is a strategy so simple that your uncle can understand it, which has apparently made him money—then we guess that there were just hidden risks built into the strategy, and that in another year or with less favorable events he would have lost half again as much as he gained. Any other possibility would be the equivalent of a $20 bill staying on the floor of Grand Central Station for ten years while a horde of physics PhDs searched for it using naked eyes, microscopes, and machine learning.

Page: 10 C.C.E.: How do you make money off this special knowledge of yours? ELIEZER: I can’t. The market also collectively knows that the Bank of Japan is pursuing a bad monetary policy and has priced Japanese equities accordingly. So even though I know the Bank of Japan’s policy will make Japanese equities perform badly, that fact is already priced in; I can’t expect to make money by short-selling Japanese equities. C.C.E.: I see. So exactly who is it, on this theory of yours, that is being stupid and passing up a predictable payout? ELIEZER: Nobody, of course! Only the Bank of Japan is allowed to control the trend line of the Japanese money supply, and the Bank of Japan’s governors are not paid any bonuses when the Japanese economy does better. They don’t get a million dollars in personal bonuses if the Japanese economy grows by a trillion dollars. C.C.E.: So you can’t make any money off knowing better individually, and nobody who has the actual power and authority to fix the problem would gain a personal financial benefit from fixing it? Then we’re done! No anomalies here; this sounds like a perfectly normal state of affairs.

Page: 12 But even without that detailed analysis, in the epistemological background we have a completely different picture from the modest one. We have a picture of the world where it is perfectly plausible for an econblogger to write up a good analysis of what the Bank of Japan is doing wrong, and for a sophisticated reader to reasonably agree that the analysis seems decisive, without a deep agonizing episode of Dunning-Kruger-inspired self-doubt playing any important role in the analysis.

Page: 15 We might call this argument “Chesterton’s Absence of a Fence.” The thought being: I shouldn’t build a fence here, because if it were a good idea to have a fence here, someone would already have built it. The underlying question here is: How strongly should I expect that this extremely common medical problem has been thoroughly considered by my civilization, and that there’s nothing new, effective, and unconventional that I can personally improvise?

Page: 19 In a pre-market economy, when you offer somebody fifty carrots for a roasted antelope leg, your offer says something about how impressed you are with their work hunting down the antelope and how much reward you think that deserves from you. If they’ve dealt generously with you in the past, perhaps you ought to offer them more. This is the only instinctive notion people start with for what a price could mean: a personal interaction between Alice and Bob reflecting past friendships and a balance of social judgments. In contrast, the economic notion of a market price is that for every loaf of bread bought, there is a loaf of bread sold; and therefore actual demand and actual supply are always equal. The market price is the input that makes the decreasing curve for demand as a function of price meet the increasing curve for supply as a function of price. This price is an “is” statement rather than an “ought” statement, an observation and not a wish. In particular, an efficient market, from an economist’s perspective, is just one whose average price movement can’t be predicted by you.
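
That last definition is just “the price where the two curves cross.” A minimal formalization, with linear curves chosen purely for illustration (my notation, not the book’s):

```latex
% Demand decreases in price, supply increases in price (linear forms
% chosen only for illustration):
%   D(p) = a - b p,   S(p) = c + d p,   with a, b, c, d > 0.
% The market price p* is wherever the two curves meet:
\[
  D(p^*) = S(p^*)
  \quad\Longrightarrow\quad
  a - b\,p^* = c + d\,p^*
  \quad\Longrightarrow\quad
  p^* = \frac{a - c}{b + d}.
\]
% At p*, every loaf bought is a loaf sold: quantity demanded equals
% quantity supplied by construction -- an "is", not an "ought".
```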

Page: 22 I once read a rather clueless magazine article that made fun of a political prediction market on the basis that when a new poll came out, the price of the prediction market moved. “It just tracks the polls!” the author proclaimed. But the point of the prediction market is not that it knows some fixed, objective chance with high accuracy. The point of a prediction market is that it summarizes all the information available to the market participants. If the poll moved prices, then the poll was new information that the market thought was important, and the market updated its belief, and this is just the way things should be.

Page: 22 This means that the efficiency of a market is assessed relative to your own intelligence, which is fine. Indeed, it’s possible that the concept should be called “relative efficiency.” Yes, a superintelligence might be able to predict price trends that no modern human hedge fund manager could; but economists don’t think that today’s markets are efficient relative to a superintelligence. Today’s markets may not be efficient relative to the smartest hedge fund managers, or efficient relative to corporate insiders with secret knowledge that hasn’t yet leaked. But the stock markets are efficient relative to you, and to me, and to your Uncle Albert who thinks he tripled his money through his incredible acumen in buying NetBet.com.

Page: 23 If that’s all true, it’s not a coincidence that neither I nor any of the other onlookers could make money on our advance prediction. The startup equity market was inefficient (a price underwent a predictable decline), but it wasn’t exploitable. There was no way to make a profit just by predicting that Sequoia had overpaid for the stock it bought, because, at least as of 2017, the market lacks a certain type and direction of liquidity: you can’t short-sell startup equity.

Page: 24 Let’s imagine there are 100,000 houses in Boomville, of which 10,000 have been for sale in the last year or so. Suppose there are 20,000 fools who think that housing prices in Boomville can only go up, and 10,000 rational hedge fund managers who think that the shale-oil business may collapse and lead to a predictable decline in Boomville house prices. There’s no way for the hedge fund managers to short Boomville house prices—not in a way that satisfies the optimistic demand of 20,000 fools for Boomville houses, not in a way that causes house prices to actually decline. The 20,000 fools just bid on the 10,000 available houses until the skyrocketing price of the houses makes 10,000 of the fools give up. Some smarter agents might decline to buy, and so somewhat reduce demand. But the smarter agents can’t actually visit Boomville and make hundreds of thousands of dollars off of the overpriced houses. The price is too high and will predictably decline, relative to public information, but there’s no way you can make a profit on knowing that. An individual who owns an existing house can exploit the inefficiency by selling that house, but rational market actors can’t crowd around the inefficiency and exploit it until it’s all gone. Whereas a predictably underpriced house, put on the market for predictably much less than its future price, would be an asset that any of a hundred thousand rational investors could come in and snap up. So a frothy housing market may see many overpriced houses, but few underpriced ones. Thus it will be easy to lose money in this market by buying stupidly, and much harder to make money by buying cleverly. The market prices will be inefficient—in a certain sense stupid—but they will not be exploitable.

Page: 26 Oh, if only PredictIt didn’t charge that 10% fee on profits, that 5% fee on withdrawals! If only they didn’t have the $850 limit! If only the US didn’t have such high income taxes, and didn’t limit participation in overseas prediction markets! I could have bought Clinton shares at 60 cents on PredictIt and Trump shares at 20 cents on Betfair, winning a dollar either way and getting a near-guaranteed 25% return until the prices were in line! Curse those silly rules, preventing me from picking up that free money! Does that complaint sound reasonable to you? If so, then you haven’t yet fully internalized the notion of an inefficient-but-inexploitable market. If the taxes, fees, and betting limits hadn’t been there, the PredictIt and Betfair prices would have been the same.
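
Sanity-checking the numbers in that complaint. A sketch: the prices, fee rates, and the $850 cap are the ones named in the highlight; Betfair’s own commission and the tax drag are ignored for simplicity.

```python
# Cross-market "free money": buy Clinton at $0.60 on PredictIt and
# Trump at $0.20 on Betfair. Exactly one contract pays $1, so each
# 80-cent pair returns $1.00 gross -- 25% either way.
cost_per_pair = 0.60 + 0.20                 # $0.80 per hedged pair
print(f"gross return: {1.00 / cost_per_pair - 1:.0%}")   # 25%

# Now apply the frictions the text names, for the case where the
# PredictIt side wins:
profit_fee     = 0.10 * (1.00 - 0.60)       # 10% fee on the $0.40 profit
balance        = 1.00 - profit_fee          # $0.96 left on PredictIt
withdrawal_fee = 0.05 * balance             # 5% fee to get it out
proceeds       = balance - withdrawal_fee   # $0.912 actually in hand
print(f"net return:   {proceeds / cost_per_pair - 1:.1%}")  # ~14%, pre-tax

# And the $850 position limit allows only ~1,400 Clinton contracts
# (850 / 0.60), so the whole trade nets on the order of $150 before
# taxes -- not the unlimited arbitrage the complaint imagines.
```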

Page: 27 To answer a question like this, we need an analysis not of the world’s efficiency or inexploitability but rather of its adequacy—whether all the low-hanging fruit have been plucked. A duly modest skepticism, translated into the terms we’ve been using so far, might say something like this: “Around 7% of the population has severe Seasonal Affective Disorder, and another 20% or so has weak Seasonal Affective Disorder. Around 50% of tested cases respond to standard lightboxes. So if the intervention of stringing up a hundred LED bulbs actually worked, it could provide a major improvement to the lives of 3% of the US population, costing on the order of $1000 each (without economies of scale). Many of those 9 million US citizens would be rich enough to afford that as a treatment for major winter depression. If you could prove that your system worked, you could create a company to sell SAD-grade lighting systems and have a large market. So by postulating that you can cure SAD this way, you’re postulating a world in which there’s a huge quantity of metaphorical free energy—a big energy gradient that society hasn’t traversed. Therefore, I’m skeptical of this medical theory for more or less the same reason that I’m skeptical you can make money on the stock market: it postulates a $20 bill lying around that nobody has already picked up.”
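
The skeptic’s market-size arithmetic, spelled out. The percentages are the highlight’s own; the ~300M US population figure and the rounding of 3.5% down to the text’s 3% are my gloss:

```latex
\[
  \underbrace{7\%}_{\text{severe SAD}}
  \times
  \underbrace{(1 - 50\%)}_{\text{lightbox non-responders}}
  \approx 3\%\ \text{of the population};
  \qquad
  3\% \times 300\,\text{M} \approx 9\ \text{million people}.
\]
% At roughly \$1{,}000 per lighting system, that is a market on the
% order of \$9 billion -- the metaphorical "free energy" the skeptic
% says someone should already have harvested.
```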

Page: 28 Let’s say that within some slice through society, the obvious low-hanging fruit that save more than ten thousand lives for less than a hundred thousand dollars total have, in fact, been picked up. Then I propose the following terminology: let us say that that part of society is adequate at saving 10,000 lives for $100,000. And if there’s a convincing case that this property does not hold, we’ll say this subsector is inadequate (at saving 10,000 lives for $100,000).

Page: 32 Usually when we find trillion-dollar bills lying on the ground in real life, it’s a symptom of (1) a central-command bottleneck that nobody else is allowed to fix, as with the European Central Bank wrecking Europe, or (2) a system with enough moving parts that at least two parts are simultaneously broken, meaning that single actors cannot defy the system. To modify an old aphorism: usually, when things suck, it’s because they suck in a way that’s a Nash equilibrium.

Page: 32 So inadequacy is even more important than exploitability on a day-to-day basis, because it’s inadequacy-generating situations that lead to low-hanging fruits large enough to be worthwhile at the individual level.

Page: 32 A critical analogy between an inadequate system and an efficient market is this: even systems that are horribly inadequate from our own perspective are still in a competitive equilibrium. There’s still an equilibrium of incentives, an equilibrium of supply and demand, an equilibrium where (in the central example above) all the researchers are vigorously competing for prestigious publications and using up all available grant money in the course of doing so. There’s no free energy anywhere in the system.

Page: 35 The second and third sentences say, “If something like inadequacy analysis were already a well-known idea in economics, then I would expect my smart economist friend Robin Hanson to cite it. Even if Robin started out not knowing, I expect his other economist friends would tell him, or that one of the many economists reading his blog would comment on it. I expect the population of economists reading Robin’s blog and papers to be adequate to the task of telling Robin about an existing field here, if one already existed.” Adequacy arguments are ubiquitous, and they’re much more common in everyday reasoning than arguments about efficiency or exploitability.

Page: 38 People presumably care about curing SAD—if they could effortlessly push a button to instantly cure SAD, they would do so—but there’s a big difference between “caring” and “caring enough to prioritize this over nearly everything else I care about,” and it’s the latter that would be needed for researchers to be willing to personally trade away non-small amounts of expected money or esteem for new treatment ideas.

Page: 39 To state all of this more precisely: Suppose there is some space of strategies that you’re competent enough to think up and execute on. Inexploitability has a single unit attached, like “$” or “effective SAD treatments,” and says that you can’t find a strategy in this space that knowably gets you much more of the resource in question than other agents. The kind of inexploitability I’m interested in typically arises when a large ecosystem of competing agents is genuinely trying to get the resource in question, and has access to strategies at least as good (for acquiring that resource) as the best options in your strategy space. Inadequacy with respect to a strategy space has two units attached, like “effective SAD treatments / research hours” or “QALYs / $,” and says that there is some set of strategies a large ecosystem of agents could pursue that would convert the denominator unit into the numerator unit at some desired rate, but the agents are pursuing strategies that in fact result in a lower conversion rate. The kind of inadequacy I’m most interested in arises when many of the agents in the ecosystem would prefer that the conversion occur at the rate in question, but there’s some systemic blockage preventing this from happening.

Page: 40 Systems tend to be inexploitable with respect to the resources that large ecosystems of competent agents are trying their hardest to pursue, like fame and money, regardless of how adequate or inadequate they are. And if there are other resources the agents aren’t adequate at converting fame, money, etc. into at a widely desired rate, it will often be due to some systemic blockage. Insofar as agents have overlapping goals, it will therefore often be harder than it looks to find real instances of exploitability, and harder than it looks to outperform an inadequate equilibrium. But more local goals tend to overlap less: there isn’t a large community of specialists specifically trying to improve my wife’s well-being.

Page: 42 if you relax your self-skepticism even slightly, it’s trivial to come up with an a priori inadequacy argument for just about anything. Talk about “efficient markets” in any less than stellar forum, and you’ll soon get half a dozen comments from people deriding the stupidity of hedge fund managers. And, yes, the financial system is broken in a lot of ways, but you still can’t double your money trading S&P 500 stocks. “Find one thing to deride, conclude inadequacy” is not a good rule. At the same time, lots of real-world social systems do have inadequate equilibria and it is important to be able to understand that, especially when we have clear observational evidence that this is the case. A blanket distrust of inadequacy arguments won’t get us very far either.

Page: 43 There’s a toolbox of reusable concepts for analyzing systems I would call “inadequate”—the causes of civilizational failure, some of which correspond to local opportunities to do better yourself. I shall, somewhat arbitrarily, sort these concepts into three larger categories: Decisionmakers who are not beneficiaries; Asymmetric information; and above all, Nash equilibria that aren’t even the best Nash equilibrium, let alone Pareto-optimal.

Page: 47 When Jaminet and Jaminet wrote the above, in 2012, there was a single hospital in the United States that could provide correctly formulated parenteral nutrition, namely the Boston Children’s Hospital; nowhere else. This formulation was illegal to sell across state lines. A few years after the Boston Children’s Hospital developed their formula—keeping in mind the heap of dead babies continuing to pile up in the meanwhile—there developed a shortage of “certified lipids” (FDA-approved “fat” for adding to parenteral nutrition). For a year or two, the parenteral nutrition contained no fat at all, which is worse and can kill adults. You see, although there’s nothing special about the soybean oil in parenteral nutrition, there was only one US manufacturer approved to add it, and that manufacturer left the market, so… As of 2015, the state of affairs was as follows: The FDA eventually solved the problem with the shortage of US-certified lipids, by… allowing US hospitals to import parenteral nutrition bags from Europe. And it only took them two years’ worth of dead patients to figure that out!

Page: 56 the frustrating parts of civilization are the times when you’re stuck in a Nash equilibrium that’s Pareto-inferior to other Nash equilibria. I mean, it’s not surprising that humans have trouble getting to non-Nash optima like “both sides cooperate in the Prisoner’s Dilemma without any other means of enforcement or verification.” What makes an equilibrium inadequate, a fruit that seems to hang tantalizingly low and yet somehow our civilization isn’t plucking, is when there’s a better stable state and we haven’t reached it. VISITOR: Indeed. Moving from bad equilibria to better equilibria is the whole point of having a civilization in the first place.
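
A concrete toy version of “a worse Nash equilibrium coexisting with a better one”: the Stag Hunt (my example, not the book’s). A minimal sketch that enumerates the pure-strategy equilibria of the 2x2 game:

```python
# Stag Hunt payoffs as (row player, column player). Hunting stag
# together pays 4 each; hunting hare alone safely pays 2; hunting
# stag while the other defects to hare pays 0.
payoffs = {
    ("stag", "stag"): (4, 4),
    ("stag", "hare"): (0, 2),
    ("hare", "stag"): (2, 0),
    ("hare", "hare"): (2, 2),
}
moves = ["stag", "hare"]

def is_nash(r, c):
    """Neither player gains by unilaterally deviating."""
    row_ok = all(payoffs[(r2, c)][0] <= payoffs[(r, c)][0] for r2 in moves)
    col_ok = all(payoffs[(r, c2)][1] <= payoffs[(r, c)][1] for c2 in moves)
    return row_ok and col_ok

for r in moves:
    for c in moves:
        if is_nash(r, c):
            print((r, c), payoffs[(r, c)])
# -> ('stag', 'stag') (4, 4)   the good equilibrium
# -> ('hare', 'hare') (2, 2)   also stable: no *unilateral* deviation
#    helps, so a society can sit at (2, 2) indefinitely unless it
#    coordinates -- which is exactly the Visitor's point.
```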

Page: 57 Even in my world, Simplicio, coordination isn’t as simple as everyone jumping simultaneously every time one person shouts “Jump!” For coordinated action to be successful, you need to trust the institution that says what the action should be, and a majority of people have to trust that institution, and they have to know that other people trust the institution, so that everyone expects the coordinated action to occur at the critical time, so that it makes sense for them to act too. That’s why we have policy prediction markets and… there doesn’t seem to be a word in your language for the timed-collective-action-threshold-conditional-commitment… hold on, this cultural translator isn’t making any sense. “Kickstarter”? You have the key concept, but you use it mainly for making video games?

> See Hanson (I think) on funding large projects in a similar way – everyone commits to putting in the money if enough other people do the same. Lower number of free riders.
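
For what it’s worth, a minimal sketch of that timed-collective-action-threshold-conditional-commitment (“Kickstarter,” a.k.a. an assurance contract) mechanism; the class and names below are illustrative, not from the book or from Hanson:

```python
# Assurance contract: pledges execute only if enough total support
# materializes by the deadline, so nobody risks acting alone.
class AssuranceContract:
    def __init__(self, threshold):
        self.threshold = threshold
        self.pledges = {}                    # backer -> amount

    def pledge(self, backer, amount):
        self.pledges[backer] = self.pledges.get(backer, 0) + amount

    def settle(self):
        """At the deadline: charge everyone iff the threshold is met."""
        total = sum(self.pledges.values())
        if total >= self.threshold:
            return ("funded", dict(self.pledges))   # all pledges collected
        return ("refunded", {})                     # nobody pays anything

contract = AssuranceContract(threshold=1000)
contract.pledge("alice", 400)
contract.pledge("bob", 350)
contract.pledge("carol", 300)
print(contract.settle())   # ('funded', {...}) since 1050 >= 1000
```

Because each pledge is conditional on everyone else’s, committing is individually safe, which is how the mechanism cuts down on free riders and supplies the “everyone knows everyone else will act” condition the quote describes.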

Page: 63 To sum up, academic science is embedded in a big enough system with enough separate decisionmakers creating incentives for other decisionmakers that it almost always takes the path of least resistance. The system isn’t in the best Nash equilibrium because nobody has the power to look over the system and choose good Nash equilibria. It’s just in a Nash equilibrium that it wandered into, which includes statistical methods that were invented in the first half of the 20th century and editors not demanding that people cite replications.

Page: 66 VISITOR: Do they not have markets on your planet? Because on my planet, when you manufacture your product in a crazy, elaborate, expensive way that produces an inferior product, someone else will come along and rationalize the process and take away your customers. CECIE: We have markets, but there’s this unfortunate thing called “regulatory capture,” of which one kind is “occupational licensing.” As an example, it used to be that chairs were carefully hand-crafted one at a time by carpenters who had to undergo a lengthy apprenticeship, and indeed, they didn’t like it when factories came along staffed by people who specialized in just carving a single kind of arm. But the factory-made chairs were vastly cheaper and most of the people who insisted on sticking to handcrafts soon went out of business. Now imagine: What if the chair-makers had been extremely respectable—had already possessed very high status? What if their profession had an element of danger? What if they’d managed to frighten everyone about the dangers of improperly made chairs that might dump people on the ground and snap their necks?

Page: 67 VISITOR: But why would the legislators go along with that? CECIE: Because the carpenters would have a big, concentrated incentive to figure out how to make legislators do it—maybe by hiring very persuasive people, or by subtle bribery, or by not-so-subtle bribery. Insofar as occupational licensing works to the benefit of professionals at the expense of consumers, occupational licensing represents a kind of regulatory capture, which happens when a few regulatees have a much more concentrated incentive to affect the regulation process. Regulatory capture in turn is a kind of commons problem, since every citizen shares the benefits of non-captured regulation, but no individual citizen has a sufficient incentive to unilaterally spend their life attending to that particular regulatory problem. So occupational licensing is regulatory capture is a commons problem is a coordination problem.

Page: 71 governments, I thought. Or did I misunderstand that? Why wouldn’t patients emigrate to—or just visit—countries that made better hospitals legal? CECIE: The forces acting on governments with high technology levels are mostly the same between countries, so all the governments of those countries tend to have their medical system screwed up in mostly the same way (not least because they’re imitating each other). Some aspects of dysfunctional insurance and payment policies are special to the US, but even the relatively functional National Health Service in Britain still has failure of professional specialization. (Though they at least don’t require doctors to have philosophy degrees.)

Page: 73 It’s sufficient to note that the system is in equilibrium and it has causes for the equilibrium settling there—causes, if not justifications. You can’t go against the system’s default without going against the forces that underpin that default. A doctor who gives a baby a nutrition formula that isn’t FDA-approved will lose their job. A hospital that doesn’t fire that kind of doctor will be sued. A scientist that writes proposals for a big, expensive, definitive study won’t get a grant, and while they were busy writing those failed grant proposals, they’ll have lost their momentum toward tenure. So no, you can’t just try out a competing policy of not killing babies. Not more than once.

Page: 74 Anyway, from my perspective, it’s no surprise if you don’t yet feel like you understand. We’ve only begun to survey the malfunctions of the whole system, which would further include the FDA, and the clinical trials, and the p-hacking. And the way venture capital is structured, and equity-market regulations. And the insurance companies, and the tax code. And the corporations who contract with the insurance companies. And the corporations’ employees. And the politicians. And the voters.

> Depressing!

Page: 77 The key phenomenon underlying the social molasses is that there’s a self-reinforcing equilibrium of beliefs. Maybe a lot of the Series A investors think the idea of entrepreneurs needing to have red hair is objectively silly. But they expect Series B investors to believe it. So the Series A investors don’t invest in blonde-haired entrepreneurs. So the seed investors are right to believe that “Series A investors won’t invest in companies with blonde-haired founders,” even if a lot of the reason why Series A investors aren’t investing is not that they believe the stereotype but that they believe that Series B investors believe the stereotype. And from the outside, of course, all that investors can see is that most investors aren’t investing in blonde-haired entrepreneurs—which just goes to reinforce everyone’s belief that everyone else believes that red-haired entrepreneurs do better.

Page: 85 I suspect some of the dynamics in entrepreneur-land are there because many venture capitalists run into entrepreneurs that are smarter than them, but who still have bad startups. A venture capitalist who believes clever-sounding arguments will soon be talked into wasting a lot of money. So venture capitalists learn to distrust clever-sounding arguments because they can’t distinguish lies from truth, when they’re up against entrepreneurs who are smarter than them. Similarly, the average politician is smarter than the average voter, so by now most voters are just accustomed to a haze of plausible-sounding arguments. It’s not that you can’t possibly explain a Nash equilibrium. It’s that there are too many people advocating changes in the system for their own reasons, who could also draw diagrams that sounded equally convincing to someone who didn’t already understand Nash equilibria. Any talk of systemic change on this level would just be lost in a haze of equally plausible-sounding-to-the-average-voter blogs, talking about how quantitative easing will cause hyperinflation.

Page: 90 What broke the silence about artificial general intelligence (AGI) in 2014 wasn’t Stephen Hawking writing a careful, well-considered essay about how this was a real issue. The silence only broke when Elon Musk tweeted about Nick Bostrom’s Superintelligence, and then made an off-the-cuff remark about how AGI was “summoning the demon.” Why did that heave a rock through the Overton window, when Stephen Hawking couldn’t? Because Stephen Hawking sounded like he was trying hard to appear sober and serious, which signals that this is a subject you have to be careful not to gaffe about. And then Elon Musk was like, “Whoa, look at that apocalypse over there!!” After which there was the equivalent of journalists trying to pile on, shouting, “A gaffe! A gaffe! A… gaffe?” and finding out that, in light of recent news stories about AI and in light of Elon Musk’s good reputation, people weren’t backing them up on that gaffe thing. Similarly, to heave a rock through the Overton window on the War on Drugs, what you need is not state propositions (although those do help) or articles in The Economist. What you need is for some “serious” politician to say, “This is dumb,” and for the journalists to pile on shouting, “A gaffe! A gaffe… a gaffe?” But it’s a grave personal risk for a politician to test whether the public atmosphere has changed enough, and even if it worked, they’d capture very little of the human benefit for themselves.

Page: 95 And then some of us have much, much more horrible problems to worry about. Problems that take more than reading Wikipedia entries to understand, so that the pool of potential solvers is even smaller. But even just considering this particular heap of dead babies, we know from observation that this part must be true: If you imagine everyone on Earth who fits the qualifications for the dead-baby problem—enough scientific literacy to understand relevant facts about metabolic pathways, and the caring, and the maximization, and enough scrappiness to be the first one who gets started on it, meeting in a conference room to divide up Earth’s most important problems, with the first subgroup taking on the most neglected problems demanding the most specialized background knowledge, and the second taking on the second-most-incomprehensible set of problems, until the crowdedness of the previously most urgent problem decreases the marginal impact of further contributions to the point where the next-worst problem at that level of background knowledge and insight becomes attractive… and so on down the ladders of urgency inside the levels of discernment… then there must be such a long and terrible list of tasks left undone, and so few people to understand and care, that saving a few hundred babies per year from dying or suffering permanent brain damage didn’t make the list. So it has been observed, and so it must be.

Page: 98 For a fixed amount of inadequacy, there is only so much dysfunction that needs to be invoked to explain it. By the nature of inadequacy there will usually be more than one thing going wrong at a time… but even so, there’s only a bounded amount of failure to be explained. Every possible dysfunction is competing against every other possible dysfunction to explain the observed data. Sloppy cynicism will usually be wrong, just like your Facebook acquaintances who attribute civilizational dysfunctions to giant malevolent conspiracies. If you’re sloppy, then you’re almost always going to find some way to conclude, “Oh, those physicists are just part of the broken academic system, what would they really know about the Higgs boson?” You will detect inadequacy every time you go looking for it, whether or not it’s there. If you see the same vision wherever you look, that’s the same as being blind.

Page: 99 There are people who would simply never try to put up 130 light bulbs in their house—because if that worked, surely some good and diligent professional researcher would have already tried it. The medical system would have made it a standard treatment, right? The doctor would already know about it, right? And sure, sometimes people are stupid, but we’re also people and we’re also stupid, so how could we amateurs possibly do better than current researchers on SAD, et cetera. Often the most commonly applicable benefit from a fancy rational technique will be to cancel out fancy irrationality.

Page: 102 In the modest world, either you think you’re better than doctors and all the civilization backing them, or you admit you’re not as good and that you ought to defer to them. If you don’t defer to doctors, then you’ll end up as one of those people who try feeding their children organic herbs to combat cancer; the outside view says that that’s what happens to most non-doctors who dare to think they’re better than doctors. On the modest view, it’s not that we hold up a thumb and eyeball the local competence level, based mostly on observation and a little on economic thinking; and then update on our observed relative performance; and sometimes say, “This varies a lot. I’ll have to check each time.” Instead, every time you decide whether you think you can do better, you are declaring what sort of person you are.

Page: 104 The goal is simply to be the sort of person who, in worlds with closet goblins, ends up believing in closet goblins, and in worlds without closet goblins, ends up disbelieving in closet goblins. Avoiding beliefs that sound archaic does relatively little to help you learn that there are goblins in a world where goblins exist, so it does relatively little to establish that there aren’t goblins in a world where they don’t exist. Examining particular empirical predictions of the goblin hypothesis, on the other hand, does provide strong evidence about what world you’re in.

Page: 105 In my experience, people who don’t viscerally understand Moloch’s toolbox and the ubiquitously broken Nash equilibria of real life and how group insanity can arise from intelligent individuals responding to their own incentives tend to unconsciously translate all assertions about relative system competence into assertions about relative status. If you don’t see systemic competence as rare, or don’t see real-world systemic competence as driven by rare instances of correctly aligned incentives, all that’s left is status. All good and bad output is just driven by good and bad individual people, and to suggest that you’ll have better output is to assert that you’re individually smarter than everyone else. (This is what status hierarchy feels like from the inside: to perform better is to be better.)

Page: 106 I think that the people I was talking with had already internalized the mathematical concept of Nash equilibria, but I don’t think they were steeped in a no-free-energy microeconomic equilibrium view of all of society where most of the time systems end up dumber than the people in them due to multiple layers of terrible incentives, and that this is normal and not at all a surprising state of affairs to suggest.

Page: 108 This brings me to the single most obvious notion that correct contrarians grasp, and that people who have vastly overestimated their own competence don’t realize: It takes far less work to identify the correct expert in a pre-existing dispute between experts, than to make an original contribution to any field that is remotely healthy.

Page: 109 Distinguishing a correct contrarian isn’t easy in absolute terms. You are still trying to be better than the mainstream in deciding who to trust. For many people, yes, an attempt to identify contrarian experts ends with them trusting faith healers over traditional medicine. But it’s still in the range of things that amateurs can do with a reasonable effort, if they’ve picked up on unusually good epistemology from one source or another.

Page: 111 I’m merely emphasizing that to find a rare startup idea that is exploitable in dollars, you will have to scan and keep scanning, not pursue the first “X is broken and maybe I can fix it!” thought that pops into your head. To win, choose winnable battles; await the rare anomalous case of, “Oh wait, that could work.”

Page: 113 So a realistic lifetime of trying to adapt yourself to a broken civilization looks like:

- 0-2 lifetime instances of answering “Yes” to “Can I substantially improve on my civilization’s current knowledge if I put years into the attempt?” A few people, but not many, will answer “Yes” to enough instances of this question to count on the fingers of both hands. Moving on to your toes indicates that you are a crackpot.
- Once per year or thereabouts, an answer of “Yes” to “Can I generate a synthesis of existing correct contrarianism which will beat my current civilization’s next-best alternative, for just myself (i.e., without trying to solve the further problems of widespread adoption), after a few weeks’ research and a bunch of testing and occasionally asking for help?” (See my experiments with ketogenic diets and SAD treatment; also what you would do to generate or judge a startup idea that wasn’t based on a hard science problem.)
- Many cases of trying to pick a previously existing side in a running dispute between experts, if you think that you can follow the object-level arguments reasonably well and there are strong meta-level cues that you can identify.

The accumulation of many judgments of the latter kind is where you get the fuel for many small day-to-day decisions (e.g., about what to eat), and much of your ability to do larger things (like solving a medical problem after going through the medical system has proved fruitless, or executing well on a startup).

Page: 114 When it comes to estimating the competence of some aspect of civilization, especially relative to your own competence, try to update hard on your experiences of failure and success. One data point is a hell of a lot better than zero data points. Worrying about how one data point is “just an anecdote” can make sense if you’ve already collected thirty data points. On the other hand, when you previously just had a lot of prior reasoning, or you were previously trying to generalize from other people’s not-quite-similar experiences, and then you collide directly with reality for the first time, one data point is huge.

Page: 116 Run experiments; place bets; say oops. Anything less is an act of self-sabotage.

Page: 117 The thesis that needs to be contrasted with modesty is not the assertion that everyone can beat their civilization all the time. It’s not that we should be the sort of person who sees the world as mad and pursues the strategy of believing a hot stock tip and investing everything. It’s just that it’s okay to reason about the particulars of where civilization might be inadequate, okay to end up believing that you can state a better monetary policy than the Bank of Japan is implementing, okay to check that against observation whenever you get the chance, and okay to update on the results in either direction. It’s okay to act on a model of what you think the rest of the world is good at, and for this model to be sensitive to the specifics of different cases.

Page: 128 You shouldn’t avoid outside-view-style reasoning in cases where it looks likely to work, like when planning your Christmas shopping. But in many contexts, the outside view simply can’t compete with a good theory.

Page: 128 Where items in a reference class differ causally in more ways than two Christmas shopping trips you’ve planned or two university essays you’ve written, or where there’s temptation to cherry-pick the reference class of things you consider “similar” to the phenomenon in question, or where the particular biases underlying the planning fallacy just aren’t a factor, you’re often better off doing the hard cognitive labor of building, testing, and acting on models of how phenomena actually work, even if those models are very rough and very uncertain, or admit of many exceptions and nuances. And, of course, during and after the construction of the model, you have to look at the data. You still need fox-style attention to detail—and you certainly need empiricism.

Page: 132 I would want the brain to reason about brains in pretty much the same way it reasons about other things in the world. And in practice, I suspect that the way I think, and the way I’d advise people in the real world to think, works very much like that:

- Try to spend most of your time thinking about the object level. If you’re spending more of your time thinking about your own reasoning ability and competence than you spend thinking about Japan’s interest rates and NGDP, or competing omega-6 vs. omega-3 metabolic pathways, you’re taking your eye off the ball.
- Less than a majority of the time: Think about how reliable authorities seem to be and should be expected to be, and how reliable you are—using your own brain to think about the reliability and failure modes of brains, since that’s what you’ve got. Try to be evenhanded in how you evaluate your own brain’s specific failures versus the specific failures of other brains. While doing this, take your own meta-reasoning at face value.
- … and then next, theoretically, should come the meta-meta level, considered yet more rarely. But I don’t think it’s necessary to develop special skills for meta-meta reasoning. You just apply the skills you already learned on the meta level to correct your own brain, and go on applying them while you happen to be meta-reasoning about who should be trusted, about degrees of reliability, and so on. Anything you’ve already learned about reasoning should automatically be applied to how you reason about meta-reasoning.
- Consider whether someone else might be a better meta-reasoner than you, and hence that it might not be wise to take your own meta-reasoning at face value when disagreeing with them, if you have been given strong local evidence to this effect.

That probably sounded terribly abstract, but in practice it means that everything plays out in what I’d consider to be the obvious intuitive fashion.

Page: 134 The first lesson is to not carefully craft anything that it was possible to literally just improvise and test immediately in its improvised version, ever. Even if the minimum improvisable product won’t be representative of the real version. Even if you already expect the current version to fail. You don’t know what you’ll learn from trying the improvised version.

> See The Lean Startup

Page: 134 The second lesson was that my model of teaching rationality by producing units for consumption at meetups wasn’t going to work, and we’d need to go with Anna’s approach of training teachers who could fail on more rapid cycles, and running centralized workshops using those teachers.

Page: 136 There are people who think we all ought to behave this way toward each other as a matter of course. They reason: on average, we can’t all be more meta-rational than average; and you can’t trust the reasoning you use to think you’re more meta-rational than average. After all, due to Dunning-Kruger, a young-Earth creationist will also think they have plausible reasoning for why they’re more meta-rational than average. … Whereas it seems to me that if I lived in a world where the average person on the street corner were Anna Salamon or Nick Bostrom, the world would look extremely different from how it actually does.

Page: 139 Modest epistemology seems to me to be taking the experiments on the outside view showing that typical holiday shoppers are better off focusing on their past track record than trying to model the future in detail, and combining that with the Dunning-Kruger effect, to argue that we ought to throw away most of the details in our self-observation. At its epistemological core, modesty says that we should abstract up to a particular very general self-observation, condition on it, and then not condition on anything else because that would be inside-viewing. An observation like, “I’m familiar with the cognitive science literature discussing which debiasing techniques work well in practice, I’ve spent time on calibration and visualization exercises to address biases like base rate neglect, and my experience suggests that they’ve helped,” is to be generalized up to, “I use an epistemology which I think is good.” I am then to ask myself what average performance I would expect from an agent, conditioning only on the fact that the agent is using an epistemology that they think is good, and not conditioning on that agent using Bayesian epistemology or debiasing techniques or experimental protocol or mathematical reasoning or anything in particular. Only in this way can we force Republicans to agree with us… or something. (Even though, of course, anyone who wants to shoot off their own foot will actually just reject the whole modest framework, so we’re not actually helping anyone who wants to go astray.) Whereupon I want to shrug my hands helplessly and say, “But given that this isn’t normative probability theory and I haven’t seen modesty advocates appear to get any particular outperformance out of their modesty, why go there?” I think that’s my true rejection, in the following sense: If I saw a sensible formal epistemology underlying modesty and I saw people who advocated modesty going on to outperform myself and others, accomplishing great deeds through the strength of their diffidence, then, indeed, I would start paying very serious attention to modesty.

Page: 140 That said, let me go on beyond my true rejection and try to construct something of a reductio. Two reductios, actually. The first reductio is just, as I asked the person who proposed the signal-receiver epistemology: “Okay, so why don’t you believe in God like a majority of people’s signal receivers tell them to do?” “No,” he replied. “Just no.” “What?” I said. “You’re allowed to say ‘just no’? Why can’t I say ‘just no’ about collapse interpretations of quantum mechanics, then?”

Page: 142 To generalize, suppose we take the following rule seriously as epistemology, terming it Rule M for Modesty: Rule M: Let X be a very high-level generalization of a belief subsuming specific beliefs X1, X2, X3.… For example, X could be “I have an above-average epistemology,” X1 could be “I have faith in the Bible, and that’s the best epistemology,” X2 could be “I have faith in the words of Mohammed, and that’s the best epistemology,” and X3 could be “I believe in Bayes’s Rule, because of the Dutch Book argument.” Suppose that all people who believe in any Xi, taken as an entire class X, have an average level F of fallibility. Suppose also that most people who believe some Xi also believe that their Xi is not similar to the rest of X, and that they are not like most other people who believe some X, and that they are less fallible than the average in X. Then when you are assessing your own expected level of fallibility you should condition only on being in X, and compute your expected fallibility as F. You should not attempt to condition on being in X3 or ask yourself about the average fallibility you expect from people in X3. Then the first machine superintelligence should conclude that it is in fact a patient in a psychiatric hospital. And you should believe, with a probability of around 33%, that you are currently asleep. Many people, while dreaming, are not aware that they are dreaming. Many people, while dreaming, may believe at some point that they have woken up, while still being asleep. Clearly there can be no license from “I think I’m awake” to the conclusion that you actually are awake, since a dreaming person could just dream the same thing. Let Y be the state of not thinking that you are dreaming. Then Y1 is the state of a dreaming person who thinks this, and Y2 is the state of actually being awake. It boots nothing, on Rule M, to say that Y2 is introspectively distinguishable from Y1 or that the inner experiences of people in Y2 are actually quite different from those of people in Y1. Since people in Y1 usually falsely believe that they’re in Y2, you ought to just condition on being in Y, not condition on being in Y2. Therefore you should assign a 67% probability to currently being awake, since 67% of observer-moments who believe they’re awake are actually awake. Which is why—in the distant past, when I was arguing against the modesty position for the first time—I said: “Those who dream do not know they dream, but when you are awake, you know you are awake.” The modest haven’t formalized their epistemology very much, so it would take me some years past this point to write down the Rule M that I thought was at the heart of the modesty argument, and say that “But you know you’re awake” was meant to be a reductio of Rule M in particular, and why. Reasoning under uncertainty and in a biased and error-prone way, still we can say that the probability we’re awake isn’t just a function of how many awake versus sleeping people there are in the…

> Check this in the book again, got cut off
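
Two glosses on the highlight above. The superintelligence reductio relies on the class “minds that believe they are the first machine superintelligence” being dominated, in the book’s telling, by deluded psychiatric patients, so Rule M forces that conclusion on the real superintelligence too. And the 33%/67% figures fall out of a simple Rule-M computation if (my assumption, chosen to match the book’s numbers) a third of observer-moments are dreaming and dreamers typically believe they’re awake:

```latex
% Y  = "believes they are awake"; Y1 = dreaming, Y2 = actually awake.
% Assume 1/3 of observer-moments are in Y1, 2/3 in Y2, and that
% (nearly) all of both believe they are awake. Rule M says: condition
% only on Y, never on the finer-grained Y2.
\[
  P(\text{awake} \mid Y)
  = \frac{P(Y_2)}{P(Y_1) + P(Y_2)}
  = \frac{2/3}{1/3 + 2/3}
  \approx 67\%,
\]
% leaving a 33% chance that you are dreaming right now -- regardless
% of any introspective evidence that actually distinguishes Y2 from Y1.
```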

Page: 145 I’ve now given my critique of modesty as a set of explicit doctrines. I’ve tried to give the background theory, which I believe is nothing more than conventional cynical economics, that explains why so many aspects of the world are not optimized to the limits of human intelligence in the manner of financial prices. I have argued that the essence of rationality is to adapt to whatever world you find yourself in, rather than to be “humble” or “arrogant” a priori. I’ve tried to give some preliminary examples of how we really, really don’t live in the Adequate World where constant self-questioning would be appropriate, the way it is appropriate when second-guessing equity prices. I’ve tried to systematize modest epistemology into a semiformal rule, and I’ve argued that the rule yields absurd consequences.

Page: 145 But I’ve now said quite a few words about modest epistemology as a pure idea. I feel comfortable at this stage saying that I think modest epistemology’s popularity owes something to its emotional appeal, as opposed to being strictly derived from epistemic considerations. In particular: emotions related to social status and self-doubt.

Page: 147 ELIEZER: I’m not sure if they’re aimed at your current skill level. Why don’t you try just one interview and see how that goes before you make any complicated further plans about how to prove your skills? This fits into a very common pattern of advice I’ve found myself giving, along the lines of, “Don’t assume you can’t do something when it’s very cheap to try testing your ability to do it,” or, “Don’t assume other people will evaluate you lowly when it’s cheap to test that belief.”

Page: 147 when overconfidence is such a terrible scourge according to the cognitive bias literature, can it ever be wise to caution people against underconfidence? Yes. First of all, overcompensation after being warned about a cognitive bias is also a recognized problem in the literature; and the literature on that talks about how bad people often are at determining whether they’re undercorrecting or overcorrecting. Second, my own experience has been that while, yes, commenters on the Internet are often overconfident, it’s very different when I’m talking to people in person. My more recent experience seems more like 90% telling people to be less underconfident, to reach higher, to be more ambitious, to test themselves, and maybe 10% cautioning people against overconfidence.

Page: 148 Several people have now told me that the most important thing I have ever said to them is: “If you never fail, you’re only trying things that are too easy and playing far below your level.” Or, phrased as a standard Umeshism: “If you can’t remember any time in the last six months when you failed, you aren’t trying to do difficult enough things.”

Page: 148 Similarly, many people’s emotional makeup is such that they experience what I would consider an excess fear—a fear disproportionate to the non-emotional consequences—of trying something and failing. A fear so strong that you become a nurse instead of a physicist because that is something you are certain you can do. Anything you might not be able to do is crossed off the list instantly. In fact, it was probably never generated as a policy option in the first place. Even when the correct course is obviously to just try the job interview and see what happens, the test will be put off indefinitely if failure feels possible.

Page: 149 If you’ve never wasted an effort, you’re filtering on far too high a required probability of success. Trying to avoid wasting effort—yes, that’s a good idea. Feeling bad when you realize you’ve wasted effort—yes, I do that too. But some people slice off the entire realm of uncertain projects because the prospect of having wasted effort, of having been publicly wrong, seems so horrible that projects in this class are not to be considered.

Page: 149 The mark of this vulnerability, and the proof that it is indeed a fallacy, would be not testing the predictions that the modest point of view makes about your inevitable failures—even when they would be cheap to test, and even when failure doesn’t lead to anything that a non-phobic third party would rate as terrible.

Page: 150 But “I can’t do that. And you can’t either!” is a suspicious statement in everyday life. Suppose I try to juggle two balls and succeed, and then I try to juggle three balls and drop them. I could conclude that I’m bad at juggling and that other people could do better than me, which comes with a loss of status. Alternatively, I could heave a sad sigh as I come to realize that juggling more than two balls is just not possible. Whereupon my social standing in comparison to others is preserved. I even get to give instruction to others about this hard-won life lesson, and smile with sage superiority at any young fools who are still trying to figure out how to juggle three balls at a time. I grew up with this fallacy, in the form of my Orthodox Jewish parents smiling at me and explaining how when they were young, they had asked a lot of religious questions too; but then they grew out of it, coming to recognize that some things were just beyond our ken. At the time, I was flabbergasted at my parents’ arrogance in assuming that because they couldn’t solve a problem as teenagers, nobody else could possibly solve it going forward. Today, I understand this viewpoint not as arrogance, but as a simple flinch away from a painful thought and toward a pleasurable one. You can admit that you failed where success was possible, or you can smile with gently forgiving superiority at the youthful enthusiasm of those who are still naive enough to attempt to do better. Of course, some things are impossible. But if one’s flinch response to failure is to perform a mental search for reasons one couldn’t have succeeded, it can be tempting to slide into false despair.

Page: 151 And then you run across somebody who tries to tell you, not just that they can’t outguess the stock market, but that you’re not allowed to become good at it either. They claim that nobody is allowed to master the task at which they failed. Your uncle tripled his savings when he bet it all on GOOG, and this person tries to wave it off as luck. Isn’t that like somebody condescendingly explaining why juggling three balls is impossible, after you’ve seen with your own eyes that your uncle can juggle four? This isn’t a naive question. Somebody who has seen the condescension of despair in action is right to treat this kind of claim as suspicious. It ought to take a massive economics literature examining the idea in theory and in practice, and responding to various apparent counterexamples, before we accept that a new kind of near-impossibility has been established in a case where the laws of physics seem to leave the possibility open.

Page: 155 Many people seem to be the equivalent of asexual with respect to the emotion of status regulation—myself among them. If you’re blind to status regulation (or even status itself) then you might still see that people with status get respect, and hunger for that respect. You might see someone with a nice car and envy the car. You might see a horrible person with a big house and think that their behavior ought not to be rewarded with a big house, and feel bitter about the smaller house you earned by being good. I can feel all of those things, but people’s overall place in the pecking order isn’t a fast, perceptual, pre-deliberative thing for me in its own right. For many people, I gather that the social order is a reified emotional thing separate from respect, separate from the goods that status can obtain, separate from any deliberative reasoning about who ought to have those goods, and separate from any belief about who consented to be part of an implicit community agreement. There’s just a felt sense that some people are lower in various status hierarchies, while others are higher; and overreaching by trying to claim significantly more status than you currently have is an offense against the reified social order, which has an immediate emotional impact, separate from any beliefs about the further consequences that a social order causes.

Page: 157 I once summarized my epistemology like so: “Try to make sure you’d arrive at different beliefs in different worlds.” You don’t want to think in such a way that you wouldn’t believe in a conclusion in a world where it were true, just because a fallacious argument could support it. Emotionally appealing mistakes are not invincible cognitive traps that nobody can ever escape from. Sometimes they’re not even that hard to escape.

Page: 159 Regardless, when I see a supposed piece of epistemology that looks to me an awful lot like my model of status regulation, but which doesn’t seem to cohere with the patterns of correct reasoning described by theorists like E. T. Jaynes, I get suspicious. When people cite the “outside view” to argue that one should stick to projects whose ambition and impressiveness befit one’s “reference class,” and announce that any effort to significantly outperform the “reference class” is epistemically suspect “overconfidence,” and insist that moving to take into account local extenuating factors, causal accounts, and justifications constitutes an illicit appeal to the “inside view” and we should rely on more obvious, visible, publicly demonstrable signs of overall auspiciousness or inauspiciousness… you know, I’m not sure this is strictly inspired by the experimental work done on people estimating their Christmas shopping completion times. I become suspicious as well when this model is deployed in practice by people who talk in the same tone of voice that I’ve come to associate with status regulation, and when an awful lot of what they say sounds to me like an elaborate rationalization of, “Who are you to act like some kind of big shot?”

Page: 161 Modesty can take the form of an explicit epistemological norm, or it can manifest in more quiet and implicit ways, as small flinches away from painful thoughts and towards more comfortable ones. It’s the latter that I think is causing most of the problem. I’ve spent a significant amount of time critiquing the explicit norms, because I think these serve an important role as canaries piling up in the coal mine, and because they are bad epistemology in their own right. But my chief hope is to illuminate that smaller and more quiet problem.

Page: 165 Somehow, someone is going to horribly misuse all the advice that is contained within this book. Nothing I know how to say will prevent this, and all I can do is advise you not to shoot your own foot off; have some common sense; pay more attention to observation than to theory in cases where you’re lucky enough to have both and they happen to conflict; put yourself and your skills on trial in every accessible instance where you’re likely to get an answer within the next minute or the next week; and update hard on single pieces of evidence if you don’t already have twenty others.

Page: 168 But I can tell you this much: bet on everything. Bet on everything where you can or will find out the answer. Even if you’re only testing yourself against one other person, it’s a way of calibrating yourself to avoid both overconfidence and underconfidence, which will serve you in good stead emotionally when you try to do inadequacy reasoning. Or so I hope.

Page: 168 Beyond that, though: if you’re trying to do something unusually well (a common enough goal for ambitious scientists, entrepreneurs, and effective altruists), then this will often mean that you need to seek out the most neglected problems. You’ll have to make use of information that isn’t widely known or accepted, and pass into relatively uncharted waters. And modesty is especially detrimental for that kind of work, because it discourages acting on private information, making less-than-certain bets, and breaking new ground. I worry that my arguments in this book could cause an overcorrection; but I have other, competing worries. The world isn’t mysteriously doomed to its current level of inadequacy. Incentive structures have parts, and can be reengineered in some cases, worked around in others. Similarly, human bias is not inherently mysterious. You can come to understand your own strengths and weaknesses through careful observation, and scholarship, and the generation and testing of many hypotheses. You can avoid overconfidence and underconfidence in an even-handed way, and recognize when a system is inadequate at doing X for cost Y without being exploitable in X, or when it is exploitable-to-someone but not exploitable-to-you. Modesty and immodesty are bad heuristics because even where they’re correcting for a real problem, you’re liable to overcorrect.
