Are you special? Pascal’s wager, anthropic reasoning, and decision theory

Here is an argument that some people might find compelling:

It may be that the world is mad, and that as the only sane person around it falls to me to make sure we don’t all kill ourselves. If that’s the case, then my impact on the world may be huge. Let’s say that in this case, I can improve the world by 1%.

Maybe the claim that I’m particularly influential, call it proposition P, isn’t certain. But at least there’s a good chance.  Subjectively it feels like about 1%, since if I looked at 100 similarly surprising facts, I would expect one of them to be true. (I wouldn’t be that surprised to discover that I’m the most important person ever…) That still leaves me with the ability to improve the world by 0.01% in expectation, which looks pretty good. I might as well not even worry about stuff I could do that would improve the world by a mere 0.001%, like being an extraordinarily successful entrepreneur.

What is wrong with this argument? Intuitively, the trouble is that out of the 7 billion people on Earth, at most a handful can be so important. So even if you discovered evidence that suggested P quite strongly, you ought to remain skeptical. Even if a magic 8-ball that lied only one time in a million told you that you were the most influential person alive, you should still bet against it: roughly 7,000 people would be told this same lie, while only one person would be told the truth. (Setting aside the fact that your mere possession of such an 8-ball constitutes much more than million-to-one evidence!)
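A quick sanity check of that arithmetic, as a toy calculation using the population and lie rate from the example:

    # Toy check of the magic-8-ball example: 7 billion people, exactly one of
    # whom is "the most influential person alive", and an 8-ball that lies
    # one time in a million.
    population = 7_000_000_000
    lie_rate = 1e-6

    # Expected number of non-special people falsely told "you are the one":
    false_positives = (population - 1) * lie_rate   # about 7,000
    # The one genuinely special person, told the truth:
    true_positives = 1 * (1 - lie_rate)

    posterior = true_positives / (true_positives + false_positives)
    print(round(false_positives))   # ~7000 people hear the same claim falsely
    print(posterior)                # ~0.00014 -- you should still bet against it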

If you have some clever argument that you aren’t “in the same reference class” as those other 7 billion people, you need to be awfully sure that it would be difficult to manufacture that argument if you weren’t in fact the most influential person. If you had a 1 in a million chance of deluding yourself into thinking you were special, then conditional on finding yourself convinced, you’d still be wrong nearly 99.99% of the time. But, replies the skeptic…

Putting a prior probability of 1 in 7 billion on something that sounds plausible is ridiculous! After all, if the argument you just gave has even a 1% chance of being wrong, then P might deserve a prior probability of up to 1%! Do you think you could make 100 arguments that compelling before you messed one of them up?

One response to this situation is to say that you really are that confident, because this kind of anthropic prior improbability is a special case. I think this is probably untenable, because your reasoning really isn’t that good. If you had to make 7 billion independent arguments as complicated as this one, I’d be surprised if you didn’t mess up at least one of them on a technicality.

Another response to this situation is to throw up your hands and discount the possibility P as an instance of Pascal’s mugging. Maybe we don’t understand why we shouldn’t act on the basis of such small possibilities of large upsides, but it’s intuitively obvious it would be wrong.

If we take the perspective of evidential or timeless decision theory, however, this problem vanishes. These theories use a different decision rule: take the action you would be happiest to learn that someone in your situation had taken. To decide what to do in situation S, compute E[U | “in situation S I would pick action A”] and E[U | “in situation S I would pick action B”], and choose whichever action leads to the higher expected utility.

In this framework, we no longer need to assign a non-negligible probability to being confused about anthropic questions, because such questions are never asked: the relevant properties are baked directly into the decision rule. Suppose that there are a billion people, P is true for exactly one of them, and I receive some evidence that is a million times more likely if P is true. Then I’m given the option to take some gambit, which increases U by 100 if P holds and decreases U by 1 otherwise. Now if I am 99% sure that my basic picture of reality is correct, I can reason:

In 99% of (impossible) possible worlds, there are 1000 observers with the evidence I have. P is true for one of them and not true for 999 of them. So if I choose to take the gambit, then everyone in my situation takes it: we lose 999 utility and gain 100, a net loss of roughly 900.

In the remaining 1% of possible worlds, maybe it’s just me, and maybe property P is true. In those worlds I would gain 100 utility. This is only 1 utility in expectation, which comes nowhere near offsetting the roughly 900-utility net loss from the other worlds.
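A minimal sketch of that calculation, using the numbers from the example (and approximating the number of observers who lack P as population divided by likelihood ratio):

    # Toy version of the evidential/timeless calculation above.  Everyone who
    # sees my evidence is "in situation S" and acts the same way, so we ask
    # what happens if everyone in S takes the gambit.
    population = 1_000_000_000
    likelihood_ratio = 1_000_000   # evidence is 1e6 times more likely given P
    p_picture_correct = 0.99       # credence that the basic picture of reality is right

    gain_if_P = 100                # gambit pays +100 utility if P holds
    loss_if_not_P = 1              # and costs 1 utility otherwise

    # In the worlds where the picture is right, roughly 1000 observers share
    # my evidence: one for whom P is true, and (approximately
    # population / likelihood_ratio) for whom it is not.
    false_observers = population / likelihood_ratio   # ~1000
    true_observers = 1

    ev_if_picture_right = true_observers * gain_if_P - false_observers * loss_if_not_P
    # In the remaining worlds, suppose (generously) it is just me and P is true.
    ev_if_picture_wrong = gain_if_P

    ev_take_gambit = (p_picture_correct * ev_if_picture_right
                      + (1 - p_picture_correct) * ev_if_picture_wrong)
    print(ev_take_gambit)          # about -890: taking the gambit is a net loss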

Of course, I’ve swept a few important things under the rug; most importantly I’ve assumed that U is non-indexical. (It works fine if U = “total # of happy years of life” or U = “total # of happy years of life for people with my experiences so far” or so on. But if U = “# of happy years of life I have” then it is going to come down to anthropic questions in the definition of “I”.)

The original Pascal’s mugging

Incidentally, Pascal’s mugging is structurally identical to the argument we just discussed. Nick Bostrom describes an unarmed mugger who approaches M. Pascal:

Mugger: Let us say that the 10 livres that you have in your wallet are worth to you the equivalent of one happy day. Let’s call this quantity of good 1 Util. So I ask you to give up 1 Util. In return, I could promise to perform the magic tomorrow that will give you an extra 10 quadrillion happy days, i.e. 10 quadrillion Utils. Since you say there is a 1 in 10 quadrillion probability that I will fulfill my promise, this would be a fair deal. The expected Utility for you would be zero. But I feel generous this evening, and I will make you a better deal: If you hand me your wallet, I will perform magic that will give you an extra 1,000 quadrillion happy days of life.

Pascal: I admit I see no flaw in your mathematics.

Whatever clever argument we might suggest Pascal could use to decide that the mugger’s offer is unattractive, the mugger could always ask: “But surely, M. Pascal, there is some chance that you are mistaken?” This seems to be something of a reductio against unbounded utility. Robin Hanson is reported to have observed that, in any world large enough to contain 10 quadrillion (or whatever number) of valued objects, there are likely to be a comparable number of observers, and most of those who believe they have the power to create or destroy so much value must be deluded. But more importantly, each of them who is so deluded could create a constant amount of value themselves. So the large expected value from the possibility that I’m not deluded is balanced by the large expected value of influencing many more people’s (correlated) actions if I am deluded. And now the situation is transparently the same as with our proposition P. We don’t need to assign any probabilities near 1 to avoid the trouble.
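Here is a toy sketch of that balancing, with made-up constants: assume a world containing N utils also contains about N observers who think they face this choice, all but one of them deluded, and that under the decision rule above my choice is effectively everyone’s choice.

    # Toy sketch of the balancing argument attributed to Hanson above.
    # Illustrative assumptions: a world with N utils has ~N observers facing
    # this choice, all but one of them deluded, and the one non-deluded
    # observer's mugger actually pays out.
    def total_utils(N, pay):
        # Total utility across all correlated observers if each pays (or not).
        observers = N
        wallet_value = 1.0           # the 10 livres = 1 util, as in the dialogue
        if pay:
            # Only the single non-deluded observer's payment buys N utils;
            # everyone else just loses their wallet.
            return 1 * N - observers * wallet_value   # ~0
        else:
            # Everyone keeps the wallet and enjoys their happy day.
            return observers * wallet_value           # ~N

    for N in (1e6, 1e15, 1e30):
        print(N, total_utils(N, pay=True), total_utils(N, pay=False))
    # Both outcomes scale with N, so the mugger's huge promise never dominates.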

(Of course, you are still going to run into divergent sums if you accept the kind of arguments Pascal does in the example, which seems to be a fundamental problem with unbounded utilities. But Pascal’s mugging is already a problem even if you impose some mind-boggling upper bound on the size of the universe, and at least this response works in that case.)

2 thoughts on “Are you special? Pascal’s wager, anthropic reasoning, and decision theory”

  1. The way you introduce this problem does make it seem exceptionally similar to the Pascal’s Mugging situation. In particular (for some odd reason Nick left this part out of the paper where he introduced PM to mainstream philosophy, possibly because he didn’t want to try introducing Solomonoff Induction), the essential problem with PM is that the prior probability of hypotheses involving large numbers, under a complexity-based prior, falls off vastly more slowly than the numbers themselves grow. This is where the entire problem with PM comes from: that, plus the fact that the mugger is at least, say, 1.00001 times as likely to follow through on stated threats as to follow through on the opposite of stated threats, i.e., the likelihood ratio from an actual SuperMugger’s behavior to observed reality is not *exactly* 1:1 for the ones that reward your behavior vs. those that punish the behavior.

    Anyway, the problem with PM is that under computational-complexity formulations of priors, the probability decreases vastly more slowly than the utilities increase in size. In your introduction, the problem is that people trying to apply a calibration-overconfidence principle will find themselves unable to drive down their priors very far. Hanson’s reply, if valid, is a solution to PM because it restores the balance of prior probability falloff vs. utility increase. It would work the same way for the problem you introduced if we just decided that we’re allowed to actually say “seven billion to one” for prior odds when there’s a reference class of known size that large, the same way we’re actually allowed to say “125 million to one” for lottery tickets.
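    (A toy illustration of that point, measuring “description length” crudely as the length of a short expression naming the number; this measure, and the expressions below, are just stand-ins for actual program length:)

        import math

        # If the prior on "the mugger controls N utils" falls off like
        # 2**(-description length of N), then prior * N grows without bound,
        # because very short expressions can name enormous numbers.
        # "Description length" here is 8 bits per character of a short
        # expression naming N -- a crude stand-in for program length.
        hypotheses = [
            ("10**6",         6),        # (expression naming N, log10 of N)
            ("10**100",       100),
            ("10**(10**6)",   10**6),
            ("10**(10**100)", 10**100),
        ]

        for expr, log10_N in hypotheses:
            description_bits = 8 * len(expr)
            log10_prior = -description_bits * math.log10(2)
            print(f"{expr:>14}: ~{description_bits} bits, log10(prior * N) ~ {log10_prior + log10_N:.3g}")
        # The last column grows without bound: the complexity penalty can't keep up with N.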

    In Hanson’s original solution to PM, though, we get the problem that we are basically *never* allowed to believe in the Mugger even if they part the heavens with a gesture, show us the machine running reality and give us a careful explanation of exactly why they decided to present us with this problem. I am still not sure how to resolve this one.

    A similar but not quite analogous problem of unconvinceability would apply if we allowed “seven billion to one” without overconfidence adjustments for the prior of helping the world, but then started applying lots of clever-sounding overconfidence adjustments whenever somebody tried to build up a likelihood ratio in favor of being able to help the world (e.g., “Oh, sure, you scored at the one-in-a-million level on that test of mathematical ability, but maybe someone else has some other clever-sounding justification for thinking they can help the world”). In this case the problem seems to stem from a one-sided skepticism in which we’re allowed to assign very extreme prior odds without worrying about overconfidence, but then we’re not allowed to use any extreme likelihood ratios to climb back up. In real life, extreme likelihood ratios for extreme improbabilities are rather common, e.g., what was the prior probability of my typing this exact paragraph? Making it more difficult to climb out of the prior improbability of saving the world must imply some special skeptical burden beyond that involved in a mere -33-bit prior or so; unlike the PM case, 33 bits of information wouldn’t ordinarily be difficult to obtain unless there were some special epistemic difficulty associated with getting extreme likelihood ratios. We can find candidates for what these special epistemic difficulties might be, but then the situation has moved beyond what’s analogous to Pascal’s Mugging.

    • I basically agree with your summary.

      I wrote a bit about the unconvinceability issue here.

      For example, you say “In real life, extreme likelihood ratios for extreme improbabilities are rather common, e.g., what was the prior probability of my typing this exact paragraph?” But the magic in that case was that the hypothesis that you would write this exact paragraph is a very complex hypothesis. It is easy to get lots of evidence for complex hypotheses, and much harder (and in extreme cases impossible) to get lots of evidence for simple hypotheses. My intuition is that 33 bits is not many for complicated hypotheses, but it is an awful lot for simple hypotheses. Maybe you disagree? I’m not sure if I’m slicing things up the right way, and it would be cool if my views shifted a lot.

      I think one-sided skepticism is justified based on anthropic considerations, for particular simple indexical assertions like “I am super special and my decisions significantly affect 10^100 other decision-makers.” This is pretty much just the simulation argument: if you think you are the one in 10^100, you need to think about how many of the 10^100 are delusional, and how many are the one.

      You can’t stretch “but lots of people could come up with some other clever-sounding justification” that far. The richest man in the world really is special, and he knows that while maybe there are O(10) other people who can come up with similarly compelling arguments for their own specialness, there aren’t O(100). Similarly, the world’s most impressive academic by popular vote knows there are maybe O(100) similarly positioned people but not O(1000). [I made up numbers, but hopefully the idea rings true.]
