<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:georss="http://www.georss.org/georss" xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#" xmlns:media="http://search.yahoo.com/mrss/"
	>
<channel>
	<title>Comments on: Are you special? Pascal&#8217;s wager, anthropic reasoning, and decision theory</title>
	<atom:link href="http://ordinaryideas.wordpress.com/2012/12/05/are-you-special-pascals-wager-anthropic-reasoning-and-decision-theory/feed/" rel="self" type="application/rss+xml" />
	<link>http://ordinaryideas.wordpress.com/2012/12/05/are-you-special-pascals-wager-anthropic-reasoning-and-decision-theory/</link>
	<description>As advertised</description>
	<lastBuildDate>Wed, 17 Dec 2014 04:51:36 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.com/</generator>
	<item>
		<title>By: paulfchristiano</title>
		<link>http://ordinaryideas.wordpress.com/2012/12/05/are-you-special-pascals-wager-anthropic-reasoning-and-decision-theory/#comment-111</link>
		<dc:creator><![CDATA[paulfchristiano]]></dc:creator>
		<pubDate>Sat, 09 Feb 2013 03:18:50 +0000</pubDate>
		<guid isPermaLink="false">http://ordinaryideas.wordpress.com/?p=221#comment-111</guid>
		<description><![CDATA[I basically agree with your summary.

I wrote a bit about the unconvinceability issue &lt;a href=&quot;http://ordinaryideas.wordpress.com/2012/12/11/improbable-simple-hypotheses-are-unbelievable/&quot; rel=&quot;nofollow&quot;&gt;here&lt;/a&gt;.

For example, you say &quot;In real life, extreme likelihood ratios for extreme improbabilities are rather common, e.g., what was the prior probability of my typing this exact paragraph?&quot; But the magic in that case was that the hypothesis that you would write this exact paragraph is a very complex hypothesis. It is easy to get lots of evidence for complex hypotheses, and much harder (and in extreme cases impossible) to get lots of evidence for simple hypotheses. My intuition is that 33 bits is not many for complicated hypotheses, but it is an awful lot for simple hypotheses. Maybe you disagree? I&#039;m not sure if I&#039;m slicing things up the right way, and it would be cool if my views shifted a lot.

I think one-sided skepticism is justified based on anthropic considerations, for particular simple indexical assertions like &quot;I am super special and my decisions significantly affect 10^100 other decision-makers.&quot; This is pretty much just the simulation argument---if you think you are the one in 10^100, you need to think about how many of the 10^100 are delusional, and how many are the one.

You can&#039;t stretch &quot;but lots of people could come up with some other clever-sounding justification&quot; that far. The richest man in the world really is special, and he knows that while maybe there are O(10) other people who can come up with similarly compelling arguments for their own specialness, there aren&#039;t O(100). Similarly, the world&#039;s most impressive academic by popular vote knows there are maybe O(100) similarly poised people but not O(1000). [I made up numbers, but hopefully the idea rings true.]]]></description>
		<content:encoded><![CDATA[<p>I basically agree with your summary.</p>
<p>I wrote a bit about the unconvinceability issue <a href="http://ordinaryideas.wordpress.com/2012/12/11/improbable-simple-hypotheses-are-unbelievable/" rel="nofollow">here</a>.</p>
<p>For example, you say &#8220;In real life, extreme likelihood ratios for extreme improbabilities are rather common, e.g., what was the prior probability of my typing this exact paragraph?&#8221; But the magic in that case was that the hypothesis that you would write this exact paragraph is a very complex hypothesis. It is easy to get lots of evidence for complex hypotheses, and much harder (and in extreme cases impossible) to get lots of evidence for simple hypotheses. My intuition is that 33 bits is not many for complicated hypotheses, but it is an awful lot for simple hypotheses. Maybe you disagree? I&#8217;m not sure if I&#8217;m slicing things up the right way, and it would be cool if my views shifted a lot.</p>
<p>I think one-sided skepticism is justified based on anthropic considerations, for particular simple indexical assertions like &#8220;I am super special and my decisions significantly affect 10^100 other decision-makers.&#8221; This is pretty much just the simulation argument&#8212;if you think you are the one in 10^100, you need to think about how many of the 10^100 are delusional, and how many are the one.</p>
<p>You can&#8217;t stretch &#8220;but lots of people could come up with some other clever-sounding justification&#8221; that far. The richest man in the world really is special, and he knows that while maybe there are O(10) other people who can come up with similarly compelling arguments for their own specialness, there aren&#8217;t O(100). Similarly, the world&#8217;s most impressive academic by popular vote knows there are maybe O(100) similarly poised people but not O(1000). [I made up numbers, but hopefully the idea rings true.]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Eliezer Yudkowsky</title>
		<link>http://ordinaryideas.wordpress.com/2012/12/05/are-you-special-pascals-wager-anthropic-reasoning-and-decision-theory/#comment-110</link>
		<dc:creator><![CDATA[Eliezer Yudkowsky]]></dc:creator>
		<pubDate>Fri, 08 Feb 2013 19:23:56 +0000</pubDate>
		<guid isPermaLink="false">http://ordinaryideas.wordpress.com/?p=221#comment-110</guid>
		<description><![CDATA[The way you introduce this problem does make it seem exceptionally similar to the Pascal&#039;s Mugging situation.  In particular - for some odd reason Nick left this part out of the paper where he introduced PM to mainstream philosophy, possibly because he didn&#039;t want to try introducing Solomonoff Induction - the essential problem with PM is that the computational complexity of hypotheses involving large numbers falls off vastly slower than the hypotheses themselves increase in size.  This is where the entire problem with PM comes from - that, plus the fact that the mugger is at least, say, 1.00001 times as likely to follow through on stated threats as to follow through on the opposite of stated threats, i.e., the likelihood ratio from an actual SuperMugger&#039;s behavior to observed reality is not *exactly* 1:1 for the ones that reward your behavior vs. those that punish the behavior.

Anyway, the problem with PM is that under computational-complexity formulations of priors, the probability decreases vastly more slowly than the utilities increase in size.  In your introduction, the problem is that people trying to apply a calibration-overconfidence principle will find themselves unable to drive down their priors very far.  Hanson&#039;s reply, if valid, is a solution to PM because it restores the balance of prior probability falloff vs. utility increase.  It would work the same way for the problem you introduced if we just decided that we&#039;re allowed to actually say &quot;seven billion to one&quot; for prior odds when there&#039;s a reference class of known size that large, the same way we&#039;re actually allowed to say &quot;125 million to one&quot; for lottery tickets.

In Hanson&#039;s original solution to PM, though, we get the problem that we are basically *never* allowed to believe in the Mugger even if they part the heavens with a gesture, show us the machine running reality and give us a careful explanation of exactly why they decided to present us with this problem.  I am still not sure how to resolve this one.

A similar but not quite analogous problem of unconvinceability would apply if we allowed &quot;seven billion to one&quot; without overconfidence adjustments for the prior of helping the world, but then started applying lots of clever-sounding overconfidence adjustments whenever somebody tried to build up a likelihood ratio in favor of being able to help the world - e.g. &quot;Oh, sure, you scored over a million to one on that test of mathematical ability, but maybe someone else has some other clever-sounding justification for thinking they can help the world.&quot;  In this case the problem seems to stem from a one-sided skepticism in which we&#039;re allowed to assign very extreme prior odds without worrying about overconfidence, but then we&#039;re not allowed to use any extreme likelihood ratios to climb back up.  In real life, extreme likelihood ratios for extreme improbabilities are rather common, e.g., what was the prior probability of my typing this exact paragraph?  Making it more difficult to climb out of the prior improbability of saving the world must imply some special skeptical burden beyond that involved in a mere -33 bit prior or so - unlike the PM case, 33 bits of info wouldn&#039;t ordinarily be difficult to obtain unless there was some special epistemic difficulty associated with getting extreme likelihood ratios.  We can find candidates for what these special epistemic difficulties might be, but then the situation has moved beyond what&#039;s analogous to Pascal&#039;s Mugging.]]></description>
		<content:encoded><![CDATA[<p>The way you introduce this problem does make it seem exceptionally similar to the Pascal&#8217;s Mugging situation.  In particular &#8211; for some odd reason Nick left this part out of the paper where he introduced PM to mainstream philosophy, possibly because he didn&#8217;t want to try introducing Solomonoff Induction &#8211; the essential problem with PM is that the computational complexity of hypotheses involving large numbers falls off vastly slower than the hypotheses themselves increase in size.  This is where the entire problem with PM comes from &#8211; that, plus the fact that the mugger is at least, say, 1.00001 times as likely to follow through on stated threats as to follow through on the opposite of stated threats, i.e., the likelihood ratio from an actual SuperMugger&#8217;s behavior to observed reality is not *exactly* 1:1 for the ones that reward your behavior vs. those that punish the behavior.</p>
<p>Anyway, the problem with PM is that under computational-complexity formulations of priors, the probability decreases vastly more slowly than the utilities increase in size.  In your introduction, the problem is that people trying to apply a calibration-overconfidence principle will find themselves unable to drive down their priors very far.  Hanson&#8217;s reply, if valid, is a solution to PM because it restores the balance of prior probability falloff vs. utility increase.  It would work the same way for the problem you introduced if we just decided that we&#8217;re allowed to actually say &#8220;seven billion to one&#8221; for prior odds when there&#8217;s a reference class of known size that large, the same way we&#8217;re actually allowed to say &#8220;125 million to one&#8221; for lottery tickets.</p>
<p>In Hanson&#8217;s original solution to PM, though, we get the problem that we are basically *never* allowed to believe in the Mugger even if they part the heavens with a gesture, show us the machine running reality and give us a careful explanation of exactly why they decided to present us with this problem.  I am still not sure how to resolve this one.</p>
<p>A similar but not quite analogous problem of unconvinceability would apply if we allowed &#8220;seven billion to one&#8221; without overconfidence adjustments for the prior of helping the world, but then started applying lots of clever-sounding overconfidence adjustments whenever somebody tried to build up a likelihood ratio in favor of being able to help the world &#8211; e.g. &#8220;Oh, sure, you scored over a million to one on that test of mathematical ability, but maybe someone else has some other clever-sounding justification for thinking they can help the world.&#8221;  In this case the problem seems to stem from a one-sided skepticism in which we&#8217;re allowed to assign very extreme prior odds without worrying about overconfidence, but then we&#8217;re not allowed to use any extreme likelihood ratios to climb back up.  In real life, extreme likelihood ratios for extreme improbabilities are rather common, e.g., what was the prior probability of my typing this exact paragraph?  Making it more difficult to climb out of the prior improbability of saving the world must imply some special skeptical burden beyond that involved in a mere -33 bit prior or so &#8211; unlike the PM case, 33 bits of info wouldn&#8217;t ordinarily be difficult to obtain unless there was some special epistemic difficulty associated with getting extreme likelihood ratios.  We can find candidates for what these special epistemic difficulties might be, but then the situation has moved beyond what&#8217;s analogous to Pascal&#8217;s Mugging.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
