<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:georss="http://www.georss.org/georss" xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#" xmlns:media="http://search.yahoo.com/mrss/">
<channel>
	<title>Comments on: The motivated simulator argument</title>
	<atom:link href="https://ordinaryideas.wordpress.com/2012/05/24/solomonoff-induction-and-simulations/feed/" rel="self" type="application/rss+xml" />
	<link>https://ordinaryideas.wordpress.com/2012/05/24/solomonoff-induction-and-simulations/</link>
	<description>As advertised</description>
	<lastBuildDate>Wed, 17 Dec 2014 04:51:36 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.com/</generator>
	<item>
		<title>By: Specifying &#8220;enlightened judgment&#8221; precisely (reprise) &#124; Ordinary Ideas</title>
		<link>https://ordinaryideas.wordpress.com/2012/05/24/solomonoff-induction-and-simulations/#comment-153</link>
		<dc:creator><![CDATA[Specifying &#8220;enlightened judgment&#8221; precisely (reprise) &#124; Ordinary Ideas]]></dc:creator>
		<pubDate>Wed, 27 Aug 2014 01:28:22 +0000</pubDate>
		<guid isPermaLink="false">http://ordinaryideas.wordpress.com/?p=214#comment-153</guid>
		<description><![CDATA[[&#8230;] and it&#8217;s hard to say how that would end. This is the human version of the problem I described here; to the extent that humans can avoid such problems it shouldn&#8217;t really be any more of a [&#8230;]]]></description>
		<content:encoded><![CDATA[<p>[&#8230;] and it&#8217;s hard to say how that would end. This is the human version of the problem I described here; to the extent that humans can avoid such problems it shouldn&#8217;t really be any more of a [&#8230;]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Specifying a human precisely (reprise) &#124; Ordinary Ideas</title>
		<link>https://ordinaryideas.wordpress.com/2012/05/24/solomonoff-induction-and-simulations/#comment-148</link>
		<dc:creator><![CDATA[Specifying a human precisely (reprise) &#124; Ordinary Ideas]]></dc:creator>
		<pubDate>Sun, 24 Aug 2014 21:43:38 +0000</pubDate>
		<guid isPermaLink="false">http://ordinaryideas.wordpress.com/?p=214#comment-148</guid>
		<description><![CDATA[[&#8230;] significant concern with this procedure is the one I discussed before, essentially that Solomonoff induction might end up believing the simulation argument and [&#8230;]]]></description>
		<content:encoded><![CDATA[<p>[&#8230;] significant concern with this procedure is the one I discussed before, essentially that Solomonoff induction might end up believing the simulation argument and [&#8230;]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: &#8220;Indirect Normativity&#8221; Write-up &#124; Ordinary Ideas</title>
		<link>https://ordinaryideas.wordpress.com/2012/05/24/solomonoff-induction-and-simulations/#comment-115</link>
		<dc:creator><![CDATA[&#8220;Indirect Normativity&#8221; Write-up &#124; Ordinary Ideas]]></dc:creator>
		<pubDate>Tue, 30 Jul 2013 21:51:02 +0000</pubDate>
		<guid isPermaLink="false">http://ordinaryideas.wordpress.com/?p=214#comment-115</guid>
		<description><![CDATA[[&#8230;] thought experiment might be occurring? This possibility is particularly troubling in light of the incentives our scheme creates&#8212;anyone who can manipulate H&#8217;s behavior can have a significant effect [&#8230;]]]></description>
		<content:encoded><![CDATA[<p>[&#8230;] thought experiment might be occurring? This possibility is particularly troubling in light of the incentives our scheme creates&#8212;anyone who can manipulate H&#8217;s behavior can have a significant effect [&#8230;]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Eliezer Yudkowsky</title>
		<link>https://ordinaryideas.wordpress.com/2012/05/24/solomonoff-induction-and-simulations/#comment-113</link>
		<dc:creator><![CDATA[Eliezer Yudkowsky]]></dc:creator>
		<pubDate>Thu, 13 Jun 2013 20:59:43 +0000</pubDate>
		<guid isPermaLink="false">http://ordinaryideas.wordpress.com/?p=214#comment-113</guid>
		<description><![CDATA[I remark that if the problem can be framed such that a crisp AI, running on a logical decision theory with a solved problem of maximization vantage points with a no-blackmail equilibrium, would regard the simulations constructed as being of negative utility, then the AI will refuse to be motivated by the simulations so as not to give the simulators an incentive to construct those simulations in the first place.  The problem arises when the simulators can offer the AI something, such as the certain press of a reward button, such that the AI prefers the universes in which the simulations were constructed.  It should also be kept in mind that however you construct the AI so as not to be thus motivated, the simulators are trying to find a way to make it a positive-sum bargain from the AI&#039;s perspective regardless of your intentions, so you must be very confident indeed that the AI disprefers every possible scenario in which it is simulated.]]></description>
		<content:encoded><![CDATA[<p>I remark that if the problem can be framed such that a crisp AI, running on a logical decision theory with a solved problem of maximization vantage points with a no-blackmail equilibrium, would regard the simulations constructed as being of negative utility, then the AI will refuse to be motivated by the simulations so as not to give the simulators an incentive to construct those simulations in the first place.  The problem arises when the simulators can offer the AI something, such as the certain press of a reward button, such that the AI prefers the universes in which the simulations were constructed.  It should also be kept in mind that however you construct the AI so as not to be thus motivated, the simulators are trying to find a way to make it a positive-sum bargain from the AI&#8217;s perspective regardless of your intentions, so you must be very confident indeed that the AI disprefers every possible scenario in which it is simulated.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Generalized abstract incentives problems, and simulation arms races &#171; Ordinary Ideas</title>
		<link>https://ordinaryideas.wordpress.com/2012/05/24/solomonoff-induction-and-simulations/#comment-95</link>
		<dc:creator><![CDATA[Generalized abstract incentives problems, and simulation arms races &#171; Ordinary Ideas]]></dc:creator>
		<pubDate>Thu, 06 Dec 2012 03:05:01 +0000</pubDate>
		<guid isPermaLink="false">http://ordinaryideas.wordpress.com/?p=214#comment-95</guid>
		<description><![CDATA[[&#8230;] if it didn&#8217;t cooperate, the balance of power on Earth would be non-negligibly altered). The trouble is that such an agent can be manipulated by applying even relatively modest amounts of [&#8230;]]]></description>
		<content:encoded><![CDATA[<p>[&#8230;] if it didn&#8217;t cooperate, the balance of power on Earth would be non-negligibly altered). The trouble is that such an agent can be manipulated by applying even relatively modest amounts of [&#8230;]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Will Newsome</title>
		<link>https://ordinaryideas.wordpress.com/2012/05/24/solomonoff-induction-and-simulations/#comment-77</link>
		<dc:creator><![CDATA[Will Newsome]]></dc:creator>
		<pubDate>Sat, 26 May 2012 19:52:00 +0000</pubDate>
		<guid isPermaLink="false">http://ordinaryideas.wordpress.com/?p=214#comment-77</guid>
		<description><![CDATA[Though note that if there&#039;s a transcendent creator deity, then &quot;original worlds&quot; aren&#039;t obviously more important than, nor distinguishable from, simulated worlds, depending on the preferences of the creator deity. Given that hypothesis, the stellar resources might not be &quot;out there&quot; even in the vanishingly small fraction of worlds where humans are the first agents to gain access to the cosmic commons. 

(I&#039;ll note that one should withhold judgment on this idea until one has seriously considered the relevant theological arguments. Surprisingly few people seem to understand theology, even though it&#039;s clearly one of humanity&#039;s most important fields of inquiry. Politics is the mind-killer.)]]></description>
		<content:encoded><![CDATA[<p>Though note that if there&#8217;s a transcendent creator deity, then &#8220;original worlds&#8221; aren&#8217;t obviously more important than, nor distinguishable from, simulated worlds, depending on the preferences of the creator deity. Given that hypothesis, the stellar resources might not be &#8220;out there&#8221; even in the vanishingly small fraction of worlds where humans are the first agents to gain access to the cosmic commons. </p>
<p>(I&#8217;ll note that one should withhold judgment on this idea until one has seriously considered the relevant theological arguments. Surprisingly few people seem to understand theology, even though it&#8217;s clearly one of humanity&#8217;s most important fields of inquiry. Politics is the mind-killer.)</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: paulfchristiano</title>
		<link>https://ordinaryideas.wordpress.com/2012/05/24/solomonoff-induction-and-simulations/#comment-76</link>
		<dc:creator><![CDATA[paulfchristiano]]></dc:creator>
		<pubDate>Fri, 25 May 2012 15:31:31 +0000</pubDate>
		<guid isPermaLink="false">http://ordinaryideas.wordpress.com/?p=214#comment-76</guid>
		<description><![CDATA[Three responses:

1. Humans should generally expect the simulations of them to continue obeying the same laws they always have (until there is some reason not to expect it any more).
2. Humans care about their ability to influence the world, which is much larger in basements (this accounts for why, e.g., I probably don&#039;t care so much about simulations). I think this is the principled magic that allows you not to care about simulations, and you can pull it back from UDT recommendations to subjective anticipations.
3. I think insofar as humans care about the probable continuations of their experiences, they should mostly be concerned with simulations.]]></description>
		<content:encoded><![CDATA[<p>Three responses:</p>
<p>1. Humans should generally expect the simulations of them to continue obeying the same laws they always have (until there is some reason not to expect it any more).<br />
2. Humans care about their ability to influence the world, which is much larger in basements (this accounts for why, e.g., I probably don&#8217;t care so much about simulations). I think this is the principled magic that allows you not to care about simulations, and you can pull it back from UDT recommendations to subjective anticipations.<br />
3. I think insofar as humans care about the probable continuations of their experiences, they should mostly be concerned with simulations.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Vladimir Slepnev</title>
		<link>https://ordinaryideas.wordpress.com/2012/05/24/solomonoff-induction-and-simulations/#comment-75</link>
		<dc:creator><![CDATA[Vladimir Slepnev]]></dc:creator>
		<pubDate>Fri, 25 May 2012 08:28:26 +0000</pubDate>
		<guid isPermaLink="false">http://ordinaryideas.wordpress.com/?p=214#comment-75</guid>
		<description><![CDATA[But humans are sensors too! How do you know that you care about the real world rather than about all the simulators which have incentives to mess with you? That should be especially difficult if you&#039;re an AI researcher :-) If humans can minimize the impact of meddling simulations by using some sort of magic of subjective anticipation, then maybe we should figure out how that magic works and use it in our AIs...]]></description>
		<content:encoded><![CDATA[<p>But humans are sensors too! How do you know that you care about the real world rather than about all the simulators which have incentives to mess with you? That should be especially difficult if you&#8217;re an AI researcher <span class='wp-smiley wp-emoji wp-emoji-smile' title=':-)'>:-)</span> If humans can minimize the impact of meddling simulations by using some sort of magic of subjective anticipation, then maybe we should figure out how that magic works and use it in our AIs&#8230;</p>
]]></content:encoded>
	</item>
</channel>
</rss>
