<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:georss="http://www.georss.org/georss"
	xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
	xmlns:media="http://search.yahoo.com/mrss/"
	>
<channel>
	<title>Comments on: Avoiding Simulation Warfare with Bounded Complexity Measures</title>
	<atom:link href="https://ordinaryideas.wordpress.com/2011/12/21/avoiding-simulation-warfare-with-bounded-complexity-measures/feed/" rel="self" type="application/rss+xml" />
	<link>https://ordinaryideas.wordpress.com/2011/12/21/avoiding-simulation-warfare-with-bounded-complexity-measures/</link>
	<description>As advertised</description>
	<lastBuildDate>Wed, 17 Dec 2014 04:51:36 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.com/</generator>
	<item>
		<title>By: Specifying a human precisely (reprise) &#124; Ordinary Ideas</title>
		<link>https://ordinaryideas.wordpress.com/2011/12/21/avoiding-simulation-warfare-with-bounded-complexity-measures/#comment-149</link>
		<dc:creator><![CDATA[Specifying a human precisely (reprise) &#124; Ordinary Ideas]]></dc:creator>
		<pubDate>Sun, 24 Aug 2014 21:43:40 +0000</pubDate>
		<guid isPermaLink="false">http://ordinaryideas.wordpress.com/?p=72#comment-149</guid>
		<description><![CDATA[[&#8230;] significant concern with this procedure is the one I discussed before, essentially that Solomonoff induction might end up believing the simulation argument and so this [&#8230;]]]></description>
		<content:encoded><![CDATA[<p>[&#8230;] significant concern with this procedure is the one I discussed before, essentially that Solomonoff induction might end up believing the simulation argument and so this [&#8230;]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Solomonoff Induction and Simulations &#171; Ordinary Ideas</title>
		<link>https://ordinaryideas.wordpress.com/2011/12/21/avoiding-simulation-warfare-with-bounded-complexity-measures/#comment-72</link>
		<dc:creator><![CDATA[Solomonoff Induction and Simulations &#171; Ordinary Ideas]]></dc:creator>
		<pubDate>Thu, 24 May 2012 16:56:42 +0000</pubDate>
		<guid isPermaLink="false">http://ordinaryideas.wordpress.com/?p=72#comment-72</guid>
		<description><![CDATA[[...] obvious way to avoid this sort of thing is to avoid the universal prior. I mentioned before the possibility of using a prior which penalizes algorithms which use a lot of time or a lot of [...]]]></description>
		<content:encoded><![CDATA[<p>[&#8230;] obvious way to avoid this sort of thing is to avoid the universal prior. I mentioned before the possibility of using a prior which penalizes algorithms which use a lot of time or a lot of [&#8230;]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: paulfchristiano</title>
		<link>https://ordinaryideas.wordpress.com/2011/12/21/avoiding-simulation-warfare-with-bounded-complexity-measures/#comment-10</link>
		<dc:creator><![CDATA[paulfchristiano]]></dc:creator>
		<pubDate>Sun, 25 Dec 2011 06:21:50 +0000</pubDate>
		<guid isPermaLink="false">http://ordinaryideas.wordpress.com/?p=72#comment-10</guid>
		<description><![CDATA[If you are using the universal prior, the AI impersonator (say) never actually has to run, and it gets access to unlimited resources (its theoretical output is then just a mathematical abstraction about which our AI reasons).

If you are using something like the speed prior and dealing with the concerns of the post, then the issue is just that pointing to a human in the universe is probably harder than pointing to something just a little later in history, when the universe has been (spatially) tiled with computronium or what have you. An AI living in our future will have a more direct way of learning about the experimental setup than simulating the universe, so no computational limitation on such an AI will help avoid this failure mode.]]></description>
		<content:encoded><![CDATA[<p>If you are using the universal prior, the AI impersonator (say) never actually has to run, and it gets access to unlimited resources (its theoretical output is then just a mathematical abstraction about which our AI reasons).</p>
<p>If you are using something like the speed prior and dealing with the concerns of the post, then the issue is just that pointing to a human in the universe is probably harder than pointing to something just a little later in history, when the universe has been (spatially) tiled with computronium or what have you. An AI living in our future will have a more direct way of learning about the experimental setup than simulating the universe, so no computational limitation on such an AI will help avoid this failure mode.</p>
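<p>To make the contrast concrete, here is a minimal toy sketch of the two measures being compared above. Nothing in it comes from the post or this comment: the function names and all the numbers are made up for illustration. The universal prior charges a program only its description length, about |p| bits, while a speed-prior-style measure (in the spirit of Schmidhuber&#8217;s speed prior) also charges roughly log2(t) bits for t steps of computation.</p>
<pre>
import math

# Costs are in bits: cost = -log2(prior weight), so a lower cost
# means a higher prior probability. All numbers are illustrative.

def universal_cost(length_bits):
    # Length-only prior: weight 2^-|p|, so the cost is just |p| bits.
    # Runtime never enters, so a hypothesis may compute forever for free.
    return length_bits

def speed_cost(length_bits, steps):
    # Resource-penalized prior: weight roughly 2^-(|p| + log2 t),
    # so t steps of computation add about log2(t) bits of cost.
    return length_bits + math.log2(steps)

# Two rival hypotheses that explain the same observations:
honest = (1000, 10**6)      # direct model of the experimental setup
simulator = (990, 10**30)   # slightly shorter, but it has to simulate
                            # a whole universe to produce its output

for name, (l, t) in [("honest", honest), ("simulator", simulator)]:
    print(name, "universal:", universal_cost(l),
          "speed:", round(speed_cost(l, t), 1))

# Under the length-only prior the simulator wins (990 vs 1000 bits);
# once runtime is charged it loses (~1089.7 vs ~1019.9 bits).
</pre>
<p>The sketch only shows the mechanism; the point above stands, since an AI already living in our future needs no expensive simulation to point at the experimental setup in the first place.</p>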
]]></content:encoded>
	</item>
	<item>
		<title>By: endoself</title>
		<link>https://ordinaryideas.wordpress.com/2011/12/21/avoiding-simulation-warfare-with-bounded-complexity-measures/#comment-9</link>
		<dc:creator><![CDATA[endoself]]></dc:creator>
		<pubDate>Sun, 25 Dec 2011 05:53:36 +0000</pubDate>
		<guid isPermaLink="false">http://ordinaryideas.wordpress.com/?p=72#comment-9</guid>
		<description><![CDATA[Do you think it is likely that an AGI with a realistic amount of resources would be able to find a human given a program describing the universe, or are you considering this possibility even though it is unlikely? I think it is something we need to be prepared for, but you seem to think it is likely, which surprises me.]]></description>
		<content:encoded><![CDATA[<p>Do you think it is likely that an AGI with a realistic amount of resources would be able to find a human given a program describing the universe, or are you considering this possibility even though it is unlikely? I think it is something we need to be prepared for, but you seem to think it is likely, which surprises me.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
