<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:georss="http://www.georss.org/georss" xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#" xmlns:media="http://search.yahoo.com/mrss/"
	>
<channel>
	<title>Comments on: Hazards for Formal Specifications</title>
	<atom:link href="http://ordinaryideas.wordpress.com/2011/12/15/hazards/feed/" rel="self" type="application/rss+xml" />
	<link>http://ordinaryideas.wordpress.com/2011/12/15/hazards/</link>
	<description>As advertised</description>
	<lastBuildDate>Wed, 17 Dec 2014 04:51:36 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.com/</generator>
	<item>
		<title>By: Specifying a human precisely (reprise) &#124; Ordinary Ideas</title>
		<link>http://ordinaryideas.wordpress.com/2011/12/15/hazards/#comment-147</link>
		<dc:creator><![CDATA[Specifying a human precisely (reprise) &#124; Ordinary Ideas]]></dc:creator>
		<pubDate>Sun, 24 Aug 2014 21:43:36 +0000</pubDate>
		<guid isPermaLink="false">http://ordinaryideas.wordpress.com/?p=23#comment-147</guid>
		<description><![CDATA[[&#8230;] significant concern with this procedure is the one I discussed before, essentially that Solomonoff induction might end up believing the simulation [&#8230;]]]></description>
		<content:encoded><![CDATA[<p>[&#8230;] significant concern with this procedure is the one I discussed before, essentially that Solomonoff induction might end up believing the simulation [&#8230;]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Solomonoff Induction and Simulations &#171; Ordinary Ideas</title>
		<link>http://ordinaryideas.wordpress.com/2011/12/15/hazards/#comment-69</link>
		<dc:creator><![CDATA[Solomonoff Induction and Simulations &#171; Ordinary Ideas]]></dc:creator>
		<pubDate>Thu, 24 May 2012 16:56:34 +0000</pubDate>
		<guid isPermaLink="false">http://ordinaryideas.wordpress.com/?p=23#comment-69</guid>
		<description><![CDATA[[...] desired information, and then applying Solomonoff induction to pinpoint that continuation. In some earlier posts I have written about the following objection: Solomonoff induction applied to sequence of [...]]]></description>
		<content:encoded><![CDATA[<p>[&#8230;] desired information, and then applying Solomonoff induction to pinpoint that continuation. In some earlier posts I have written about the following objection: Solomonoff induction applied to sequence of [&#8230;]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: On the Difficulty of AI Boxing &#171; Ordinary Ideas</title>
		<link>http://ordinaryideas.wordpress.com/2011/12/15/hazards/#comment-45</link>
		<dc:creator><![CDATA[On the Difficulty of AI Boxing &#171; Ordinary Ideas]]></dc:creator>
		<pubDate>Fri, 27 Apr 2012 03:12:59 +0000</pubDate>
		<guid isPermaLink="false">http://ordinaryideas.wordpress.com/?p=23#comment-45</guid>
		<description><![CDATA[[...] likely to be manufactured by computationally simple agents trying to mess with our universe. (See hazards.) For example: we control one copy of this AI, and we&#8217;ll reward it if it answers our question [...]]]></description>
		<content:encoded><![CDATA[<p>[&#8230;] likely to be manufactured by computationally simple agents trying to mess with our universe. (See hazards.) For example: we control one copy of this AI, and we&#8217;ll reward it if it answers our question [&#8230;]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: jsteinhardt</title>
		<link>http://ordinaryideas.wordpress.com/2011/12/15/hazards/#comment-30</link>
		<dc:creator><![CDATA[jsteinhardt]]></dc:creator>
		<pubDate>Tue, 24 Jan 2012 06:55:09 +0000</pubDate>
		<guid isPermaLink="false">http://ordinaryideas.wordpress.com/?p=23#comment-30</guid>
		<description><![CDATA["One possibility is interference from an alternative Everett branch in which a singularity went badly."

How would interference occur on macroscopic scales? That seems like it would be extraordinarily difficult.

"Indeed, to the extent that we as humans believe that our own recollections are enough to provide substantial evidence about the world, and to the extent that we believe that the Solomonoff prior is a reasonable model for our own predictive frameworks, we must believe that our own brains (complete with our recollections) are most concisely described by modeling the universe that produced those recollections."

I'm confused by this. Could you indicate concretely how you would specify a brain more concisely by first describing the universe?

I also don't really follow the TDT part at all. Why is the TDT agent being run on the input given to the human in the box? Is it supposed to be the AI that locates a human decision theory? In that case, why would it have such an arbitrary goal, and why would it choose to take over the universe?]]></description>
		<content:encoded><![CDATA[<p>&#8220;One possibility is interference from an alternative Everett branch in which a singularity went badly.&#8221;</p>
<p>How would interference occur on macroscopic scales? That seems like it would be extraordinarily difficult.</p>
<p>&#8220;Indeed, to the extent that we as humans believe that our own recollections are enough to provide substantial evidence about the world, and to the extent that we believe that the Solomonoff prior is a reasonable model for our own predictive frameworks, we must believe that our own brains (complete with our recollections) are most concisely described by modeling the universe that produced those recollections.&#8221;</p>
<p>I&#8217;m confused by this. Could you indicate concretely how you would specify a brain more concisely by first describing the universe?</p>
<p>I also don&#8217;t really follow the TDT part at all. Why is the TDT agent being run on the input given to the human in the box? Is it supposed to be the AI that locates a human decision theory? In that case, why would it have such an arbitrary goal, and why would it choose to take over the universe?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Avoiding Simulation Warfare with Bounded Complexity Measures &#171; Ordinary Ideas</title>
		<link>http://ordinaryideas.wordpress.com/2011/12/15/hazards/#comment-5</link>
		<dc:creator><![CDATA[Avoiding Simulation Warfare with Bounded Complexity Measures &#171; Ordinary Ideas]]></dc:creator>
		<pubDate>Thu, 22 Dec 2011 01:36:15 +0000</pubDate>
		<guid isPermaLink="false">http://ordinaryideas.wordpress.com/?p=23#comment-5</guid>
		<description><![CDATA[[...] some decisions and conditioning the universal prior on agreement with those decisions (see here). I have argued that the behavior of the result on new decisions is going to be dominated by the winner of a [...]]]></description>
		<content:encoded><![CDATA[<p>[&#8230;] some decisions and conditioning the universal prior on agreement with those decisions (see here). I have argued that the behavior of the result on new decisions is going to be dominated by the winner of a [&#8230;]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Formal Instructions &#171; Ordinary Ideas</title>
		<link>http://ordinaryideas.wordpress.com/2011/12/15/hazards/#comment-2</link>
		<dc:creator><![CDATA[Formal Instructions &#171; Ordinary Ideas]]></dc:creator>
		<pubDate>Thu, 22 Dec 2011 01:00:08 +0000</pubDate>
		<guid isPermaLink="false">http://ordinaryideas.wordpress.com/?p=23#comment-2</guid>
		<description><![CDATA[[...] We&#8217;ve maybe gotten some leverage on the first parts (though right now the difficulties here loom pretty large), which involve precisely defining certain concepts for an AI, but it isn&#8217;t [...]]]></description>
		<content:encoded><![CDATA[<p>[&#8230;] We&#8217;ve maybe gotten some leverage on the first parts (though right now the difficulties here loom pretty large), which involve precisely defining certain concepts for an AI, but it isn&#8217;t [&#8230;]</p>
]]></content:encoded>
	</item>
</channel>
</rss>
