<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:georss="http://www.georss.org/georss" xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#" xmlns:media="http://search.yahoo.com/mrss/"
	>
<channel>
	<title>Comments on: A formalization of indirect normativity</title>
	<atom:link href="https://ordinaryideas.wordpress.com/2012/04/21/indirect-normativity-write-up/feed/" rel="self" type="application/rss+xml" />
	<link>https://ordinaryideas.wordpress.com/2012/04/21/indirect-normativity-write-up/</link>
	<description>As advertised</description>
	<lastBuildDate>Wed, 17 Dec 2014 04:51:36 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.com/</generator>
	<item>
		<title>By: Approval-seeking &#124; Ordinary Ideas</title>
		<link>https://ordinaryideas.wordpress.com/2012/04/21/indirect-normativity-write-up/#comment-133</link>
		<dc:creator><![CDATA[Approval-seeking &#124; Ordinary Ideas]]></dc:creator>
		<pubDate>Mon, 21 Jul 2014 23:08:29 +0000</pubDate>
		<guid isPermaLink="false">http://ordinaryideas.wordpress.com/?p=121#comment-133</guid>
		<description><![CDATA[[&#8230;] described this proposal in a previous post; however, that post focused on technical details, and presented an implausible but extremely [&#8230;]]]></description>
		<content:encoded><![CDATA[<p>[&#8230;] described this proposal in a previous post; however, that post focused on technical details, and presented an implausible but extremely [&#8230;]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Alexander Kruel &#183; Acausal wireheading?</title>
		<link>https://ordinaryideas.wordpress.com/2012/04/21/indirect-normativity-write-up/#comment-114</link>
		<dc:creator><![CDATA[Alexander Kruel &#183; Acausal wireheading?]]></dc:creator>
		<pubDate>Thu, 25 Jul 2013 10:43:52 +0000</pubDate>
		<guid isPermaLink="false">http://ordinaryideas.wordpress.com/?p=121#comment-114</guid>
		<description><![CDATA[[&#8230;] Indirect Normativity [&#8230;]]]></description>
		<content:encoded><![CDATA[<p>[&#8230;] Indirect Normativity [&#8230;]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Solomonoff Induction and Simulations &#171; Ordinary Ideas</title>
		<link>https://ordinaryideas.wordpress.com/2012/04/21/indirect-normativity-write-up/#comment-70</link>
		<dc:creator><![CDATA[Solomonoff Induction and Simulations &#171; Ordinary Ideas]]></dc:creator>
		<pubDate>Thu, 24 May 2012 16:56:37 +0000</pubDate>
		<guid isPermaLink="false">http://ordinaryideas.wordpress.com/?p=121#comment-70</guid>
		<description><![CDATA[[...] and then applying Solomonoff induction to pinpoint that continuation. In some earlier posts I have written about the following objection: Solomonoff induction applied to a sequence of inputs [...]]]></description>
		<content:encoded><![CDATA[<p>[&#8230;] and then applying Solomonoff induction to pinpoint that continuation. In some earlier posts I have written about the following objection: Solomonoff induction applied to a sequence of inputs [&#8230;]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: paulfchristiano</title>
		<link>https://ordinaryideas.wordpress.com/2012/04/21/indirect-normativity-write-up/#comment-56</link>
		<dc:creator><![CDATA[paulfchristiano]]></dc:creator>
		<pubDate>Tue, 08 May 2012 07:15:16 +0000</pubDate>
		<guid isPermaLink="false">http://ordinaryideas.wordpress.com/?p=121#comment-56</guid>
		<description><![CDATA[The emulation in this scenario never gets run--we are just trying to get a mathematical handle on it, to describe an ideal which can be practically approximated or reasoned about. Note that the definition of the emulation also involves an astronomically expensive brute force search.]]></description>
		<content:encoded><![CDATA[<p>The emulation in this scenario never gets run&#8211;we are just trying to get a mathematical handle on it, to describe an ideal which can be practically approximated or reasoned about. Note that the definition of the emulation also involves an astronomically expensive brute force search.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: minddll</title>
		<link>https://ordinaryideas.wordpress.com/2012/04/21/indirect-normativity-write-up/#comment-55</link>
		<dc:creator><![CDATA[minddll]]></dc:creator>
		<pubDate>Tue, 08 May 2012 06:48:40 +0000</pubDate>
		<guid isPermaLink="false">http://ordinaryideas.wordpress.com/?p=121#comment-55</guid>
		<description><![CDATA[I find it unlikely that one could practically emulate a brain on an individual (as opposed to species-scale) level given only the genome and environmental input data with the technology available in the foreseeable future.

Theoretically it should be possible to construct a generic species-level brain emulation from the genome alone and then populate it with the individual&#039;s experiential data, thus effectively recreating said individual, yet in my opinion it would be much more economical to look for efficient representations of the functional and structural connectome, acquired directly by some appropriate means from the actual brain in question.]]></description>
		<content:encoded><![CDATA[<p>I find it unlikely that one could practically emulate a brain on an individual (as opposed to species-scale) level given only the genome and environmental input data with the technology available in the foreseeable future.</p>
<p>Theoretically it should be possible to construct a generic species-level brain emulation from the genome alone and then populate it with the individual&#8217;s experiential data, thus effectively recreating said individual, yet in my opinion it would be much more economical to look for efficient representations of the functional and structural connectome, acquired directly by some appropriate means from the actual brain in question.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: paulfchristiano</title>
		<link>https://ordinaryideas.wordpress.com/2012/04/21/indirect-normativity-write-up/#comment-54</link>
		<dc:creator><![CDATA[paulfchristiano]]></dc:creator>
		<pubDate>Mon, 07 May 2012 23:54:04 +0000</pubDate>
		<guid isPermaLink="false">http://ordinaryideas.wordpress.com/?p=121#comment-54</guid>
		<description><![CDATA[For example, a description of the physical dynamics during development and learning, a compressed version of the relevant parts of the genome, and a summary of all of the environmental inputs the human has been exposed to (the observation about rate of memory formation suggests that you may only need a few bits per second to capture the long-term effects of experience). 

Of course, you could also just directly compress the connectome plus whatever extra quantitative detail is needed (even using a very crude compression scheme), and this would probably also be &quot;much more compact&quot;; the above argument just suggests something about how much simpler it should get.]]></description>
		<content:encoded><![CDATA[<p>For example, a description of the physical dynamics during development and learning, a compressed version of the relevant parts of the genome, and a summary of all of the environmental inputs the human has been exposed to (the observation about rate of memory formation suggests that you may only need a few bits per second to capture the long-term effects of experience). </p>
<p>Of course, you could also just directly compress the connectome plus whatever extra quantitative detail is needed (even using a very crude compression scheme), and this would probably also be &#8220;much more compact&#8221;; the above argument just suggests something about how much simpler it should get.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Incomprehensible Utility Functions &#171; Ordinary Ideas</title>
		<link>https://ordinaryideas.wordpress.com/2012/04/21/indirect-normativity-write-up/#comment-53</link>
		<dc:creator><![CDATA[Incomprehensible Utility Functions &#171; Ordinary Ideas]]></dc:creator>
		<pubDate>Mon, 07 May 2012 23:35:00 +0000</pubDate>
		<guid isPermaLink="false">http://ordinaryideas.wordpress.com/?p=121#comment-53</guid>
		<description><![CDATA[[...] my post on Indirect Normativity, I describe the definition of a very complicated utility function U. A U-maximizer may be able to [...]]]></description>
		<content:encoded><![CDATA[<p>[&#8230;] my post on Indirect Normativity, I describe the definition of a very complicated utility function U. A U-maximizer may be able to [&#8230;]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Noetic Jun</title>
		<link>https://ordinaryideas.wordpress.com/2012/04/21/indirect-normativity-write-up/#comment-51</link>
		<dc:creator><![CDATA[Noetic Jun]]></dc:creator>
		<pubDate>Mon, 30 Apr 2012 00:08:52 +0000</pubDate>
		<guid isPermaLink="false">http://ordinaryideas.wordpress.com/?p=121#comment-51</guid>
		<description><![CDATA[Can you give a rough description of what this &quot;much more compact specification&quot; of an individual human brain would look like?]]></description>
		<content:encoded><![CDATA[<p>Can you give a rough description of what this &#8220;much more compact specification&#8221; of an individual human brain would look like?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Wei Dai</title>
		<link>https://ordinaryideas.wordpress.com/2012/04/21/indirect-normativity-write-up/#comment-50</link>
		<dc:creator><![CDATA[Wei Dai]]></dc:creator>
		<pubDate>Sun, 29 Apr 2012 12:58:12 +0000</pubDate>
		<guid isPermaLink="false">http://ordinaryideas.wordpress.com/?p=121#comment-50</guid>
		<description><![CDATA[I don&#039;t see how a UDT agent would be able to quickly arrive at either of these beliefs starting from a formal definition of U. Do you think otherwise, or is the idea to supplement the formal definition with some hints to help out the AI?]]></description>
		<content:encoded><![CDATA[<p>I don&#8217;t see how a UDT agent would be able to quickly arrive at either of these beliefs starting from a formal definition of U. Do you think otherwise, or is the idea to supplement the formal definition with some hints to help out the AI?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: paulfchristiano</title>
		<link>https://ordinaryideas.wordpress.com/2012/04/21/indirect-normativity-write-up/#comment-48</link>
		<dc:creator><![CDATA[paulfchristiano]]></dc:creator>
		<pubDate>Fri, 27 Apr 2012 23:26:15 +0000</pubDate>
		<guid isPermaLink="false">http://ordinaryideas.wordpress.com/?p=121#comment-48</guid>
		<description><![CDATA[I agree that this is a tricky question; I think making an appropriate response is worthwhile, and will take more than a comment. 

Basically: if you believe that your environment is computationally simple, then you can look out at the environment and see humans, deduce that they are computationally simple, observe that they have basically the correct I/O behavior, and then start to predict the features of U by observing and interacting with humans (or experimenting on them). Conversely, if you believe that U is likely to reflect something like human volition, then you can quickly conclude that the instrumentally correct thing to do is to care about versions of yourself with lots of measure under the universal prior. But if you neither know anything about U, nor think your environment is computationally simple, the situation seems a little more complicated.]]></description>
		<content:encoded><![CDATA[<p>I agree that this is a tricky question; I think making an appropriate response is worthwhile, and will take more than a comment. </p>
<p>Basically: if you believe that your environment is computationally simple, then you can look out at the environment and see humans, deduce that they are computationally simple, observe that they have basically the correct I/O behavior, and then start to predict the features of U by observing and interacting with humans (or experimenting on them). Conversely, if you believe that U is likely to reflect something like human volition, then you can quickly conclude that the instrumentally correct thing to do is to care about versions of yourself with lots of measure under the universal prior. But if you neither know anything about U, nor think your environment is computationally simple, the situation seems a little more complicated.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
