<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:georss="http://www.georss.org/georss"
	xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
	xmlns:media="http://search.yahoo.com/mrss/"
	>
<channel>
	<title>Comments on: Specifying Humans Formally (Using an Oracle for Physics)</title>
	<atom:link href="http://ordinaryideas.wordpress.com/2011/12/14/specifying-humans-formally-using-an-oracle-for-physics/feed/" rel="self" type="application/rss+xml" />
	<link>http://ordinaryideas.wordpress.com/2011/12/14/specifying-humans-formally-using-an-oracle-for-physics/</link>
	<description>As advertised</description>
	<lastBuildDate>Wed, 17 Dec 2014 04:51:36 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.com/</generator>
	<item>
		<title>By: Specifying a human precisely (reprise) &#124; Ordinary Ideas</title>
		<link>http://ordinaryideas.wordpress.com/2011/12/14/specifying-humans-formally-using-an-oracle-for-physics/#comment-145</link>
		<dc:creator><![CDATA[Specifying a human precisely (reprise) &#124; Ordinary Ideas]]></dc:creator>
		<pubDate>Sun, 24 Aug 2014 21:43:32 +0000</pubDate>
		<guid isPermaLink="false">http://ordinaryideas.wordpress.com/?p=15#comment-145</guid>
		<description><![CDATA[[&#8230;] have written about this issue in the past; in this post I&#8217;ll outline a slightly improved scheme (now with 100% fewer Faraday [&#8230;]]]></description>
		<content:encoded><![CDATA[<p>[&#8230;] have written about this issue in the past; in this post I&#8217;ll outline a slightly improved scheme (now with 100% fewer Faraday [&#8230;]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Solomonoff Induction and Simulations &#171; Ordinary Ideas</title>
		<link>http://ordinaryideas.wordpress.com/2011/12/14/specifying-humans-formally-using-an-oracle-for-physics/#comment-67</link>
		<dc:creator><![CDATA[Solomonoff Induction and Simulations &#171; Ordinary Ideas]]></dc:creator>
		<pubDate>Thu, 24 May 2012 16:56:29 +0000</pubDate>
		<guid isPermaLink="false">http://ordinaryideas.wordpress.com/?p=15#comment-67</guid>
		<description><![CDATA[[...] want to talk about a counterfactual world, or to talk about the behavior of a system which we think reflects our values, or to express our intentions by reference to a model of our own behavior. One way we might do this [...]]]></description>
		<content:encoded><![CDATA[<p>[&#8230;] want to talk about a counterfactual world, or to talk about the behavior of a system which we think reflects our values, or to express our intentions by reference to a model of our own behavior. One way we might do this [&#8230;]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: paulfchristiano</title>
		<link>http://ordinaryideas.wordpress.com/2011/12/14/specifying-humans-formally-using-an-oracle-for-physics/#comment-7</link>
		<dc:creator><![CDATA[paulfchristiano]]></dc:creator>
		<pubDate>Thu, 22 Dec 2011 07:17:38 +0000</pubDate>
		<guid isPermaLink="false">http://ordinaryideas.wordpress.com/?p=15#comment-7</guid>
		<description><![CDATA[The goal is to get access to the function &quot;What this human in this box would say, if you presented it with stimulus X.&quot; No idealization, and no concern for how the function is implemented. In particular, the AI may not even have to use the function itself--it may reason about the function&#039;s behavior, perhaps even by observing the behavior of humans. In this case the mathematical definition is just a handle to get access to a particular concept in the AI&#039;s ontology.

A separate issue is &quot;Given a thing that responds to stimuli in this way, try to extract some abstract features of interest.&quot;]]></description>
		<content:encoded><![CDATA[<p>The goal is to get access to the function &#8220;What this human in this box would say, if you presented it with stimulus X.&#8221; No idealization, and no concern for how the function is implemented. In particular, the AI may not even have to use the function itself&#8211;it may reason about the function&#8217;s behavior, perhaps even by observing the behavior of humans. In this case the mathematical definition is just a handle to get access to a particular concept in the AI&#8217;s ontology.</p>
<p>A separate issue is &#8220;Given a thing that responds to stimuli in this way, try to extract some abstract features of interest.&#8221;</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Mitchell Porter</title>
		<link>http://ordinaryideas.wordpress.com/2011/12/14/specifying-humans-formally-using-an-oracle-for-physics/#comment-6</link>
		<dc:creator><![CDATA[Mitchell Porter]]></dc:creator>
		<pubDate>Thu, 22 Dec 2011 05:22:42 +0000</pubDate>
		<guid isPermaLink="false">http://ordinaryideas.wordpress.com/?p=15#comment-6</guid>
		<description><![CDATA[This is an important and interesting topic (for people concerned with Friendly AI). However, there are definitely issues with the procedure you suggest. 

A human in a box, communicating through a keyboard, is going to offer a strongly biased picture of how the human brain works. The use of heuristics by the human decision procedure is undoubtedly context-dependent, and this is a highly artificial context. Important chunks of the human utility function may never be called upon, and so may be overlooked by a simulation. 

Also, if we are trying to extract a generic human utility function, relying on one person may lead to overfitting. What we get here is surely much, much more complicated than what we ultimately want, if the aim is to extract a human-friendly value system. We don&#039;t want an AI to believe that all its choices need to be measured against the preferences of someone who was stuck in a box once, possibly having a bad day, and whose brain was full of all sorts of idiosyncrasies and irrelevant microphysical complexities.

So ultimately you *do* have to solve the arcane problems like identifying the &quot;internal mechanism our brains use for giving our intentions control over motor function&quot;. Or perhaps you personally don&#039;t have to solve them, but your algorithm for determining the nature of the human decision procedure will be performing an equivalent analysis. &quot;Overfitting&quot; is a nice familiar way to pose the question: In inferring the human decision procedure, by observing and simulating human beings, how do we avoid overfitting? Answering *that* question should take you a long way. 

If the objective is just to simulate a particular human being... again, a microphysically exact physical simulation would not be the simplest way to simulate a person. It might be one of the simplest functions to ostensively *specify* (&quot;simulate what&#039;s happening in that box&quot;), but it would be full of unnecessary details about ions. 

On a practical level, an efficient way to develop a whole-brain model may involve repeated high-resolution fMRI, and the progressive development of finite-state machine models for the voxels, in the context of interactive experiments. That is, you&#039;ll be modeling the brain as a lattice of finite state machines coupled to their neighbors. You get the data from the fMRI, and there will be a protocol of interaction with the subject designed to reveal the dynamics in ever-greater detail, until diminishing returns set in.]]></description>
		<content:encoded><![CDATA[<p>This is an important and interesting topic (for people concerned with Friendly AI). However, there are definitely issues with the procedure you suggest. </p>
<p>A human in a box, communicating through a keyboard, is going to offer a strongly biased picture of how the human brain works. The use of heuristics by the human decision procedure is undoubtedly context-dependent, and this is a highly artificial context. Important chunks of the human utility function may never be called upon, and so may be overlooked by a simulation. </p>
<p>Also, if we are trying to extract a generic human utility function, relying on one person may lead to overfitting. What we get here is surely much, much more complicated than what we ultimately want, if the aim is to extract a human-friendly value system. We don&#8217;t want an AI to believe that all its choices need to be measured against the preferences of someone who was stuck in a box once, possibly having a bad day, and whose brain was full of all sorts of idiosyncrasies and irrelevant microphysical complexities.</p>
<p>So ultimately you *do* have to solve the arcane problems like identifying the &#8220;internal mechanism our brains use for giving our intentions control over motor function&#8221;. Or perhaps you personally don&#8217;t have to solve them, but your algorithm for determining the nature of the human decision procedure will be performing an equivalent analysis. &#8220;Overfitting&#8221; is a nice familiar way to pose the question: In inferring the human decision procedure, by observing and simulating human beings, how do we avoid overfitting? Answering *that* question should take you a long way. </p>
<p>If the objective is just to simulate a particular human being&#8230; again, a microphysically exact physical simulation would not be the simplest way to simulate a person. It might be one of the simplest functions to ostensively *specify* (&#8220;simulate what&#8217;s happening in that box&#8221;), but it would be full of unnecessary details about ions. </p>
<p>On a practical level, an efficient way to develop a whole-brain model may involve repeated high-resolution fMRI, and the progressive development of finite-state machine models for the voxels, in the context of interactive experiments. That is, you&#8217;ll be modeling the brain as a lattice of finite state machines coupled to their neighbors. You get the data from the fMRI, and there will be a protocol of interaction with the subject designed to reveal the dynamics in ever-greater detail, until diminishing returns set in.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Avoiding Simulation Warfare with Bounded Complexity Measures &#171; Ordinary Ideas</title>
		<link>http://ordinaryideas.wordpress.com/2011/12/14/specifying-humans-formally-using-an-oracle-for-physics/#comment-4</link>
		<dc:creator><![CDATA[Avoiding Simulation Warfare with Bounded Complexity Measures &#171; Ordinary Ideas]]></dc:creator>
		<pubDate>Thu, 22 Dec 2011 01:36:12 +0000</pubDate>
		<guid isPermaLink="false">http://ordinaryideas.wordpress.com/?p=15#comment-4</guid>
		<description><![CDATA[[...] some decisions and conditioning the universal prior on agreement with those decisions (see here). I have argued that the behavior of the result on new decisions is going to be dominated by the [...]]]></description>
		<content:encoded><![CDATA[<p>[&#8230;] some decisions and conditioning the universal prior on agreement with those decisions (see here). I have argued that the behavior of the result on new decisions is going to be dominated by the [&#8230;]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Hazards &#171; Ordinary Ideas</title>
		<link>http://ordinaryideas.wordpress.com/2011/12/14/specifying-humans-formally-using-an-oracle-for-physics/#comment-3</link>
		<dc:creator><![CDATA[Hazards &#171; Ordinary Ideas]]></dc:creator>
		<pubDate>Thu, 22 Dec 2011 01:15:33 +0000</pubDate>
		<guid isPermaLink="false">http://ordinaryideas.wordpress.com/?p=15#comment-3</guid>
		<description><![CDATA[[...] have described a candidate scheme for mathematically pinpointing the human decision process, by conditioning the [...]]]></description>
		<content:encoded><![CDATA[<p>[&#8230;] have described a candidate scheme for mathematically pinpointing the human decision process, by conditioning the [&#8230;]</p>
]]></content:encoded>
	</item>
</channel>
</rss>
