Suppose that I try to define a human’s decision process by observing some decisions and conditioning the universal prior on agreement with those decisions (see here). I have argued that the behavior of the result on new decisions is going to be dominated by the winner of a certain simulation arms race: most of the mass of the universal prior will be spread among simulations run throughout our universe, and whoever controls those simulations determines the resulting posterior.
This state of affairs is pretty much terrible, and the obstruction seems bad enough that we should never try to specify anything formally by conditioning the universal distribution and hoping to get a pointer to our universe.
The problem is that describing an event as “that thing in the universe which starts off looking like this…” is dangerous. Instead, we would really like to specify a human’s decision process by saying “you arrange all of these atoms like so, and then you apply these laws of physics…” (This would also deal with the problems with specifying counterfactuals, although those seem to be less severe.)
This is likely to lead to a much larger description, but one which can’t be nearly so easily controlled by a simulator. Fortunately, though the resulting description is longer, it is still a very good compression of the human’s behavior, and it also has the virtue of being computationally inexpensive. So we may hope to pinpoint this description by using a prior which weights explanations not by Kolmogorov complexity, but by some different complexity measure which favors computationally inexpensive simulations.
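One family of measures with this flavor penalizes a program for its runtime as well as its length, in the spirit of Levin-style complexity: the cost of a hypothesis is its description length plus the log of its running time, so each doubling of runtime costs one extra bit. The sketch below illustrates how such a measure can flip the ranking between a short-but-expensive simulation and a long-but-cheap direct description. The specific bit and runtime counts are hypothetical stand-ins chosen for illustration, not measurements of anything.

```python
import math  # not strictly needed here; costs are kept in log space

def kolmogorov_cost(length_bits):
    # Plain universal-prior-style cost in bits: just |p|, the program
    # length. Runtime is ignored entirely.
    return length_bits

def levin_cost(length_bits, log2_runtime_steps):
    # Runtime-penalized cost: |p| + log2(t). Lower cost corresponds to
    # higher prior weight 2^(-cost); each doubling of runtime costs one
    # extra bit of "description".
    return length_bits + log2_runtime_steps

# Hypothetical numbers: a short program that locates the human by
# simulating an enormous universe, vs. a much longer program that
# specifies the atoms and physics directly and runs cheaply.
short_expensive = (100, 2000)   # 100-bit program, 2^2000 steps
long_cheap      = (1000, 40)    # 1000-bit program, 2^40 steps

# Under the plain universal prior, the short simulation dominates...
assert kolmogorov_cost(short_expensive[0]) < kolmogorov_cost(long_cheap[0])
# ...but under the runtime-penalized measure, the direct description wins.
assert levin_cost(*short_expensive) > levin_cost(*long_cheap)
print(levin_cost(*short_expensive), levin_cost(*long_cheap))
```

The point is only directional: once runtime is charged for, a description that pays more bits up front to avoid simulating everything can come out ahead of one that is short but astronomically expensive to run.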