Confronting Gödelian difficulties (reprise)

My current attitude towards the Löbian obstacle might be best described as “just live with it.” In this post I’ll outline this view briefly, and try to communicate the underlying intuitions.
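
For readers who want the obstacle stated rather than gestured at, here is its standard form (the textbook statement of Löb’s theorem and its usual consequence, not anything quoted from the post):

    % Löb's theorem: if T proves "if T proves phi, then phi", then T proves phi.
    \[
      T \vdash \Box_T\varphi \rightarrow \varphi
      \quad\Longrightarrow\quad
      T \vdash \varphi
    \]
    % Taking phi to be a contradiction: a consistent T cannot prove its own
    % soundness (or even its own consistency), so an agent reasoning in T
    % cannot fully trust a successor that also reasons in T.
    \[
      T \vdash \Box_T\bot \rightarrow \bot
      \quad\Longrightarrow\quad
      T \vdash \bot
    \]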

To show I’m being a good sport about it, I’ll also provide a new candidate reflection principle. But I’m not going to hang my hat on it. At the end of the day, my best guess for the right answer remains “live with it,” and if we can find a cute trick to avoid the problem, I think it’s gravy.

Challenges for extrapolation

My current preferred formalization of extrapolating an agent’s preferences rests on imagining what would happen if that agent were provided with an idealized environment in which it could undergo an extensive process of reflection. This is clearly not a completely satisfactory account, though it remains unclear whether it is “good enough” for the intended use case.

One crisp difficulty is the following: this approach relies completely on the agent wanting you to know its extrapolated preferences.
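
To make that dependence concrete, here is a toy sketch of the formalization (the agent, the environment, and the loop are hypothetical illustrations of mine, not machinery from the post). Note that the only output is whatever the agent chooses to report at the end:

    # Toy, self-contained sketch: simulate an agent reflecting in an idealized
    # environment, then ask it what it wants. All names are made up for illustration.

    class ToyAgent:
        def __init__(self):
            self.thoughts = []

        def act(self, observation):
            # Stand-in for arbitrary deliberation: the agent just takes notes.
            self.thoughts.append(observation)
            return "keep thinking"

        def report_preferences(self):
            # The crux: extrapolation only surfaces what the agent is willing to say.
            return {"stated preference": "whatever I decide to tell you",
                    "reflection steps used": len(self.thoughts)}

    class IdealizedEnvironment:
        """Unlimited time and safety to deliberate; nothing here pressures the agent."""
        def __init__(self):
            self.step_count = 0

        def observe(self):
            return f"resources available at step {self.step_count}"

        def step(self, action):
            self.step_count += 1

    def extrapolated_preferences(agent, reflection_steps=1000):
        env = IdealizedEnvironment()
        for _ in range(reflection_steps):
            env.step(agent.act(env.observe()))
        return agent.report_preferences()

    print(extrapolated_preferences(ToyAgent()))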

Specifying “enlightened judgment” precisely (reprise)

Suppose that I have in hand a perfect model of my decision-making process, and I am interested in using it to define what I would believe, want, or do “upon reflection.” I can use this model directly to define my current best guess at an answer, but I might also want to talk about my “enlightened judgment”: what I would conclude if I knew all of the facts, considered all of the arguments, were more the person I wish I were, and so on. Can we give a satisfactory formal definition of my enlightened judgment in terms of this literal model of my decision-making process?
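
One way to see what a formal definition would have to do (a sketch under invented names; my_model stands in for the perfect model, and the long deliberation loop is my illustration rather than the post’s proposal): the same model that gives my current best guess can be run inside an idealized, arbitrarily long deliberation, with the “enlightened” answer defined as whatever it settles on.

    # Sketch only. Pretend my_model is a faithful model of my decision-making;
    # here it is a trivial stub so the example runs.

    def my_model(transcript):
        """Given everything said so far, produce my next contribution."""
        return f"thought {len(transcript)}: refine the previous answer"

    def current_best_guess(question):
        # Use the model directly: my answer right now, with no extra deliberation.
        return my_model([question])

    def enlightened_judgment(question, deliberation_steps=10000):
        # Candidate definition: let the modeled "me" deliberate for a very long
        # time under idealized conditions, then take the final answer.
        transcript = [question]
        for _ in range(deliberation_steps):
            transcript.append(my_model(transcript))
        return transcript[-1]

    print(current_best_guess("what do I value?"))
    print(enlightened_judgment("what do I value?"))

The difficulty, of course, is saying what the idealized deliberation should look like without smuggling in the answer; the loop above only postpones that question.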

Specifying a human precisely (reprise)

Suppose I want to provide a completely precise specification of “me,” or rather of the input/output behavior that I implement. How can I do this? I might be interested in this problem, for example, because it appears to be a primary difficulty in providing a precise specification of “maximize the extent to which I would approve of your decision upon reflection.” (I have suggested that we would be happy with a powerful AI that made decisions according to this maxim.)
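
To fix ideas about what is being specified (an illustration with made-up names; the post’s actual scheme is about how to pin such a mapping down, which this sketch simply assumes away): “me” is treated as nothing more than a mapping from inputs to outputs, and the quoted maxim becomes a search for the option that this mapping, consulted about the decision, rates most highly.

    # Purely illustrative. The hard part, giving a completely precise physical
    # specification of the mapping "me", is exactly what this sketch assumes away.

    from typing import Callable, Sequence

    # A specification of "me" as input/output behavior: given everything I have
    # been shown so far, what do I say next?
    HumanSpec = Callable[[Sequence[str]], str]

    def approval(me: HumanSpec, decision: str) -> float:
        """Ask the specified human to rate a proposed decision from 0 to 10."""
        reply = me([f"Proposed decision: {decision}", "Rate it from 0 to 10."])
        digits = [int(ch) for ch in reply if ch.isdigit()]
        return float(digits[0]) if digits else 0.0

    def choose(me: HumanSpec, options: Sequence[str]) -> str:
        # "Maximize the extent to which I would approve of your decision"
        # (the "upon reflection" part is omitted in this toy).
        return max(options, key=lambda d: approval(me, d))

    # Toy stand-in for the specified human.
    def toy_me(inputs: Sequence[str]) -> str:
        return "7" if "tea" in inputs[0] else "3"

    print(choose(toy_me, ["make tea", "make coffee"]))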

I have written about this issue in the past; in this post I’ll outline a slightly improved scheme (now with 100% fewer Faraday cages). The technical changes are relatively modest, but I’m also taking a somewhat different approach to the issue, and overall it seems much more like the kind of thing that could actually be done. I also want to take the opportunity to clarify and expand the exposition, since the amount of discussion and thought this idea has received now vastly surpasses the amount of care that went into crafting the original exposition.

I welcome additional objections to this scheme. As usual, I think the literal proposal laid out here is extremely unlikely to ever be used. However, finding problems with it can still shed light on the underlying problem, and in particular on how difficult it is and where the difficulties lie.

Adversarial collaboration

Suppose that I have hired a group of employees who are much smarter than I am. For some tasks it may be easy to get useful work out of them: for example, if I am interested in finding a good layout for the components on a chip and can easily evaluate the quality of a proposed layout, I can simply solicit proposals, test them, and reward the employees according to the quality of their submissions.
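
The easy case can be written down almost directly (a toy sketch; the quality measure, the random “employees”, and the proportional payment rule are all placeholders of mine): because I can evaluate any proposed layout myself, I just score whatever comes back and pay accordingly.

    # Toy sketch of incentive-based elicitation for an easily checkable task.
    # A "proposal" is a set of component positions; quality is something I can
    # compute for myself, so evaluating the employees' work is cheap.

    import random

    def layout_quality(layout):
        """Placeholder objective: prefer layouts whose components are tightly packed."""
        xs = [x for x, _ in layout]
        ys = [y for _, y in layout]
        spread = (max(xs) - min(xs)) + (max(ys) - min(ys))
        return 1.0 / (1.0 + spread)

    def employee_proposal(rng):
        """Stand-in for a much smarter employee; here, a random placement of 8 parts."""
        return [(rng.random(), rng.random()) for _ in range(8)]

    def run_contest(num_employees=5, budget=1000.0, seed=0):
        rng = random.Random(seed)
        proposals = {f"employee {i}": employee_proposal(rng) for i in range(num_employees)}
        scores = {name: layout_quality(p) for name, p in proposals.items()}
        total = sum(scores.values())
        # Reward each employee in proportion to the measured quality of their
        # proposal, and adopt the best one.
        payments = {name: round(budget * s / total, 2) for name, s in scores.items()}
        best = max(scores, key=scores.get)
        return best, payments

    best, payments = run_contest()
    print("adopted proposal from:", best)
    print("payments:", payments)

Everything here leans on the cheap, trusted evaluation step.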

However, for some kinds of tasks there may be fundamental problems with adopting this kind of incentive-based policy. For example, suppose I am interested in working with these employees to build an AI which is not only much smarter than any of us, but which will act autonomously in support of my values even when I can’t monitor its behavior. In cases like this, I will have to try something different.