Driving fast in the counterfactual loop

An allegory

Consider a human controlling a very fast car on a busy street using counterfactual oversight.

The car is perfectly capable of driving safely. But what happens in the 1% of cases where the car decides to pause and ask the human either to review its proposed behavior or to suggest an action?

Without some further precautions, the car is liable to immediately crash, and so the human won’t be able to provide any useful oversight at all. And that means that in the 99% of cases where the car doesn’t ask the human for feedback, it won’t do anything useful either: its behavior in those cases is shaped entirely by the feedback it would have received, and that feedback is worthless if asking always ends in a crash.
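A toy model makes the incentive problem concrete. This is a minimal sketch under my own assumptions; `QUERY_PROB`, `overseer_score`, and `training_signal` are illustrative names, not part of any real system:

```python
import random

QUERY_PROB = 0.01  # the "1% of cases" from the allegory

def overseer_score(action, crashed_while_paused):
    """The overseer's feedback on a proposed action."""
    if crashed_while_paused:
        # The car crashed before the review finished: no useful signal.
        return 0.0
    return 1.0 if action == "drive_safely" else 0.0

def training_signal(action, world_safe_while_paused):
    """Feedback the car receives on one step, or None if unqueried."""
    if random.random() < QUERY_PROB:
        # The car pauses for review; pausing on a busy street causes a
        # crash unless something else keeps the car safe meanwhile.
        return overseer_score(action, crashed_while_paused=not world_safe_while_paused)
    # In the other 99% of cases the car gets no feedback at all; it is
    # trained to act as if it had been queried.
    return None
```

The point of the sketch is just that the unqueried 99% of behavior inherits whatever the queried episodes reward, and when `world_safe_while_paused` is false, every queried episode rewards nothing.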

Foreseeing this outcome, the human may install a backup system to drive the car while the first system is suspended. Unfortunately this doesn’t fix the problem. If the first system pauses, then the backup can spring into action. But the backup is counterfactually supervised too, and if it also pauses, then the car will crash. And so the second system won’t do anything useful if the first system pauses. And so the first system won’t do anything useful.
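The induction here can be written out as a tiny recursion. This is a hypothetical sketch, not anything from the original argument beyond its structure:

```python
def chain_is_useful(n_backups):
    """Whether a chain of counterfactually supervised drivers can keep
    the car safe when the systems above it pause for review."""
    if n_backups == 0:
        # Nothing left to drive the car while everything is paused.
        return False
    # Each system is only useful if the chain below it can keep the
    # car safe during its own pauses.
    return chain_is_useful(n_backups - 1)

print(chain_is_useful(5))  # False: no finite chain of backups helps
```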

As far as I can tell, no collection of counterfactually supervised systems can drive a car that contains the overseer.

Of course that’s not a big problem. The overseer just shouldn’t be in the car. If we can’t arrange that, then we should find some other way to control the car.

Acting immediately

One solution would be for the car to always act on the basis of the feedback it expects to receive, even while it is waiting on that feedback.

This leads to underdetermined behavior. The car could consistently reason: “if I don’t crash, then the human will tell me not to crash.” But it could just as well reason: “if I do crash, then the human won’t tell me anything.”
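To see the underdetermination concretely, here is a toy fixed-point check; `expected_feedback` and `is_equilibrium` are names I am inventing for illustration:

```python
def expected_feedback(action):
    # The overseer only gives feedback if the car hasn't crashed.
    if action == "crash":
        return None          # no feedback arrives after a crash
    return "do_not_crash"    # feedback endorses safe driving

def is_equilibrium(action):
    """An action is an equilibrium if acting on the feedback you would
    expect, given that action, reproduces the same action."""
    fb = expected_feedback(action)
    if fb == "do_not_crash":
        return action == "drive_safely"
    # With no feedback expected, any action is consistent with the
    # (absent) feedback, including crashing.
    return True

print(is_equilibrium("drive_safely"))  # True
print(is_equilibrium("crash"))         # True
```

Both calls return True: safe driving and crashing are each consistent with the feedback they themselves bring about, which is exactly the underdetermination described above.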

Which equilibrium is chosen depends on the details of the situation. We could take precautions to try to make sure that the right equilibrium is chosen. But at the end of the day we want to build systems that robustly do the right thing. From that perspective, I would judge this solution as “profoundly unsatisfying.”

Really, you don’t want the overseer in the car. Acting immediately avoids the part where you crash 1% of the time, but it doesn’t avoid the instability caused by having the overseer in the car.

A less allegorical allegory

Consider ten billion humans, controlling trillions of robots via counterfactual oversight.

These robots are doing very complex tasks, and the world is moving much faster than an unaided human could hope to understand. The only way that the humans can fend for themselves is by relying on AI assistants — and the only way that they can provide meaningful oversight of those systems is by getting help from still more AI assistants.

In this world there are likely to be some fully autonomous superhuman systems; if the humans didn’t have access to helpful AI assistants, these fully autonomous systems would likely take over. And even without this adversarial dynamic, the humans would likely be in serious trouble if they found themselves alone in a highly automated world.

This situation closely resembles the allegory of the human driving fast, and I suspect the outcome would not be much better.

If all of the counterfactually supervised robots simultaneously decided to ask for feedback, the humans would be up a creek without a paddle. They would be asked to evaluate a bunch of complex decisions before any AI system could do anything to help them. They would try to turn to AI assistants, but in response they would just be asked to evaluate slightly less complex decisions… This explicit bootstrapping might take many iterations before it arrived at decisions so simple that humans could evaluate them without assistance.

In the meantime, the world would continue moving at a breakneck pace that the humans are not equipped to deal with. With normal infrastructure crippled, fully autonomous systems may be able to steal massive amounts of hardware, and to replace the overseers of many counterfactually overseen systems. The threat of seizing control would itself distort the behavior of many of these systems (since they now expect that, when the time comes, their oversight might be provided by an attacker rather than by the current overseer).

Human overseers would be forced to scramble, making rapid decisions and reducing their reliance on increasingly unreliable AI assistants. As a result, the performance of systems under human control would deteriorate, and they would be increasingly unable to help humans keep up even when they did operate. All of these problems would feed on each other, leading to general chaos and instability.

Faced with this hypothetical, the entire system of counterfactually overseen robots could fall apart, just as the counterfactually supervised car was unable to avoid crashing. Only this time there is no one else to provide oversight, because the entire world is caught up in the snafu.

In order for things to go well, the world needs to be basically OK even if all of the counterfactually supervised robots decided to take the day off at once. If it’s not, then “functioning society” is at best a metastable equilibrium. But if it is, then counterfactual oversight seems to be at best a supplement to however we solve the control problem for fully autonomous machines.

Upshot

This scenario is somewhat far-fetched, and there are many practical remedies that could probably avoid the colorful catastrophe described above. But I think these concerns illustrate some instability and brittleness inherent in counterfactual oversight, and I think that would be a real problem.

The problem is entirely due to the peculiar structure of counterfactual oversight. A more traditional approach, in which we do engineering and training in advance, would completely avoid it. But from a scalability perspective the traditional approach is incomplete/underspecified, and it’s not clear that we can fill in these details without something like counterfactual oversight.

Fortunately, I think that we can get the best of both worlds, and I think that doing it right may help resolve many other concerns about robustness (including this other exotic failure mode). I’ll write about this very soon.
