Entropy, Reversibility, and Uncomputation

Summary: Anything you can compute using an infinite amount of negentropy, time T, and space S, can be computed using only about S negentropy if you are willing to spend a little bit of extra space and time (S log T and T^1.5, for example, or S and 2^S * T). So future universes may be constrained by available space and time rather than negentropy, and the potential for computation may be exponentially larger than we would otherwise expect.

Informal Thermodynamics

Almost any reversible laws of physics appear to exhibit a “second law of thermodynamics,” to the extent that such a law has been observed by humans so far. If the laws of physics are reversible, then a set of possible worlds will evolve through time into new sets of equal size. Given nearly any ignorance about the state of a system and nearly any partition of the possible states, most worlds consistent with our original knowledge will evolve to lie in a rather large part of the partition rather than a vanishingly small one.
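
As a toy illustration of this counting argument, the sketch below (my own choices: a random permutation stands in for the reversible dynamics, popcount for the coarse-grained partition, and N = 16 bits for the state space) evolves every world consistent with a small piece of initial knowledge and checks where those worlds land. Nearly all of them end up in the large cells of the partition.

    import random

    # Toy model: the "laws of physics" are a fixed random permutation of the
    # state space {0, ..., 2^N - 1}; permutations are exactly the reversible maps.
    N = 16
    random.seed(0)
    states = list(range(2 ** N))
    law = states[:]
    random.shuffle(law)                     # law[s] = the successor of state s

    # Coarse-grain by popcount: cell i holds all states with exactly i one-bits,
    # so the cells have wildly different sizes.
    def cell(s):
        return bin(s).count("1")

    cell_size = [0] * (N + 1)
    for s in states:
        cell_size[cell(s)] += 1

    # "Our knowledge": the system starts somewhere in a small cell (popcount 1).
    worlds = [s for s in states if cell(s) == 1]

    # Evolve every world consistent with that knowledge for a few steps.
    for _ in range(5):
        worlds = [law[w] for w in worlds]

    # Most evolved worlds now sit in large cells of the partition.
    in_big = sum(1 for w in worlds if cell_size[cell(w)] > 1000)
    print(f"{in_big} of {len(worlds)} evolved worlds lie in cells with more than 1000 states")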

The notion of entropy can be introduced (again for any reversible laws) to model this phenomenon (statistical mechanics has a lot to say about this quantity under some additional assumptions about the laws, most of which won’t be relevant here). For example, for any reversible laws of physics there is a constraint on how much irreversible computation we can do: each time we try to erase the current value of a register we must necessarily introduce one bit of entropy to the environment (we have squeezed the possible worlds within the computer by a factor of 2, so must expand the possible worlds outside the computer by a factor of 2). Once there is no room in the environment for entropy, we can no longer erase the values of registers (entropy from the environment will leak into our machine) and irreversible computation becomes impossible.
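
The counting is easiest to see for a single bit. In the toy sketch below (the two-bit model is my own illustration, not anything canonical), an in-place "erase" of the register is not a bijection, so no reversible law acting on the computer alone can implement it; swapping the register into a fresh environment bit is a bijection, but it spends that bit of negentropy.

    # Joint states are pairs (register, env_bit); reversible laws must act on
    # them as bijections.
    states = [(r, e) for r in (0, 1) for e in (0, 1)]

    # Attempted in-place erasure: (register, env) -> (0, env). Two states
    # collapse onto one, so no reversible law can realize it on these bits.
    erase = {s: (0, s[1]) for s in states}
    print("erase is reversible:", len(set(erase.values())) == len(states))   # False

    # Reversible alternative: swap the register into the environment bit.
    swap = {s: (s[1], s[0]) for s in states}
    print("swap is reversible:", len(set(swap.values())) == len(states))     # True

    # If the environment bit is known to be 0 in advance (one bit of negentropy),
    # the swap leaves the register cleared, but the ejected bit now sits in the
    # environment, which can absorb only as many bits as it has fresh zeros.
    for r in (0, 1):
        print((r, 0), "->", swap[(r, 0)])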

All of this holds for general reversible systems, and I will somewhat boldly claim that observation has not suggested any stronger thermodynamic limitations on computing than those imposed by the reversibility of the laws of physics. (There are other restrictions coming from quantum gravity bounds on the maximum information density in spacetime, although here we are getting into physics I wouldn’t bet too confidently on, and those restrictions are exponentially weaker for many natural settings of the parameters.)

Reversible Computation

Under reversible laws of physics there are no thermodynamic obstructions to doing very large amounts of reversible computation.

A computer only consumes negentropy (order in the environment which can accept ejected entropy) when it tries to erase a register. What if we never erase a register? For example, whenever we would have set a register C to hold the value (A AND B), we can instead set C’ = C XOR (A AND B), where C’ is the new value and C is the old value. This operation is called a Toffoli gate. Using reversible primitive operations we can describe a theory of “universal reversible computation” which mirrors the usual theory of universal computation.
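
To make this concrete, here is the Toffoli gate written out as a few lines of Python (a toy sketch; the code is essentially just the truth table). It maps (A, B, C) to (A, B, C XOR (A AND B)), it is its own inverse, and depending on how C is initialized it computes AND or NAND without discarding anything.

    def toffoli(a, b, c):
        """Reversible controlled-controlled-NOT: flip c exactly when a and b are both 1."""
        return a, b, c ^ (a & b)

    # The gate is its own inverse, so applying it never destroys information.
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                assert toffoli(*toffoli(a, b, c)) == (a, b, c)

    print(toffoli(1, 1, 0))   # (1, 1, 1): with C = 0 the gate writes A AND B into C
    print(toffoli(1, 1, 1))   # (1, 1, 0): with C = 1 it computes NAND instead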

In the usual setting, AND and NOT gates (equivalently, NAND gates) form a basis for universal computation. Interesting laws of physics support the implementation of these gates as well as memory registers, and together with a source of negentropy which allows us to erase those memory units this is enough to construct a Turing machine (or any other universal computer).

In the reversible setting, Toffoli gates form the basis for universal reversible computation. Interesting reversible laws of physics support the implementation of Toffoli gates and memory registers, and this is enough to construct a reversible Turing machine (or any other universal reversible computer) even without a source of negentropy.

The reversible model is interesting only from a complexity-theoretic standpoint. Of course the reversible computer will also require some initially zero workspace, and this is best thought of as its own source of negentropy. The difference is that an irreversible computer is allowed to use a fresh batch of negentropy at each computational step, while a reversible computer has an initial endowment of negentropy.

Reversible Simulations of Irreversible Computation

Consider a human brain. Suppose its state can be described in S bits and can be updated in discrete steps. Suppose each update takes a constant amount of time per bit of state, and the bits can be computed in parallel. Simulating such a brain for T steps generally requires producing S * T bits of entropy: after each step the old state must be overwritten, producing S more bits of entropy. The same process can be implemented on a reversible computer, but it now requires S * T space: there is no way to simply discard the intermediate scratchwork reversibly, and so the entire history of the brain must be preserved. (This is what nature does in our experience: our entire brain’s history is preserved, though obfuscated, in the heat given off by our brains.)
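
The two options look like this in Python (a toy model: the eight-bit step rule below is an arbitrary reversible stand-in for the brain’s update function). The irreversible simulator overwrites its S-bit state each step and so hands S bits of entropy to the environment; the naive reversible simulator keeps every old state and so fills up S * T bits of memory.

    def simulate_irreversible(state, step, T):
        """Overwrite the state each step: constant space, S bits of entropy per step."""
        entropy_bits = 0
        for _ in range(T):
            state = step(state)            # the old state is destroyed...
            entropy_bits += len(state)     # ...so its S bits must be ejected as heat
        return state, entropy_bits

    def simulate_reversible_naive(state, step, T):
        """Never overwrite anything: no entropy produced, but S * T bits of space used."""
        history = [state]
        for _ in range(T):
            history.append(step(history[-1]))
        return history[-1], history        # the entire history is still sitting in memory

    # Toy "brain": a tuple of S = 8 bits, updated by a simple reversible rule.
    step = lambda s: (s[-1] ^ 1,) + s[:-1]
    final, entropy = simulate_irreversible((0,) * 8, step, 100)
    final_r, history = simulate_reversible_naive((0,) * 8, step, 100)
    assert final == final_r
    print(entropy, "bits of entropy produced vs", 8 * len(history), "bits of history retained")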

If we try to run a brain for 10^10000 steps, we will find that the universe doesn’t have enough negentropy to support the computation. A different way of looking at this is that the universe is reversible, but it doesn’t have 10^10000 bits in which to store such a long history.

Fortunately, there is a way around this problem called “uncomputing,” which can allow us to simulate a brain reversibly without storing the entire history. Here is an analogy in terms of Maxwell’s demon which motivated me to do the complexity theory that follows.

Maxwell’s Demon and Uncomputation

Consider a box with particles of two types, A and B. The box is divided into two sections, left and right, separated by a wall with a small door which can be effortlessly opened or closed by an onlooking demon.

The box starts in some highly ordered state, say with the particles arranged in a neat lattice, all of the A’s on the left side and all of the B’s on the right, and is left to evolve. Naturally, the box quickly enters a very disordered state, in which the A’s and B’s are mixed evenly between the two sections. At this point the demon enters the scene, and wishes to restore order, returning all A’s to the left side and all B’s to the right side. He hopes that by doing so he can reduce entropy, defeating the second law of thermodynamics.

To do this, the demon plans to open the door between the sections whenever an A is about to pass from the right to the left, or whenever a B is about to pass from the left to the right, and to close it in all other situations. Eventually, this will have the desired effect.

Suppose the demon implements this strategy by measuring each particle as it approaches the door. It turns out that it is possible for the demon to perform these measurements and adjust the door appropriately, but he encounters an interesting problem: once the A’s are all on the left side of the box and the B’s are all on the right side, the demon is left with all of the measurements he took while restoring order. He cannot erase these measurements without producing entropy, and so the demon has not succeeded at defeating the second law of thermodynamics at all. He has merely succeeded in transferring entropy from the distribution of A’s and B’s to his own written record of the particles’ positions.

As it turns out, the demon’s problem is nearly insoluble. Without using knowledge of the initial configuration of the particles, nothing the demon does can reduce the (suitably defined) entropy.

But suppose the demon does know the original state of the box, and is able to run a simulation which is faster than the real evolution of particles in the box. Using this simulation, the demon can predict the trajectory of every particle without measuring and compute an appropriate sequence of “door open”s and “door close”s to restore order. Again, at the end of the process the demon has a mess on his hands: the entropy in the final state of his computation dwarfs the reduction in entropy he achieved!

But now the demon can reduce entropy by simply “uncomputing” the final configuration of the particles in the box. He simply runs exactly the same computation backwards, transforming from the messy final state to the highly ordered initial state!

Complexity Theory and Uncomputation

Let f be the function which maps a brain’s state at time t to the same brain’s state at time t+1. We want to produce some iterate f^(10000000000)(initial brain state) without producing much entropy.

Suppose that given X we could produce the pair (X, f^k(X)) [where f^k is the kth iterate of f] using space log(k) * S. Then given X, we can inductively produce the pair (X, f^2k(X)) using space log(2k) * S, as follows:

  • Starting with X, calculate the pair (X, f^k(X)).
  • Starting with f^k(X), calculate the pair (f^k(X), f^2k(X)). We now have the triple (X, f^k(X), f^2k(X)) stored in memory. (This step uses S more space than the calculation of f^k(X), because we are also storing X.)
  • Because we are on a reversible computer, we can run the computation X -> (X, f^k(X)) backwards. If we do this, we obtain the pair (X, f^2k(X)), as desired. All of the intermediate scratchwork has been erased, without producing any entropy!

Of course, given X we can compute the pair (X, f(X)) using space of about S by simply running a single step of the computation reversibly while keeping X around. Thus, by induction, we can compute the pair (X, f^k(X)) in space log(k) * S, for every k!
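
Here is the whole recursion as a sketch in ordinary Python (the names and the toy update rule are my own). The explicit checkpoint stack stands in for the reversible machine’s workspace, and popping a checkpoint stands in for uncomputing it by running that half of the computation backwards; the peak number of states held at once grows like log(k) rather than k.

    def f_pow(f, x, k, checkpoints, peak):
        """Return f^k(x) while holding only O(log k) intermediate states at once.

        `checkpoints` models the reversible machine's workspace. On a real
        reversible computer the pop below is not free: it is performed by
        running X -> (X, f^(k/2)(X)) backwards, which is what pushes the time
        up to roughly k^(log2 3) ~ k^1.58 for this binary scheme.
        """
        peak[0] = max(peak[0], len(checkpoints))
        if k == 0:
            return x
        if k == 1:
            return f(x)
        mid = f_pow(f, x, k // 2, checkpoints, peak)         # X -> f^(k/2)(X)
        checkpoints.append(mid)                              # hold f^(k/2)(X) while we continue
        out = f_pow(f, mid, k - k // 2, checkpoints, peak)   # f^(k/2)(X) -> f^k(X)
        checkpoints.pop()                                    # "uncompute" the checkpoint: no garbage left
        return out

    # Toy update rule standing in for one step of the brain (any bijection will do).
    step = lambda s: (s * 1103515245 + 12345) % 2**31

    k, peak = 10**6, [0]
    result = f_pow(step, 1, k, [], peak)

    # Sanity check against the obvious simulation, and report the space actually used.
    y = 1
    for _ in range(k):
        y = step(y)
    assert result == y
    print(f"k = {k}: held at most {peak[0] + 1} states at once, versus {k} for the naive history")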

In general there is a complicated space of tradeoffs between time, space, and entropy. There are two main results of interest:

  • By taking 2^n steps at a time in the above process, we can compute f^k(X) using only 2^n * log(k) * S space and k^(1 + 1/n) time. (I proved this during a recent flight, but it is fairly easy and therefore standard in the literature.)
  • By using a more complicated simulation procedure, we can compute f^k(X) using only S space and 2^S time. (See Lange, McKenzie, and Tapp, 1997.)
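
For the first of these, the underlying recurrences are simple enough to tabulate (a back-of-the-envelope sketch, assuming the checkpointing scheme described above): splitting each interval into B = 2^n pieces means each level of recursion computes B checkpoints forward and then uncomputes B - 1 of them, so T(k) = (2B - 1) * T(k/B), while the roughly log_B(k) live levels each hold about B states of S bits.

    import math

    def tradeoff(k, n, S):
        """Rough time exponent and space for computing f^k reversibly with
        branching factor B = 2^n, assuming T(k) = (2B - 1) * T(k / B) and that
        each of the ~log_B(k) live levels stores about B checkpoints of S bits."""
        B = 2 ** n
        time_exponent = math.log(2 * B - 1, B)    # T(k) ~ k ** this, always <= 1 + 1/n
        space_bits = B * math.log(k, B) * S       # within the 2^n * log(k) * S bound above
        return time_exponent, space_bits

    for n in (1, 2, 5, 10):
        exponent, space = tradeoff(k=10**9, n=n, S=10**6)
        print(f"n = {n:2d}: time ~ k^{exponent:.3f}, space ~ {space:.1e} bits")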

This suggests that, if the universe has only a limited supply of negentropy but a very long lifetime, the amount of computation we can get done depends exponentially on the negentropy (and is therefore likely to be more limited by available time). Here are some particular, concrete tradeoffs:

  • Running a brain defined on 10^12 bits for 10^1000 steps can be done with about 10^18 bits of negentropy and 10^1300 time.
  • Running a civilization defined on 10^30 bits for 10^100000 steps can be done with about 10^50 bits of negentropy and 10^1100000 time.
  • Running a civilization defined on 10^50 bits for any amount of time can be done with about 10^50 bits of negentropy and 10^(10^50) time.
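
As a rough sanity check of the first figure (my own arithmetic, treating the bounds above as loose and the quoted numbers as round): choosing n so that the time bound k^(1 + 1/n) comes out to 10^1300 puts the space bound 2^n * log2(k) * S at a few times 10^16 bits, comfortably inside the quoted ~10^18.

    import math

    # Back-of-the-envelope check of the first tradeoff above. All numbers are
    # rough; the quoted figures are round, so order-of-magnitude agreement is the goal.
    S, k_exp = 10**12, 1000              # a 10^12-bit brain run for k = 10^1000 steps
    n = 10 / 3                           # chosen so that k^(1 + 1/n) = 10^1300
    time_exp = k_exp * (1 + 1 / n)                   # exponent of the time bound
    space = 2 ** n * (k_exp * math.log2(10)) * S     # 2^n * log2(k) * S
    print(f"time ~ 10^{time_exp:.0f} steps, space ~ 10^{math.log10(space):.1f} bits")
    # Prints: time ~ 10^1300 steps, space ~ 10^16.5 bits, inside the quoted ~10^18.

None of these figures should be taken too literally; the point is just that the negentropy requirement scales with S and log(T) rather than with S * T.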