Suppose we found a black box which purported to be a halting oracle. You feed the box an input, and it outputs 0 or 1. It turns out that we couldn’t fully verify such a claim. Nevertheless, we could tell that such a box was pretty magical. Exactly how much could we tell? That is, if you ask the box a question of the form “Does Turing machine M halt?”, when can you verify that its answer is accurate? Note that, in order to do this verification, you may be able to ask the box some further questions. In general, you could use halting queries to make the box simulate a deterministic prover’s role in some interactive proof protocol (i.e., ask: does the machine that finds the lexicographically first valid prover strategy, and then halts iff that strategy outputs 1 at this stage, itself halt?)
If all you care about is computability, you can’t do anything: if you could learn something interesting from an untrusted halting oracle, you could have learned it without the oracle, by looping through all possible answer strings until you found one that convinced you to trust it. The situation only becomes interesting from a computational complexity perspective.
[Disclaimer: I don’t think this is useful.]
There are a few parameters that affect the answer:
- Can we make classical or quantum queries to the box? This actually ends up not mattering, but that fact surprised me, and the reasons aren’t simple.
- Do we care about being completely certain when making claims about the box, or is pretty certain good enough?
- Do the box’s answers depend on other things that have happened in the universe / other queries made to the box, or are we guaranteed that the box would always answer the same question the same way?
Here are the main results:
1. In polynomial time, we can verify any claims which can be proven in polynomial time.
For example, if we ask the box “Does the Riemann Hypothesis have a proof of length < 1 billion?” we can verify an answer of “yes” by asking “Does it have a proof of length < 1 billion that starts with ‘a’? No? What about ‘b’? Yes? Well, does it have a proof that starts with ‘ba’? No? What about ‘bb’? What about ‘bc’?” and so on. This is pretty uninteresting: either we get out a contradiction, or we get out a proof.
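The prefix search above can be sketched in a few lines. This is a toy illustration, not a real protocol: the “oracle” here is a stand-in that simply knows a set of valid proofs, where a real halting oracle would answer each prefix query via a halting question about an exhaustive proof searcher.

```python
ALPHABET = "ab"          # tiny alphabet for illustration
VALID_PROOFS = {"bab"}   # pretend exactly these strings are valid proofs

def oracle_has_proof_with_prefix(prefix, max_len):
    """Mock oracle: is there a valid proof of length <= max_len starting with prefix?"""
    return any(p.startswith(prefix) and len(p) <= max_len for p in VALID_PROOFS)

def extract_proof(max_len):
    """Recover an actual proof, one symbol at a time, using only yes/no queries.
    Either the oracle's answers stay consistent and we end with a proof in hand,
    or they contradict themselves and we catch the lie."""
    if not oracle_has_proof_with_prefix("", max_len):
        return None  # box claims no proof exists; nothing to extract
    prefix = ""
    while prefix not in VALID_PROOFS:  # we can check a candidate proof ourselves
        for symbol in ALPHABET:
            if oracle_has_proof_with_prefix(prefix + symbol, max_len):
                prefix += symbol
                break
        else:
            raise RuntimeError("oracle contradicted itself: it claimed a proof "
                               "exists, but no extension of the prefix works")
    return prefix

print(extract_proof(10))  # -> 'bab'
```

Note that the verifier never has to trust the box: every “yes” either narrows down to a proof we can check ourselves, or dead-ends in a contradiction.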
2. The box’s claim about the result of any computation which takes a polynomial amount of space (for example, determining who wins from a given position in chess) can be verified in polynomial time, with only a negligible probability of erroneously believing a false claim.
This is a rather difficult proof. It follows from the fact that IP = PSPACE, a celebrated result in complexity theory (which says that it is possible to interactively prove in polynomial time anything which can be verified in polynomial space). The key insight is that if someone claims that a sentence is true both for X=0 and X=1, I can efficiently test this claim by defining a polynomial which encodes the truth of the original sentence, asking them to tell me the value of that polynomial on some parametrized curve passing through X=0 and X=1, and then verifying the correctness of the polynomial representation at a random point along that curve. This works because if two low-degree polynomials differ at any one point (so, if the polynomial representation is unfaithful either at 0 or 1), then those polynomials differ at almost every point (so I only have to check one point to find the prover’s lie with high probability). Also, it’s important to note that polynomial space computation corresponds exactly to quantified boolean formulas of polynomial size.
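The key fact, that distinct low-degree polynomials disagree almost everywhere, is easy to demonstrate directly. A minimal sketch, with an illustrative field and polynomials of my own choosing (the specific prime and coefficients are not from the original argument):

```python
P = 97  # a small prime; arithmetic is over the field F_97

def poly_eval(coeffs, x, p=P):
    """Evaluate a polynomial (coefficients listed low-to-high degree) at x mod p."""
    return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p

honest   = [3, 1, 4]   # 3 + x + 4x^2: the true low-degree encoding
cheating = [3, 2, 4]   # a lying prover's different degree-2 polynomial

# Two distinct polynomials of degree <= d agree on at most d points,
# so a single random evaluation exposes the lie with probability >= 1 - d/P.
agreements = [x for x in range(P) if poly_eval(honest, x) == poly_eval(cheating, x)]
print(len(agreements))  # -> 1 (far fewer than the 97 points in the field)
```

Here the difference of the two polynomials is the degree-1 polynomial −x, which vanishes only at x = 0; a uniformly random challenge point catches the cheat with probability 96/97.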
3. Suppose the box’s response to a query depends only on that query, and not on the other queries that have been made to the box / other stuff that has happened in the universe. Then you can test the box’s claims about any computation which takes exponential time, with only a negligible probability of erroneously believing a false claim. In fact, you can test any claim which could be proven by an exponentially long proof.
This is a still more difficult proof. It follows either from the PCP theorem or from the proof that MIP = NEXP (which says that two provers who can’t communicate with each other can convince you in polynomial time of anything you could verify yourself in exponential time). In fact, the PCP theorem says that after a constant number of queries to the box, you can verify its claims with only an arbitrarily small constant probability of being mistakenly convinced by false claims. This is a very nice result, and one of the most important in theoretical computer science, but I definitely can’t do it justice here.
(Well: you can express the acceptance of an exponentially long proof by a proof checker as an exponentially large system of polynomial equations. If you want to verify an arbitrary list of polynomial equations, you can verify just a constant number of random linear combinations of those equalities. By using indirection, and proving things about the behavior of proof checkers (which are themselves checking proofs about still other proof checkers…) rather than attacking the statement you care about directly, you can reduce the amount of randomness required to validate such a proof. If you iterate this amplification step, you can get down to the point where the checker sends the prover only logarithmically much stuff, and the prover sends back only a constant amount of stuff, in order to check a proof of linear size, which is pretty wild.)
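The random-linear-combination step above can be sketched concretely. This is an illustrative toy (the field, the equations, and the use of powers of a single random value as combination coefficients are my choices, not details from the original): combining the residuals of m claimed equalities with powers of a random r yields a polynomial in r of degree below m, so if even one equation is violated, at most m − 1 choices of r can mask it.

```python
P = 101  # a small prime field for illustration
# Claimed identities lhs[i] == rhs[i] over F_P; the second one is false.
lhs = [10, 25, 7]
rhs = [10, 24, 7]
m = len(lhs)

def combined_check(r):
    """Accept iff sum_i r^i * (lhs[i] - rhs[i]) == 0 mod P.
    This checks one linear combination instead of all m equations."""
    return sum(pow(r, i, P) * (lhs[i] - rhs[i]) for i in range(m)) % P == 0

# Count the r values for which the violated equation slips past the check:
bad_r = [r for r in range(P) if combined_check(r)]
print(len(bad_r))  # -> 1, well below the m - 1 = 2 bound, out of 101 choices
```

So a verifier who checks a single random combination accepts a false system with probability at most (m − 1)/P, and checking a constant number of independent combinations drives this down further.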
4. You can verify a claim made by the box with high probability if and only if that claim has a proof of at most exponential length.
This is pretty straightforward. In polynomial time you can only access exponentially many of the box’s possible answers (if you only have time to write down N bits, there are only 2^N possible questions you could ask). If you can check the box’s claims about something, then the list of its answers to all exponentially many questions constitutes a proof.
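The forward direction of this argument can be sketched as follows: record the verifier’s interaction with the box, and the resulting transcript itself serves as a (at most exponentially long) proof, since anyone can replay the verifier against it with no box in sight. The verifier and questions below are invented toy stand-ins.

```python
def verifier(ask):
    """A toy deterministic verifier: asks a few yes/no questions of the box
    and accepts or rejects based on the answers."""
    if not ask("does M halt?"):
        return False
    return ask("does M halt within 10 steps?")

def alleged_oracle(question):
    return True  # a (not very credible) box that answers yes to everything

# Phase 1: interact with the box, recording every question and answer.
transcript = {}
def recording_ask(question):
    transcript[question] = alleged_oracle(question)
    return transcript[question]

verdict = verifier(recording_ask)

# Phase 2: anyone can replay the verifier against the transcript alone.
# A polynomial-time verifier writes down at most N bits per question, so the
# full transcript has at most 2^N entries: an exponential-size proof.
replayed = verifier(lambda q: transcript[q])
print(verdict == replayed)  # -> True: the transcript determines the verdict
```

This is exactly the counting argument: the verifier’s behavior is a function of the transcript, and the transcript is an object of at most exponential size.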
Of course, for anything that can’t be proven by an exponentially long proof, a similar argument shows that the answer can’t affect your behavior anyway, even in a world with a halting oracle. So, the limitations on our ability to recognize halting oracles don’t seem too severe.