The speed prior is an analog of the universal distribution which penalizes computationally expensive hypotheses. The speed prior is of theoretical interest, but it is also justified if we suspect that we are living in a simulation: computationally cheaper hypotheses require fewer resources, and we should therefore expect them to be simulated more times (if constant resources are dedicated to several independent simulations, a simulation which is twice as expensive can only be run half as many times).
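To fix notation, here is one way of making that penalty precise, a sketch in the spirit of Schmidhuber's speed prior (his actual definition, via the FAST algorithm, differs in details). Whereas the universal prior weights each program $p$ only by its length $\ell(p)$, a speed prior also charges for its running time $t(p)$:

```latex
% Universal prior: weight each program computing x by its length alone
M(x) \;=\; \sum_{p \,:\, U(p) = x} 2^{-\ell(p)}

% Speed-prior variant: additionally charge log t(p) bits for running time,
% so a program that runs twice as long contributes half as much weight
S(x) \;\propto\; \sum_{p \,:\, U(p) = x} 2^{-\ell(p) - \log_2 t(p)}
     \;=\; \sum_{p \,:\, U(p) = x} \frac{2^{-\ell(p)}}{t(p)}
```

The $1/t(p)$ factor is exactly the simulation story above: double the cost per run and you halve the number of runs, hence half the weight.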
It has been argued that the speed prior implies that wave function collapse is probably real, because simulating other branches would be computationally expensive, and that we should not expect quantum computation to work. This strikes me as naive: we have practically smuggled in the hypothesis of a classical universe by introducing the speed prior (or at least by suggesting that it reflects reasonable beliefs for an observer being simulated).
If quantum mechanics is a good model for the “basement level” reality, in which any simulations may be run, then we should use a quantum speed prior rather than a classical speed prior, defined identically but using quantum Turing machines. The quantum speed prior doesn’t penalize MWI.
We might argue that, in a world where quantum computing works, a simulator would have something better to do with their quantum computational ability than simulate many interfering branches in MWI. Perhaps they could simulate one Copenhagen world in each branch of their quantum computer, thereby penalizing MWI vs. Copenhagen just as described by the speed prior. But that isn’t how quantum computation works: you can’t take exponentially many “branches” of a quantum computer and use each one to simulate an independent Copenhagen world, any more than you can take exponentially many computation paths of a randomized computation and use each one to simulate an independent Copenhagen world. At the end of the day you only give “substance” (whether causal or descriptive, or in any coherent account of observer-ness) to one of the worlds.
In fact, simulating a Copenhagen world on a quantum computer seems likely to be only slightly easier, if easier at all, than simulating an MWI world on a quantum computer (modulo some difficult complexity-theoretic assumptions). This is not really surprising: quantum computers demonstrate an exponential speedup over classical computation on only a very narrow range of problems, and simulating quantum systems is at the top of that list.
The underlying question, whether reality runs on classical or quantum probability, is difficult and certainly unsettled (modulo uncertainty about consciousness, both are very compelling accounts). But the speed prior doesn’t seem like a convincing reason to prefer a classical explanation, even if we expect that we are living in a simulation.
(In general, I think that when computational complexity is relevant for philosophical questions, quantum mechanics seems like the better bet for modeling efficiency. I don’t know of any situation where this distinction has mattered, and so I don’t think this observation has much importance.)