Witnessing the ongoing discussion about how quantum theory should be interpreted, with its strong opinions and sometimes even dogmatic arguments, I have decided to write a series of blog posts that discuss the issue of interpretation as objectively as I can. I will not primarily compare the mainstream interpretations with each other, but rather explore whether an interpretation is required at all, and whether the same fundamental questions could instead be answered with strict scientific rigor.
A scientific theory is usually defined as consisting of a mathematical apparatus that allows one to perform predictive calculations, and a layer of interpretational glue that connects the resulting numbers with measurements we can actually perform. This separation of measurement and prediction works very well for all classical theories, where observer and experiment can be regarded as entirely separate entities. Quantum theory, however, makes a clean cut between the observer and the observed experiment impossible, because after the experiment the two subsystems are interwoven in a very fundamental and complicated way, even if they are spatially separated. The nonlocal entanglement of the quantum state space no longer allows us to use the approximation of objectivity.
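To make the "interwoven subsystems" concrete, here is a minimal numerical sketch (using NumPy, with a Bell state as the standard example): a bipartite pure state is a product state exactly when it has a single nonzero Schmidt coefficient, which we can read off from the singular values of the reshaped state vector.

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2): the joint state of two qubits
# after an entangling interaction, e.g. observer and experiment.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# Reshape the 4-vector into a 2x2 matrix; its singular values are
# the Schmidt coefficients of the bipartite state.
schmidt = np.linalg.svd(bell.reshape(2, 2), compute_uv=False)

# A product state has exactly one nonzero Schmidt coefficient.
n_nonzero = int(np.sum(schmidt > 1e-12))
print(n_nonzero)  # 2 -> the two subsystems cannot be factored apart
```

Because there are two nonzero coefficients, no choice of individual states for the two subsystems reproduces the joint state, which is exactly why treating the observer as a separate, objective entity breaks down.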
Given this problem, there are two main approaches to dealing with it. The older one insists on the classical separation and is willing to live with the necessary consequences. The Copenhagen interpretation introduces the Heisenberg cut between the quantum and classical domains to recover the notion of an objective observer who can make classical statements about the measurement outcome. With that cut we also get the interpretational glue back that relates the mathematics to measurement results. This comes in the form of the well-known measurement postulate, which includes the Born rule describing the statistics of measurement outcomes.
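For readers who have not seen it spelled out: the Born rule states that the probability of observing outcome k is the squared magnitude of the state's amplitude on the corresponding basis vector. A minimal sketch with an illustrative single-qubit state (the amplitudes are my choice, not from the text):

```python
import numpy as np

# A normalized single-qubit state a|0> + b|1>; the complex phase on b
# is deliberate, to show that only magnitudes matter for the statistics.
state = np.array([np.sqrt(0.3), 1j * np.sqrt(0.7)])

# Born rule: P(k) = |<k|psi>|^2 in the measurement basis.
probs = np.abs(state) ** 2
print(probs)        # [0.3, 0.7]
print(probs.sum())  # 1.0, as required for a probability distribution
```

This is the "interpretational glue" in its entirety: the mathematics delivers amplitudes, and the postulate converts them into the relative frequencies we actually record in the lab.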
This approach has several drawbacks, however. Firstly, the location of the Heisenberg cut is more or less arbitrary as long as the observer and the system are clearly distinguishable, and placing it becomes impossible as soon as they are not. Often this does not pose a problem, but it is still a shortcoming, as it keeps us from understanding certain realizable situations. Secondly, the Copenhagen interpretation and its relatives leave us entirely in the dark as to what precisely happens during a measurement. Still, the Copenhagen interpretation is fundamentally scientific: it focuses on measurements and predictions only, and makes no claims about what is not observable.
The other main approach to the problem of observation takes the alternative route. Instead of introducing a cut, everything is taken into account: experiment and measurement device become one system, which is itself part of the largest system, the universe. It is then only consistent to assume that the time evolution of undisturbed quantum systems, as formulated in the Copenhagen interpretation, namely the Schrödinger equation, is the evolution law for the universe. Within this approach, all predictions and results must emerge from the properties of the evolving system alone, as there is no external observer who can measure anything, and no classical measurement device either. The time evolution would be fully deterministic, and the randomness of measurement outcomes would have to be an emergent property as well.
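The determinism claim is easy to illustrate: Schrödinger evolution is unitary, so it preserves the norm of the state and can always be run backwards. A small sketch with an arbitrarily chosen Hamiltonian (the Pauli-X matrix, purely for illustration), exploiting that for any H with H² = I the exponential exp(-iHt) reduces to cos(t)·I - i·sin(t)·H:

```python
import numpy as np

# Illustrative Hamiltonian: Pauli-X, which satisfies H @ H = I,
# so U(t) = exp(-iHt) = cos(t) I - i sin(t) H in closed form.
H = np.array([[0, 1], [1, 0]], dtype=complex)
t = 0.7
U = np.cos(t) * np.eye(2) - 1j * np.sin(t) * H

psi0 = np.array([1, 0], dtype=complex)  # start in |0>
psi_t = U @ psi0                        # deterministic evolution

print(np.allclose(np.vdot(psi_t, psi_t), 1.0))   # norm preserved: True
print(np.allclose(np.conj(U.T) @ psi_t, psi0))   # reversible: True
```

Nothing in this evolution ever picks a single outcome; that is exactly why the apparent randomness of measurements has to be explained as emergent rather than fundamental.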
So when Hugh Everett III came up with his many-worlds or relative-state interpretation, he did not really want to create an interpretation in the sense of the Copenhagen interpretation, namely a layer of translation between mathematics and measurement. Rather, he wanted to create a scientific theory of emergence, in which all results are derived as inherent properties of the system itself. And he was willing to accept all the consequences this brought, because the approach was rigorously scientific and simply the logical consequence of avoiding the artificial Heisenberg cut.
Unfortunately, not everything worked out as well as this approach promised. The most famous consequence, of course, is the existence of arbitrarily many worlds containing observers who have seen every possible experimental outcome. While this is philosophically hard to accept for some, it is an acceptable consequence only if the other results work out correctly. And these results ought to be precisely the statements of the Copenhagen measurement postulate, because those are experimentally verified.
However, while the many-worlds theory gives a reasonably good explanation for the state collapse, it fails to give the right statistics. There has been some criticism regarding the collapse too, but more importantly, it is generally agreed that the Born rule does not come out of the relative-state theory unless extra postulates are added. Decoherence theory, which incorporates the environment to carry coherence away from the experiment, and more recent attempts to use psychologically motivated counting arguments for calculating the relative outcome probabilities, have not been convincing enough for the theory's issues to be considered solved. And adding postulates of course spoils the initial idea of having an actual theory of emergence.
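What decoherence does deliver can be shown in a few lines: once the experiment is entangled with an environment, ignoring (tracing out) the environment leaves the experiment in a mixed state with no interference terms. A sketch, again using a Bell state as the system-environment pair; note that the resulting probabilities are simply inherited from the amplitudes, which is why this by itself does not derive the Born rule.

```python
import numpy as np

# System entangled with an environment qubit: (|00> + |11>)/sqrt(2).
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(bell, bell.conj())  # pure joint state, 4x4 density matrix

# Trace out the environment: reshape to (sys, env, sys, env) indices
# and sum over the matching environment indices.
rho_sys = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

# The off-diagonal (interference) terms are gone; what remains is a
# classical-looking mixture, here the maximally mixed state I/2.
print(np.allclose(rho_sys, np.eye(2) / 2))  # True
```

So decoherence explains why we never see superpositions of outcomes, but the diagonal entries (the outcome probabilities) still have to be justified by an extra rule, which is precisely the gap the text describes.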
So where does this leave us? We have a practical approach that works most of the time, but hides some possibly important features and mechanisms from us. And we have a holistic approach that stands on a beautiful theoretical idea, but fails to deliver the right results and comes with some curious side effects.
The question that I will explore in the following articles is what Everett’s approach has to do with the relationship between simulation and reality, and whether something that he and others have potentially overlooked could lead to a new theory with better results. And I promise, I’ll have a few surprises for you!