A Quantum Of Theory

Exploring new paths in quantum physics

Tag: Measurement

Quantum Theory – A view from the inside Part I

The history of science has taught us many things, among them that asking new questions often leads to new insight. Often, these new questions had not been asked before because they seemed too philosophical, unanswerable or even outright unscientific. Here, I would like to confront you with a question that, at first glance, might seem to fit into these categories. Nevertheless, I will show that discussing this question, specifically applied to quantum theory, leads to deep insight.

In the computer age we have grown very familiar with the concept of simulation. We can simulate practically anything we have understood physically, and we do that for very complicated and large systems like climate models of our planet. Of course, we are using approximations to reality so that our computers can handle the complexity. This, however, is a limitation that we can easily imagine not to exist. The concept of simulation remains the same, even if performed on a hypothetical machine without any practical restrictions.

We could think of any consistent set of mathematical rules and simulate it on a computer. In some sense, we would be creating our own universes with the rules that we make up. Some of these simulations might be just complex enough to allow for an internal observer to evolve, an individual that would have an inside view of our simulation. And if we had the means of communicating with him, we could ask him what he is observing.
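To make this idea concrete, here is a minimal toy sketch of such a "universe": nothing but a rule and an initial state, evolved step by step. I have arbitrarily chosen an elementary cellular automaton (Wolfram's rule 110) on a small ring of cells; the rule number and lattice size are illustrative assumptions, not anything specific to the argument.

```python
# A toy "universe": a rule plus an initial state, nothing more.
# Here the rule is an elementary cellular automaton (rule 110 as an
# arbitrary example) on a ring of cells with periodic boundaries.

def step(cells, rule=110):
    """Apply one tick of the universe's law of evolution."""
    n = len(cells)
    return [
        # Each cell's next value is read off from the rule number,
        # indexed by the (left, center, right) neighborhood bits.
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Initial state: a single "live" cell in an otherwise empty universe.
universe = [0] * 16
universe[8] = 1

# Let the universe run for a few ticks.
for _ in range(5):
    universe = step(universe)
```

Even such a trivially simple rule set produces structure that was nowhere stated explicitly in the rules, which is the essence of emergence the thought experiment relies on.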

We will possibly never reach the scientific sophistication that would allow this sort of real experiment, so what is the point of proposing it? The universe of our hypothetical observer is purely mathematical, a list of rules and an initial state, nothing more. The reality perceived by him must emerge in some way from the mathematical rules. Surely some aspects of his observation will be highly subjective, like the perception of color, taste or anything that developed by chance without any profound direct connection to material reality as he perceives it. But other aspects of his observation will not be so subjective; they will be shared by all other hypothetical observers in the same simulated universe.

So, the question I would like to ask is “How does reality as shared by all possible observers emerge from the mathematical rules that describe the universe these observers inhabit?”. Maybe I have already convinced you that the question is not so esoteric after all. But quite certainly I have not convinced you that it is even remotely possible to answer it. How would one distinguish objective features from subjective ones? And would we not have to know about all the emergent structures of the simulated universe first, like atoms and molecules or even brains?

I do share the above concerns, but I can also offer a way to circumvent them entirely. Let us assume that our virtual observer is not just any observer, but in fact a physicist who tries to formulate his own mathematical theory of his perceived reality. If he is a good scientist, his theories will only include those aspects of his observation shared by all other observers, and if he is successful, his final theory of everything he can observe will be a perfect mathematical description of the objective emergent reality in the virtual universe. This is an extremely helpful assumption, because it allows us to talk about actual mathematical structure instead of a fuzzy and partly psychological concept. With this we can reformulate the fundamental question as “What mathematical model does a virtual observer use to describe his perceived reality?”. This formulation sounds much more reasonable, and there is some hope that we may find a way to mathematically deduce the emergence of this internal view from the mathematical structure of the universe we simulate.

Does quantum theory have to be interpreted?

Witnessing the ongoing discussion about how quantum theory should be interpreted, with its strong opinions and sometimes even dogmatic arguments, I decided to write a series of blog posts that will discuss the issue of interpretation as objectively as I possibly can. I will not specifically try to compare the different mainstream interpretations with each other, but rather explore whether an interpretation is required at all, and whether the same fundamental questions can instead be answered with strict scientific rigor.

A scientific theory is usually defined as consisting of a mathematical apparatus that allows us to perform calculations of a predictive nature, and a layer of interpretational glue that connects the resulting numbers with measurements we can actually perform. The separation of measurement and prediction works very well for all classical theories, where observer and experiment can be regarded as entirely separate entities. Quantum theory, however, makes a clean cut between the observer and the observed experiment impossible, because after the experiment the two subsystems are interwoven in a very fundamental and complicated way, even if spatially separated. The nonlocal entanglement of the quantum state space no longer allows us to use the approximation of objectivity.

There are two main approaches to dealing with this problem. The older one insists on the classical separation and is willing to live with the necessary consequences. The Copenhagen interpretation introduces the Heisenberg cut between the quantum and classical domains to recover the notion of an objective observer who can make classical statements about the measurement outcome. And with that cut we also get the interpretational glue back that relates the mathematics to measurement results. This happens in the form of the well-known measurement postulate, which includes the Born rule describing the statistical outcome of a measurement.
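For reference, the measurement postulate can be stated compactly. In the simplest textbook case, measuring an observable with eigenstates |k⟩ on a system in state |ψ⟩ yields outcome k with the probability given by the Born rule, and the state then collapses onto the corresponding eigenstate:

```latex
% Born rule: probability of obtaining outcome k
P(k) = \left| \langle k | \psi \rangle \right|^2 ,
% collapse: the post-measurement state
|\psi\rangle \;\longrightarrow\; |k\rangle .
```

It is exactly this pair of statements, probabilistic outcome plus collapse, that any alternative to the Copenhagen interpretation must reproduce.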

This approach has several drawbacks, however. Firstly, the location of the Heisenberg cut is more or less arbitrary as long as the observer and the system are well distinguishable, but placing it becomes impossible as soon as this is no longer the case. Often this does not pose a problem, but it is still a shortcoming, as it keeps us from understanding certain realizable situations. Secondly, the Copenhagen and related interpretations leave us entirely in the dark as to what precisely happens during a measurement. Still, the Copenhagen interpretation is fundamentally scientific, as it focuses on measurements and predictions only, and does not make claims about what is not observable.

The other main approach to the problem of observation takes the alternative route. Instead of introducing a cut, everything is taken into account. Experiment and measurement device become one system, which is itself part of the largest system, the universe. It is then only natural to take the time evolution of undisturbed quantum systems as formulated in the Copenhagen interpretation, the Schrödinger equation, as the evolution law for the universe. Within this approach, all predictions and results must emerge solely from the properties of the evolving system, as there is no external observer who can measure anything, and no classical measurement device either. The time evolution would also be fully deterministic, and the randomness of the measurement outcome could then also be regarded as an emergent property.
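The evolution law in question, here written for the state |Ψ⟩ of the entire universe with Hamiltonian H, is simply:

```latex
i\hbar \, \frac{\partial}{\partial t} \, |\Psi(t)\rangle = \hat{H} \, |\Psi(t)\rangle
```

This evolution is linear, unitary and deterministic, with no collapse anywhere in it, which is precisely why the apparent randomness of measurements must then be explained as emergent rather than postulated.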

So when Hugh Everett III came up with his many-worlds or relative-state interpretation, he really did not want to create an interpretation in the sense of the Copenhagen interpretation at all, namely a layer of translation between math and measurement. Rather, he wanted to create a scientific theory of emergence, where all results are derived as inherent properties of the system itself. And he was willing to accept all the consequences this brought, because the approach was rigorously scientific and simply the logical consequence of avoiding the artificial Heisenberg cut.

Unfortunately, not everything worked out as well as this approach promised. Of course, the most famous consequence is the existence of arbitrarily many worlds containing observers who have seen every possible experimental outcome. While this is philosophically hard for some to accept, it is surely an acceptable consequence only if the other results work out correctly. And those results ought to be the precise statements of the measurement postulate of the Copenhagen interpretation, because those are experimentally verified.

However, while the many-worlds theory gives a reasonably good explanation for the state collapse, it fails to give the right statistics. There has been some criticism regarding the collapse too, but more importantly, it is generally agreed that the Born rule does not come out of the relative-state theory unless extra postulates are added. Decoherence theory, which incorporates the environment to move coherence away from the experiment, and more recent attempts to use psychologically founded counting mechanisms for calculating the relative outcome probabilities, have not been convincing enough for the theory's issues to be generally considered solved. And adding postulates of course spoils the initial idea of having an actual theory of emergence.

So where does this leave us? We have a practical approach that works most of the time, but hides some possibly important features and mechanisms from us. And we have a holistic approach that stands on a beautiful theoretical idea, but fails to deliver the right results and comes with some curious side effects.
The question that I will explore in the following articles is what Everett’s approach has to do with the relationship between simulation and reality, and whether something that he and others have potentially overlooked could lead to a new theory with better results. And I promise, I’ll have a few surprises for you!