Three Nobel Prize winners pooh-pooh the walkers.
A great article from the Simons Foundation came out back in June covering the walker research. These objects, a wave/particle association, are attracting attention because they have the capacity to self-excite into quantized orbits and reproduce a growing catalog of quantum behavior previously observed only in electrons, never in 'classical' systems. For a more in-depth look at the walkers, the article does a good job.
The article interviews three Nobel Prize winners.
Here is what 't Hooft had to say (from the article): "Personally, I think it has little to do with quantum mechanics," said Gerard 't Hooft, a Nobel Prize-winning particle physicist at Utrecht University in the Netherlands. He believes quantum theory is incomplete but dislikes pilot-wave theory.
Wilczek was just as encouraging. The article notes that many working quantum physicists question the value of rebuilding their highly successful Standard Model from scratch: "I think the experiments are very clever and mind-expanding," said Frank Wilczek, a professor of physics at MIT and a Nobel laureate, "but they take you only a few steps along what would have to be a very long road, going from a hypothetical classical underlying theory to the successful use of quantum mechanics as we know it."
And finally, Anthony Leggett, a professor of physics at the University of Illinois, Urbana-Champaign, and a Nobel laureate: "Whether one thinks this is worth a lot of time and effort is a matter of personal taste," he added. "Personally, I don't."
Personally, it's about understanding.
The puzzling thing about QM is that it is a very successful theory and we have no idea why. We don't know where it comes from. QM, from a logical standpoint, is structured as a set of axioms (the postulates of symmetries) and then a lot of recipes on how to make it all work. And it works really well: it is a well-tested theory, complete with confirmed predictions (the Higgs). What justifies the choice of symmetries is their result: they work. Epistemologically, the puzzling bit is that we don't know where those symmetries come from; we simply hypothesize them.
As pointed out in the article and comments, certain "working" physicists don't care about the 'why' at all. They don't need to; they don't want to. It is a toxic choice. To them it's philosophy.
Personally, I only care about the why. Let's not forget that understanding the 'why' is a big part of what motivates physicists to begin with. That's why we do it.
Lack of Interpretation
I always cringe when people talk about the "Copenhagen interpretation". The 'Copenhagen' is not an interpretation. It is a formalism, with bits of stories and smiling-cat interpretations bolted onto it.
I will always remember Alain Aspect yelling at me when I was a wide-eyed undergrad trying to make sense of QM and of his 'spooky action at a distance' experiment. I needed to stop "interpreting" the formalism; it would lead me nowhere, and some people were very, very happy NEVER interpreting the formalism.
In retrospect the reason is rather simple: if you try to 'interpret' the formalism with everyday categories, you quickly start talking about magical stuff such as superposed cats (or electrons) that exist in multiple states at the same time. It also leads to this superposition suddenly disappearing in a mysterious "wave packet collapse" that seems to happen instantaneously in the EPR class of experiments (see below).
The 'interpretation' of the formalism wasn't magical; it was simply 'not there'.
Objects for interpretation
What the walker framework does for me is equip me with tools, objects, categories and abstractions with which to think about QM. All of a sudden I have a picture in my mind, not just a math formalism. And this picture helps me think clearly; it guides the intuition. Whether it is right or wrong is almost beside the point at this stage. I can write industrial-grade code on multi-core machines at home (in Java) and explore what the computational models say.
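To give a flavor of what exploring the computational models looks like, here is a minimal toy sketch, not the actual code I run at home: a one-dimensional walker with path memory, where each bounce deposits a standing wave, old waves fade with a memory parameter, and the droplet is nudged by the local slope of the accumulated field. The numbers and the cosine wave profile are placeholders, not fitted to anything.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Toy 1D "walker" with path memory (illustrative sketch only).
 * Each bounce deposits a standing wave; the droplet is pushed by the local
 * slope of the accumulated field; older waves decay with a memory parameter.
 */
public class WalkerToy {
    static final double DT = 1.0;                // one bounce per step (stroboscopic view)
    static final double MEMORY = 30.0;           // how many bounces a wave effectively survives
    static final double WAVE_K = 2.0 * Math.PI;  // toy wavenumber (wavelength = 1 in these units)
    static final double KICK = 0.05;             // coupling of the droplet to the wave slope
    static final double DRAG = 0.1;              // horizontal drag per bounce

    public static void main(String[] args) {
        List<Double> impacts = new ArrayList<>(); // positions of past bounces
        double x = 0.0, v = 0.02;                 // droplet position and velocity

        for (int n = 0; n < 2000; n++) {
            // Slope of the accumulated wave field at the droplet, with exponential memory decay.
            double slope = 0.0;
            for (int k = 0; k < impacts.size(); k++) {
                double age = impacts.size() - k;
                double r = x - impacts.get(k);
                // cos() is a crude stand-in for the Bessel-like wave profile of the real models.
                slope += -WAVE_K * Math.sin(WAVE_K * r) * Math.exp(-age / MEMORY);
            }
            // The droplet is kicked down the local slope and damped by drag.
            v += (-KICK * slope - DRAG * v) * DT;
            x += v * DT;
            impacts.add(x); // this bounce leaves its own wave behind

            if (n % 100 == 0) System.out.printf("bounce %d  x=%.3f  v=%.4f%n", n, x, v);
        }
    }
}
```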
The mental picture and physical intuition that one derives from the walker experiment is indeed 'mind expanding'. For example, it is rather simple to revisit superposition and decoherence in this framework; see here for an entry on the re-interpretation of these notions. In this approach the Schrödinger cat is always dead: superposition is a view of the mind, it never existed, and it is replaced by 'chaotic intermittence'. The notions of coherence and decoherence take on a very definite form, in contrast to the magical wave collapse that is supposed to happen upon measurement and observation.
In this picture the wave collapse is a false category. It is replaced by a notion of intermittence between states (the particle can self-excite into those states), mediated by the presence of chaotic dynamics in the wave/particle duo. In short, the particle exists in a sort of 'superposition': there are many states, and we just transition between them with probabilities that remind us of the "transition probabilities" of QM. The difference with the classical superposition is that our particles are not in those several states *at the same time*. There is no superposition per se but rather intermittence between those states at different times. The walkers will self-quantize their orbits into discrete states, just like an electron does, and they will oscillate between those states (the intermittence) just as QM particles are supposed to do, albeit, in their case, with every state at the same time.
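To make the intermittence idea concrete, here is a small sketch, with made-up numbers, of how one could label each moment of a simulated trajectory with the nearest quantized orbit level and tally the observed transitions into an empirical transition matrix, the analog of the QM 'transition probabilities'.

```java
/**
 * Illustrative sketch: turn an intermittent orbit-radius time series into an
 * empirical transition matrix between quantized levels. The levels and the
 * time series below are placeholders, not data from any experiment.
 */
public class IntermittenceStats {
    public static void main(String[] args) {
        double[] levels = {1.0, 2.0, 3.0};          // assumed self-quantized orbit radii
        double[] radius = {1.1, 0.9, 2.1, 2.0, 1.95, 3.1, 2.9, 1.0, 1.05, 2.2}; // toy series

        int[] state = new int[radius.length];
        for (int t = 0; t < radius.length; t++) {
            int best = 0;
            for (int s = 1; s < levels.length; s++)
                if (Math.abs(radius[t] - levels[s]) < Math.abs(radius[t] - levels[best])) best = s;
            state[t] = best;                         // nearest quantized level at time t
        }

        double[][] counts = new double[levels.length][levels.length];
        for (int t = 1; t < state.length; t++) counts[state[t - 1]][state[t]]++;

        for (int i = 0; i < levels.length; i++) {
            double rowSum = 0;
            for (double c : counts[i]) rowSum += c;
            for (int j = 0; j < levels.length; j++) {
                double p = rowSum > 0 ? counts[i][j] / rowSum : 0.0;
                System.out.printf("P(%d -> %d) = %.2f%n", i, j, p);
            }
        }
    }
}
```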
Reality really exists.
As one commenter on the Simons article said "As stated by Bell’s theorem one has to abandon either locality or reality and it seems that for most physicists (me included) giving up on reality is the favored choice."
But I am honestly not too concerned about comments like this one. It is just going to take time. Prof. Bush, whose mathematical models provided the starting point for many simulations including ours, put it very well in the article: "The more things we understand and can provide a physical rationale for, the more difficult it will be to defend the 'quantum mechanics is magic' perspective."
The walker framework is already taught by Profs. Anderson and Brady, who use the mental pictures of the walkers to explain things like 'spin'. I wish I had had those classes as an undergrad instead of the "Copenhagen".
As a matter of taste, I am firmly convinced that the "Copenhagen interpretation" will fade eventually as a deeper understanding emerges, possibly informed by the walker picture. It will be seen as not an interpretation at all, but simply a formalism, "The Copenhagen Formalism".
But there is no magic.
Ontological Reality
It seems the objects required to describe the formalism (vectors in a Hilbert space) are just mathematical objects, and that's it. They capture the language of statistical transitions in a matrix. They are abstractions. To assign ontological reality to these abstractions, for example to think of superposition as something 'real', is what leads us to magic in the first place. What the walker framework says is that indeed the layer at which classical QM is formulated (the state function) cannot be interpreted. It is a mathematical abstraction. 'Shut up and calculate' was indeed the right frame of mind to have. Today a lot of people think that the 'real' objects exist at a lower layer of reality than the 'standard model'. In this particular walker picture, QM would be a sort of emergent *statistical* composite model.
The QM objects were not the 'real objects' we were looking for.
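For the record, the 'statistical transitions in a matrix' language is just the standard bookkeeping: amplitudes are matrix elements of the evolution, and the observed statistics are their squared moduli.

```latex
% Standard QM bookkeeping: amplitudes as matrix elements, statistics as squared moduli.
% A_{ij}: transition amplitude, U: evolution operator, P_{i->j}: observed frequency.
\[
  A_{ij} = \langle j \,|\, U \,|\, i \rangle , \qquad
  P_{i \to j} = \lvert A_{ij} \rvert^{2} , \qquad
  \sum_{j} P_{i \to j} = 1 .
\]
```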
QM as emergent objects
Strictly speaking, the Standard Model needs to be emergent from any candidate underlying framework. In all generality, whatever model you adopt as your underlying reality has to have the Standard Model somewhere downstream, as an emergent property in its logical consequences. Whatever emerges from our axioms has to conform with QCD and the Standard Model. Period.
So the trick is to show that QM is an emergent phenomenon. The behaviors that emerge from our models and the experiments are QM analogs, but they are not the 'real thing'.
Computational chaos
The article mentioned the latest Couder experiment, the one with the elastic force; this setup leads to a chaotic path that shows intermittence and self-quantization of the walker's orbits. It is memory-induced quantization. But it takes either the experiment or computers to 'observe' it. Today we try to characterize *how much chaos* we need to create the proper QM-like behavior.
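As a sketch of how one could put a number on 'how much chaos', the standard tool is the largest Lyapunov exponent: follow two nearly identical trajectories and measure how fast they separate. The logistic map below is only a stand-in for the real walker update rule.

```java
import java.util.function.DoubleUnaryOperator;

/**
 * Sketch: estimate the largest Lyapunov exponent by tracking how fast two
 * nearly identical trajectories diverge, renormalizing the gap at every step.
 * The logistic map is a stand-in for the actual walker dynamics.
 */
public class LyapunovSketch {
    public static void main(String[] args) {
        DoubleUnaryOperator step = x -> 3.9 * x * (1.0 - x); // stand-in chaotic update

        double x = 0.4, y = 0.4 + 1e-9;   // two nearby initial conditions
        double d0 = 1e-9, sumLog = 0.0;
        int steps = 100_000;

        for (int n = 0; n < steps; n++) {
            x = step.applyAsDouble(x);
            y = step.applyAsDouble(y);
            double diff = y - x;
            double d = Math.abs(diff);
            if (d == 0) { diff = d0; d = d0; } // guard against exact overlap
            sumLog += Math.log(d / d0);        // accumulate the stretching rate
            y = x + d0 * diff / d;             // rescale the separation back to d0
        }
        System.out.printf("estimated largest Lyapunov exponent: %.3f per step%n", sumLog / steps);
    }
}
```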
Historical detour
The article did a great job with the 'history of physics' approach. I can project myself to that famous conference where the Copenhagen formalism won the day. I can see a young Louis de Broglie waving his arms about, talking about an inventive but impractical wave/particle model, Bohr simply dropping the magical (and mathematically expedient) matrix formalism, and Einstein choking on his bagel in the front row, muttering that "God doesn't play dice". In hindsight, it is normal that this formalism would win the day, simply because it was the only one that could predict things correctly. The Copenhagen model was a working model while the particle/wave one was a 'philosophical' model. Bohr et al. chose a more abstract starting point and sacrificed understanding in favor of 'working physics'. It was a Faustian bargain (captured somewhere in a play) that I would personally take. Those in the article who lament the choices made are in turn a little naive. Only with today's powerful computers and the clever experiments, which were stumbled upon by chance, can we make predictions and build a more fruitful mental picture.
In any case, it was not only the easy way (mathematically), it was also the only way (no computers) and it turned out to be the right way. What more do you want? An explanation as to why?
Of Philosophy and in praise of 'understanding'
So I will not disagree with the quotes in the article and their famous authors that this line of work, while "mind-expanding", is clearly a philosophical endeavor. Indeed, that is also why I am personally attracted to it. I glamorize it as 'old school' natural philosophy. Someone's got to do it.
Those who point out that recreating the standard model is going to take a long time are probably right. Those who point out that it is futile since we already have said standard model are only partially right.
To me it is obvious that understanding QM in terms of wave/particle composite objects (probably different from these walkers), and going through the rebuilding of the Standard Model as Leggett advises, will yield new and valuable insights. To me, clearing the air around superposition and decoherence was already reward enough.
Local causality vs. non-local correlations: Einstein-Podolsky-Rosen in walkers
Finally, as pointed out in the comment section, it would be significant if the framework had something to say about the statistics of Bell inequalities. It has been one of my starting points in the walker research, for the simple reason that the formalism was getting too abstract and I needed *something* to guide my intuition. Spooky action at a distance has been bothering me for about 20 years, ever since I studied it under Aspect. The logical flow would then be to take two walkers, let them interact as they drift apart, and see if there are non-local correlations. It should be noted that the correlations build up over time, as in a memory; they are path dependent, and this is not the all-or-nothing picture that was used for the hidden variables in the derivation of Bell's theorem. The variables are not set at birth but rather develop chaotically over time. Some correlation will build up and show up.
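A minimal sketch of what such a test could look like numerically: record some observable for each of the two walkers over the course of the interaction (the series below are placeholders) and watch whether their correlation grows with time rather than being fixed at the start.

```java
/**
 * Sketch of the proposed two-walker test: track some observable for each walker
 * (a spin-like sign, say) and compute the correlation over growing time windows.
 * The time series here are placeholders, not simulation output.
 */
public class PairCorrelation {
    // Pearson correlation of the first n entries of a and b.
    static double corr(double[] a, double[] b, int n) {
        double ma = 0, mb = 0;
        for (int i = 0; i < n; i++) { ma += a[i]; mb += b[i]; }
        ma /= n; mb /= n;
        double cov = 0, va = 0, vb = 0;
        for (int i = 0; i < n; i++) {
            cov += (a[i] - ma) * (b[i] - mb);
            va  += (a[i] - ma) * (a[i] - ma);
            vb  += (b[i] - mb) * (b[i] - mb);
        }
        return (va == 0 || vb == 0) ? 0 : cov / Math.sqrt(va * vb);
    }

    public static void main(String[] args) {
        // Placeholder observables for walker A and walker B (would come from the simulation).
        double[] a = { 1, -1, 1, 1, -1, -1, 1, -1, 1, 1, -1, 1 };
        double[] b = { -1, 1, 1, 1, -1, 1, 1, -1, 1, 1, -1, 1 };

        for (int n = 2; n <= a.length; n += 2)
            System.out.printf("after %2d bounces: corr = %+.2f%n", n, corr(a, b, n));
    }
}
```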
And to the walkers I say: Godspeed.