Wednesday, August 13, 2014

The part where I take on 3 Nobel prize winners.

A great article from the Simons Foundation came out back in June covering the walker research.  These objects, a wave/particle association, are attracting attention because they can self-excite into quantized orbits and reproduce a growing catalog of quantum behavior previously observed only in quantum particles such as electrons, never in 'classical' systems.  For a more in-depth look at the walkers, the article does a good job. 

The article interviews 3 Nobel prize winners.  


Here is what 't Hooft had to say (from the article): "Personally, I think it has little to do with quantum mechanics,” said Gerard ’t Hooft, a Nobel Prize-winning particle physicist at Utrecht University in the Netherlands. He believes quantum theory is incomplete but dislikes pilot-wave theory.
Wilczek was just as encouraging: "Many working quantum physicists question the value of rebuilding their highly successful Standard Model from scratch. “I think the experiments are very clever and mind-expanding,” said Frank Wilczek, a professor of physics at MIT and a Nobel laureate, “but they take you only a few steps along what would have to be a very long road, going from a hypothetical classical underlying theory to the successful use of quantum mechanics as we know it.”
And finally: "Anthony Leggett, a professor of physics at the University of Illinois, Urbana-Champaign, and a Nobel laureate. “Whether one thinks this is worth a lot of time and effort is a matter of personal taste,” he added. “Personally, I don’t.”

Personally, it's about understanding. 

The puzzling thing about QM is that it is a very successful theory and we have no idea why. We don't know where it comes from.  QM, from a logical standpoint, really is structured as a set of axioms (the postulates of symmetries) and then a lot of recipes on how to make it all work. And it works really well: it is an observed theory, complete with confirmed predictions (the Higgs).  What justifies the choice of symmetries is their result: they work.  Epistemologically, the puzzling bit is that we don't know where those symmetries come from; we simply hypothesize them. 


As pointed out in the article and comments, certain "working" physicists don't care about the 'why' at all. They don't need to, they don't want to. It is a toxic choice.  To them it's philosophy.

Personally I only care about the why.  Let's not forget that understanding the 'why' is a big motivation for becoming a physicist in the first place.  That's why we do it. 

Lack of Interpretation

I always cringe when people talk about the "Copenhagen interpretation".  The 'Copenhagen' is not an interpretation.  It is a formalism, with bits of stories and smiling cat interpretations bolted around it.  

I will always remember Alain Aspect yelling at me as I was a wide-eyed undergrad trying to make sense of QM and his 'spooky action at a distance' experiment. He told me I needed to stop "interpreting" the formalism, that it would lead me nowhere, and that some people were very, very happy NEVER interpreting the formalism.   

In retrospect the reason is rather simple: if you try to 'interpret' the formalism with everyday categories, you quickly start talking about magical stuff such as superposed cats (or electrons) that exist in multiple states at the same time. It also leads to this superposition suddenly disappearing in a mysterious "wave packet collapse" that supposedly happens instantaneously in the EPR class of experiments (see below).  

The 'interpretation' of the formalism wasn't magical, it was rather simply 'not there'. 

Objects for interpretation


What the walker framework does for me is equip me with tools, objects, categories and abstractions with which to think about QM.  All of a sudden I have a picture in my mind, not just a math formalism.  And this picture helps me think clearly; it guides the intuition. Whether it is right or wrong is almost beside the point at this stage.  I can write industrial-grade code on multi-core machines at home (in Java) and explore what the computational models say.

The mental picture and physical intuition that one derives from the walker experiment is indeed 'mind expanding'.  For example it is rather simple to revisit superposition and decoherence in this framework; see here for an entry on the re-interpretation of these notions.  In this approach the Schrodinger cat is always dead: superposition is a mind view, it never existed, and it is replaced by 'chaotic intermittence'. The notions of coherence and decoherence take on a very definite form, in contrast to the magical wave collapse that is supposed to happen upon measurement and observation.  

In this picture the wave collapse is a false category.  It is replaced by a notion of intermittence between states (the particle can self-excite into those states) mediated by the presence of chaotic dynamics in the wave/particle duo.   In short, the particle exists in a sort of 'superposition': there are many states, and we just transition between states with probabilities that remind us of the "transition probabilities" of QM.   The difference with the classical superposition is that our particles are not in those several states *at the same time*.  There is no superposition per se, but rather intermittence between those states at different times. The walkers will self-quantize their orbits into discrete states, just like an electron does, and they will oscillate between those states (the intermittence) just as QM particles are supposed to do, except that QM would have the particle in every state at the same time.  
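
Schematically, in my own notation (not from the article), the 'weight' of a state is just its occupation time over a long run:

```latex
p_i \;=\; \lim_{T \to \infty} \frac{\tau_i(T)}{T}, \qquad \sum_i p_i = 1
```

where τ_i(T) is the total time the walker has spent in orbit i up to time T.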

Reality really exists.



As one commenter on the Simons article said "As stated by Bell’s theorem one has to abandon either locality or reality and it seems that for most physicists (me included) giving up on reality is the favored choice."

But I am honestly not too concerned about comments like this one. It is just going to take time.  Prof. Bush, whose mathematical models provided the starting point for many simulations including ours, put it very well in the article: "The more things we understand and can provide a physical rationale for, the more difficult it will be to defend the ‘quantum mechanics is magic’ perspective.”

The walker framework is already taught by Profs. Anderson and Brady, who use the mental pictures of the walkers to explain things like 'spin'.  I wish I had had those classes as an undergrad instead of the "Copenhagen". 

As a matter of taste, I am firmly convinced that the "Copenhagen interpretation" will fade eventually as a deeper understanding emerges, possibly informed by the walker picture.  It will be seen as not an interpretation at all, but simply a formalism, "The Copenhagen Formalism". 

But there is no magic. 

Ontological Reality

It seems the objects required to describe the formalism (vectors in a Hilbert space) are just mathematical objects, and that's it.  They capture the language of statistical transitions in a matrix. They are abstractions. To assign ontological reality to these abstractions, for example to think of superposition as something 'real', is what leads us to magic in the first place.  What the walker framework says is that indeed the layer at which classical QM is formulated (the state function) cannot be interpreted.  It is a mathematical abstraction.  'Shut up and calculate' was indeed the right frame of mind to have.  Today a lot of people think that the 'real' objects exist at a lower layer of reality than the 'standard model'.  In this particular walker picture, QM would be a sort of emergent *statistical* composite model. 

The QM objects were not the 'real objects' we were looking for.  

QM as emergent objects
Strictly speaking, 'the standard model' needs to be emergent from any candidate underlying framework.  In all generality, whatever model you adopt as your underlying reality has to have the standard model somewhere downstream, as an emergent property of its logical consequences. Whatever emerges from our axioms has to conform with QCD and the Standard Model.  Period. 

So the trick is to show that QM is an emergent phenomenon.   The behaviors that emerge from our models and the experiments are QM analogs, but they are not the 'real thing'.  

Computational chaos


The article mentioned the latest Couder experiment, the one with the elastic force. This setup leads to chaotic paths that show intermittence and self-quantization of the walker's orbits.  It is memory-induced quantization. But it takes either the experiment or computers to 'observe' it. Today we are trying to characterize *how much chaos* we need to create the proper QM-like behavior, as sketched below.   
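
One standard way to make 'how much chaos' quantitative is the largest Lyapunov exponent, estimated by tracking how quickly two nearby trajectories separate. Below is a minimal Java sketch; WalkerSim and its methods are hypothetical stand-ins for a simulator, not the actual dotwaves code.

```java
// Sketch: estimate the largest Lyapunov exponent of a walker simulation
// by measuring how fast two nearby trajectories diverge.
// WalkerSim is a hypothetical interface, not an actual dotwaves class.
interface WalkerSim {
    void step(double dt);          // advance the simulation one step
    double[] state();              // e.g. {x, y, vx, vy}
    void setState(double[] s);     // overwrite the current state
}

final class Lyapunov {
    static double largestExponent(WalkerSim a, WalkerSim b,
                                  double d0, int steps, double dt) {
        double sumLog = 0.0;
        for (int n = 0; n < steps; n++) {
            a.step(dt);
            b.step(dt);
            double[] sa = a.state(), sb = b.state();
            double d = 0.0;
            for (int i = 0; i < sa.length; i++) d += (sa[i] - sb[i]) * (sa[i] - sb[i]);
            d = Math.sqrt(d);
            sumLog += Math.log(d / d0);
            // Renormalize: pull b back to distance d0 from a along the
            // separation vector, so the divergence stays in the linear regime.
            for (int i = 0; i < sa.length; i++)
                sb[i] = sa[i] + (sb[i] - sa[i]) * (d0 / d);
            b.setState(sb);
        }
        // Average log-stretch rate per unit time.
        return sumLog / (steps * dt);
    }
}
```

A positive exponent signals chaos; its magnitude is one candidate measure of 'how much'.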

Historical detour

The article did a great job with the 'history of physics' approach.   I can project myself to that famous conference where the Copenhagen formalism won the day.  I can see a young Louis deBroglie waving his arms about, talking about an inventive but impractical wave/particle model, Bohr simply dropping his magical (and mathematically expedient) matrix formalism, and Einstein choking on his bagel in the front row, muttering that "God doesn't play dice".  In hindsight, it is normal that this formalism would win the day, simply because it was the only one that could predict things correctly. The Copenhagen model was a working model while the particle/wave one was a 'philosophical' model.  Bohr et al. chose a more abstract starting point and sacrificed understanding in favor of 'working physics'.  It was a Faustian bargain (captured somewhere in a play) that I would personally take.  Those in the article who lament the choices made are in turn a little naive.  Only with today's powerful computers and the clever experiments, which were stumbled upon by chance, can we make predictions and build a more fruitful mental picture.  

In any case, it was not only the easy way (mathematically), it was also the only way (no computers) and it turned out to be the right way. What more do you want? An explanation as to why?

Of Philosophy and in praise of 'understanding'


So I will not disagree with the quotes in the article and their famous authors that this line of work, while "mind-expanding", is clearly a philosophical endeavor.  Indeed, it is personally why I am attracted to it.  I glamorize it as 'old school' natural philosophy. Someone's got to do it. 

Those who point out that recreating the standard model is going to take a long time are probably right. Those who point out that it is futile since we already have said standard model are only partially right. 

To me it is obvious that understanding QM in terms of wave/particle composite objects (probably different from these walkers), and going through the rebuilding of the standard model as Leggett advises, will yield new and valuable insights. To me, clearing the air around superposition and decoherence was already reward enough. 

Local causality vs non-local correlations. Einstein-Podolsky-Rosen in walkers


Finally, as pointed out in the comment section, it would be significant if the framework had something to say about the statistics of the Bell inequalities.  It has been one of my starting points in the walker research, for the simple reason that the formalism was getting too abstract and I needed *something* to guide my intuition.  Spooky action at a distance has been bothering me for about 20 years, ever since I studied it under Aspect.  The logical flow would then be to take two walkers, let them interact as they drift apart, and see if there are non-local correlations.  It should be noted that the correlations build up over time, like a memory: they are path dependent, and this is not the all-or-nothing picture that was used for the hidden variable in the Bell theorem derivation.  The variables are not set at birth but rather develop chaotically over time.  Some correlation will build up and show up. 

And to the walkers I say: Godspeed. 

Sunday, June 8, 2014

Walkers vs Surfers: the computational view.

This post is going to be highly technical and will actually speak to about 20 people in the world (if that).  So if you are a casual reader of this blog you should skip this one and instead focus on the more general considerations of the walker study as it relates to the interpretation of Quantum Mechanics.

We are going to review the physics of the walker problem in the context of computational models. This will deal with the in-silico approaches to the problem and shed some light on what we call 'surfers vs walkers'.

It is also meant as a 'checkpoint journal entry' for the participants of the facebook group studying the walkers.

The walker problem

The object of study is the association of a particle and the wave it creates in a medium.  We focus on the silicone-oil walkers as observed by Couder et al. and modeled by Bush et al. The physics is rather straightforward in the Newtonian (force/acceleration) view.  The forces are as follows:

  1. Slope force. Aka "field force". Each time the particle bounces it creates a Bessel standing wave.  These waves sum (interfere) and create a wavefield.  The gradient of the wavefield at the point of impact gives an impulse to the particle when it bounces off the surface (like a ball bouncing off an inclined surface). The important points here are: a/ the memory of the field (loosely defined in the literature as the number of past waves that contribute to the wavefield). It is the main control parameter in our studies.  As the memory increases so does the height of the total wave (you sum many little waves); for example, in our simulation the basic wave has a height of 0.02 Faraday lengths and the sum is about 0.1.  b/ this is a DISCRETE set of waves. 
  2. Viscous force. Every time the particle bounces on the surface it is slowed down. This force is proportional to the speed.
  3. Elastic force. This force was introduced recently by the Couder team in a technical tour de force: they injected the bouncing droplet with ferromagnetic material and submitted it to a central-potential EM field.  While it is described as an EM force, it is akin to an elastic potential and the force is modeled as -kx, the exact form of an elastic force.  The author prefers the 'elastic force' description for various reasons not having to do directly with the QM interpretation.  It is in this elastic potential that the walker system exhibits intermittence and provides the most intriguing insights into coherence/decoherence and the superposition principle. 
These 3 forces are the ones identified in the problem so far.  It should be stressed that the 'historical' walker study focused on 1 and 2. For example the initial Couder/Fort paper balanced the field force with the viscous force to calculate the speed of walkers. The Bush papers dealt with 1 and 2 as well.  The elastic force is a recent addition in experiments. 
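Schematically, in my notation (not any paper's exact form), the Newtonian balance with the discrete wavefield reads:

```latex
m\,\ddot{\mathbf{x}} = \underbrace{-\,C\,\nabla h(\mathbf{x},t)}_{\text{1. slope}}
\;\underbrace{-\,D\,\dot{\mathbf{x}}}_{\text{2. viscous}}
\;\underbrace{-\,k\,\mathbf{x}}_{\text{3. elastic}},
\qquad
h(\mathbf{x},t_n) = A \sum_{j=n-M}^{n} J_0\!\big(k_F \lVert \mathbf{x}-\mathbf{x}_j \rVert\big)
```

where the x_j are the past impact points, J_0 is the Bessel function, k_F the Faraday wavenumber, and M the memory.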

The Bush approximation: analytical solutions

The mathematical description of the hydrodynamics at play in the walker system has been done by Bush et al.  Essentially they make analytical headway by approximating the discrete sum of Bessel waves by an integral.  Long story short, instead of a discrete sum, we hypothesize the intermediate steps and calculate with a step that is shorter than the actual bounce.  This is not physical, in the sense that these intermediate waves do not exist in reality, but it is expedient from a mathematical standpoint.  We are able to compute the trajectory with this 'smooth' approximation by inverting matrices.  
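
Schematically, and in the spirit of the Bush (stroboscopic) model rather than its exact form, the discrete sum above becomes an integral over the continuous path x_p(s):

```latex
h(\mathbf{x},t) \;\approx\; \frac{A}{T_F}\int_{-\infty}^{t}
J_0\!\big(k_F\,\lVert \mathbf{x}-\mathbf{x}_p(s)\rVert\big)\,
e^{-(t-s)/(M\,T_F)}\;ds
```

where T_F is the Faraday period and the exponential factor encodes the memory decay.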

Surfer vs Walker

In the case of the Bush approximation the particle is always informed by the wave. The mental picture is 'the surfer'. The waves are constantly generated and driving the particle, the particle surfs on its waves.

The silicone droplet system is a 'walker' system. The particle just 'bounces' off the surface and only picks up an acceleration at the time of impact.   

The important part here is that surfers assume a continuous set of waves generated along the path, while the walkers deal with a discrete set of waves.  This really impacts the 'field force' only.

Integration routines

The main advantage of the 'surfer' approximation is that, following Bush et al., we can compute an exact solution to the problem by inverting matrices.  This results in a smooth flow and has been implemented by the Dotwaves crowd in Python (by Burak Budanur) and Matlab (by Samuel Bernadet).  The disadvantage of this routine from a computational standpoint is that it is heavy and therefore slow. "Slow" is a relative term in computing; a lot of information is coming from these simulations, and they recover many of the features of the 'real' walkers. The important part from a model standpoint is that all forces (including the field force) are continuously integrated and smoothly changing.

The discrete routine has been implemented in Java (by Marc Fleury) and its main advantage is that it is very fast (by a factor of about 10,000), which allows for quick visual inspection of the behaviors.  The disadvantage is that the integration routine, if done naively, introduces a lot of noise.  To address this noise, and after a lot of prodding, the author has implemented a hybrid walker integration.  The integration routine has been refined as follows (a sketch of the resulting step appears after the list): 
  • Field force is still discrete (there is a discrete set of waves in the real problem) and the integration is done in one step. In the future we may move the force to a 'Runge-Kutta'-like average, but this is not straightforward and would be arbitrary.
  • EM force is continuous. The integration routine has been smoothed out to account for the continuous nature of the EM force (it applies while the particle is in flight) and gives us a much smoother integration. 
  • Viscous force is continuous.  This is debatable, as the viscous force in the real experiment only applies when the particle is in contact with the liquid. It does however increase the smoothness of the integration routine.
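
For concreteness, here is a minimal Java sketch of one such hybrid step. The class names and constants are illustrative (this is not the actual dotwaves code); the wavefield object is assumed to sum the discrete Bessel sources described earlier.

```java
// Sketch of one hybrid integration step (illustrative, not the dotwaves code).
// Discrete: one field kick per bounce. Continuous: elastic and viscous terms.
final class Walker { double x, y, vx, vy; }

interface Wavefield {
    double[] gradientAt(double x, double y); // slope of the summed Bessel waves
    void addSource(double x, double y);      // record a new bounce
}

final class HybridStep {
    static final double MASS = 1.0, C = 1.0, K = 0.1, DRAG = 0.05; // illustrative

    static void step(Walker w, Wavefield field, double dt) {
        // 1. Discrete field force: one impulse per bounce, from the
        //    wavefield gradient at the point of impact.
        double[] grad = field.gradientAt(w.x, w.y);
        w.vx -= C * grad[0] / MASS;
        w.vy -= C * grad[1] / MASS;

        // 2. Continuous elastic force -k x, integrated over the flight time dt.
        w.vx -= (K / MASS) * w.x * dt;
        w.vy -= (K / MASS) * w.y * dt;

        // 3. Continuous viscous damping; the exponential form is smoother
        //    than a naive v -= (DRAG/MASS) * v * dt kick.
        double damp = Math.exp(-(DRAG / MASS) * dt);
        w.vx *= damp;
        w.vy *= damp;

        // 4. Advance the position and register the new impact as a wave source.
        w.x += w.vx * dt;
        w.y += w.vy * dt;
        field.addSource(w.x, w.y);
    }
}
```
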
Physical accuracy

In a nutshell, the main feature of the walker integration in Java is a discrete treatment of the wavefield with continuous EM and viscous forces.  The surfer integration is a continuous treatment of all forces, including the field force.  The discrete treatment is closer to the physical picture (the walkers only create discrete waves, as simple as that).  Speed is a secondary point, as we can run both types of simulations over long periods of time. 

Noise: discrete vs continuous

The a posteriori justification for the surfer approximation, besides the fact that it is mathematically expedient, is that it seems to replicate the observed facts about walkers at least in the straight walker regime (from the published literature) and gives us good results with the elastic force in the dotwaves efforts. 

However we know that discrete sets of waves create a different wavefield than continuous sets of waves, specifically at short distances. Tessellations are important in field theory, continuous and discrete sets giving different results.  It should be pointed out that the walkers orbit at about one Faraday length, meaning they are subjected to the short-range structure of the field.  The presence of a 'lattice' (discrete points), as opposed to a continuous source of waves, gives very different wavefields in general.  

In other words, the surfer approximation introduces noise of its own into the computational simulations (mathematical in nature) that is not present in the physical system or the walker implementation. 

Surfers vs Walkers: the 3D SURFER case and QM analog 

The interest in these systems comes from the fact that they offer a compelling mental picture with which to understand the (obscure) formalism of QM.  As detailed in the blog referenced above it is the emergence of intermittences, characteristic of chaotic systems, that gives us the transition probabilities between states and a clear image of what 'superposition' means.  

However this walker system is inherently 2D.  Because we are in 2D we have a notion of "steps", and the walker image can only exist in 2D: we bounce in the vertical direction and create discrete waves on the 2D surface.  In 3D, the particle would ALWAYS be in contact with the medium and would generate waves continuously.  In 3D we cannot escape to an extra dimension to create only discrete waves. 

The 3D case can only be a SURFER case. 


Sunday, June 1, 2014

Superposition, Decoherence, Schrodinger's Cat and other magical lies my professors told me.

For any student of Quantum Mechanics (QM), the interpretation of the QM formalism is at first a puzzling proposal.  Simply put, it is 'counter-intuitive', and most people "shut up and calculate", essentially bowing to the myth that "QM is very weird".

How can things be in 'several states' at the same time? QM matter must be of a different, slightly magical, nature.  It is perhaps best exemplified by the paradox of the Schrodinger cat that is both dead and alive, supposedly at the same time.

In this post we will apply the formalism of walkers, an emergent model of QM dynamics that comes about from the association of a particle and a wave, and show how it sheds new light on the fundamental interpretation of QM. In this interpretation, the cat is simply always dead.

We will also use this formalism to shed light on the typically QM notion of coherence and decoherence.

We study the walkers by simulation, and our little group operates on Facebook.


A brief overview of Walkers. 



Walkers are the association of a particle bouncing on a liquid silicone surface and the waves this bouncing creates. Every time the particle bounces on the surface it creates a standing wave (modeled by a Bessel function in 2D) which in turn drives the particle.  The sum of the history of bouncing informs the future bouncing.  When the particle bounces it picks up an acceleration due to the gradient of the local surface (like a surfer on a wave). This dynamical system, the association of the particle and its wave, exhibits interesting self-quantized dynamics.

A strict interpretation of the deBroglie wave/particle duality.

Louis deBroglie, one of the fathers of QM, postulated wave/particle duality.  Observing that sometimes matter behaves as a wave and sometimes as a particle, he postulated, in his PhD thesis, a relation between the momentum of a particle and its 'deBroglie' wavelength. This earned him the Nobel prize in 1929. The walkers are a strict implementation of this idea, as we have both a particle and its adjoined wave.
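
For reference, the relations in question are:

```latex
\lambda_{\mathrm{dB}} = \frac{h}{p}, \qquad E = h\nu
```

relating a particle's momentum p to its wavelength, and its energy E to a frequency ν.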

Surfers vs Walkers

This point is a little technical but is worth mentioning.  The walkers 'bounce'.  Each bounce creates a wave.  This is a 'discrete' phenomenon as opposed to a 'continuous' one: we sum a finite number of waves.  A continuous phenomenon, requiring integration, would be a 'surfer'.  It is worth mentioning because some of the current formalisms for the walkers (most notably the Bush formalism we use in this research) are really 'surfer' formalisms, assuming continuous wave creation along the path. The surfer creates the waves it surfs on.

Self-excitation, Self quantization. QM behavior

The dynamics of this system is interesting in its path.  The particle will self-excite and start 'walking', as seen in the video.  More importantly for the QM study, some of the observed behavior mimics QM.  For example, if the particle is submitted to an elastic potential (by way of an EM field) it starts orbiting and showing a 'quantized' (discrete) set of possible orbits.  The path cannot be "anything": it is quantized by its own history; the path creates the path.  The history quantizes the future (by offering a discrete set of possibilities).

Chaotic Dynamics



To understand the solution, pay close attention to the video above. It is a run captured in Matlab by Heligone of dotwave.org. The point is that the orbit goes from trefoils to ovals and (not seen here) sometimes circles.  These are the quantized orbits.  The change between these orbits is called 'intermittence' and is a characteristic signature of chaotic dynamics.  The dynamics will randomly change between these stable, discrete orbits.

Superposition revisited as intermittence

If one observes these objects over long periods of time, one can compute the 'time' the particle spends in each orbit.  With this mental picture we can revisit QM.  It is a different interpretation from QM's.  In QM, the particle is in 'several states' AT ONCE, meaning it would be in the oval and the trefoil at the same time. This is difficult to picture for macro objects such as the cat. In this model, the particle oscillates between those states over time, but NOT AT THE SAME TIME.  There is no superposition, just intermittence: an oscillation between the states with certain transition probabilities.

Weights as probabilities. 

If one computes the times, one can arrive at the probability of observing the particle in a particular state. These are transition probabilities.  This formalism demystifies the Hilbert space formalism that says the particle is in all the states with various probabilities.  Here there is a logical and simple interpretation for those 'weights': they are the probabilities of finding the particle in the particular quantized states available to the dynamics.  It is simply the time it spends in each quantized orbit.
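
Here is a minimal Java sketch of this bookkeeping, assuming the simulation already classifies each sample into a discrete orbit label (the classifier itself is the hard part and is not shown; the labels below are made up):

```java
import java.util.Arrays;

// Sketch: turn a time series of orbit labels into occupation weights
// and empirical transition probabilities between the quantized states.
public final class IntermittenceStats {
    public static void main(String[] args) {
        // Hypothetical labels from a long run: 0 = circle, 1 = oval, 2 = trefoil.
        int[] states = {1, 1, 1, 2, 2, 1, 1, 0, 0, 0, 1, 2, 2, 2, 1, 1};
        int nStates = 3;

        double[] time = new double[nStates];             // samples per state
        double[][] trans = new double[nStates][nStates]; // state-to-state counts
        for (int i = 0; i < states.length; i++) {
            time[states[i]]++;
            if (i > 0) trans[states[i - 1]][states[i]]++;
        }
        for (int s = 0; s < nStates; s++) {
            // Occupation weight: the fraction of time spent in state s.
            System.out.printf("P(state %d) = %.3f%n", s, time[s] / states.length);
            double row = Arrays.stream(trans[s]).sum();
            for (int t = 0; t < nStates; t++)
                if (row > 0) System.out.printf("  P(%d -> %d) = %.3f%n",
                                               s, t, trans[s][t] / row);
        }
    }
}
```

The occupation weights play the role of the Hilbert-space 'weights'; the transition matrix is the intermittence statistics.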

Schrodinger's cat is always dead

Let's revisit the Schrodinger cat paradox in this picture.   The idea that a cat would be dead and alive has confused generations of students, and while it makes for good magical mythology it is just false logic in this picture.  Worse, in the standard telling it is the act of observing that kills the cat.  There are several problems with the paradox: a/ we are not in 2 states at the same time.  In this model we are either in orbit A or B, but not in both at once; we oscillate between those states at different times. We are only in ONE state at a time (even if changing rapidly). b/ the categories dead/alive are different from intermitting between orbits A and B. Simply put, we cannot 'intermit' between dead and alive like we can jump back and forth from orbit A to B and vice versa: once you reach "dead", you stay dead.  So essentially the particle would eventually transition to the 'dead' state for the cat, and once it has done so, the cat IS DEAD. End of story.

Observation

In this picture the idea that observation is what killed the cat, which is the usual interpretation, is laughable.  You didn't magically kill the cat with your thoughts; the cat died because the particle decayed. End of story.

But what about decoherence and the measurement problem?

This is technical, but it is the heart of the issue at hand.  In QM, the act of observation is what creates the 'wave collapse' and 'decoherence': after the act of observation the system is classical, not quantum.

There is a big difference in the walker system: essentially, in the case of walkers, the observation of the particle (here in silicone and in silico) DOES NOT DESTROY THE FIELD.  This is the important part. If the act of measuring interferes with the field (uses tools that are on the order of magnitude of the deBroglie wavelength), which is the case for 'classical QM' (remember that observation at the slits in the Young double-slit experiment is enough to destroy the QM result), then one loses the chaotic dynamics induced by the field. Observing a QM dynamic induced by a 'field' destroys the field. So the QM dynamics stop the moment we 'observe' them.  The measurement problem is at the heart of our problems with QM.

The notion of 'coherence' and 'decoherence', which is usually magically associated with the existence of 'superposition' and its disappearance (the mythical wave collapse), is very simply the existence of intermittence due to chaotic dynamics.  "Coherence" in the walker/surfer picture is the existence of the association of the particle and the field, and the resulting chaotic/intermittent dynamics.  Destroying the field destroys that dynamic: the dynamics become classical and cease to be quantum. Observing a QM system destroys its QM dynamics if one destroys the field. Decoherence is the destruction of the wavefield self-generated by the particle bouncing about, and thus the end of the QM dynamics. QM dynamics are chaotic dynamics brought about by the memory of the past via a wavefield that is easy to destroy.

And that is all there is to it.

A brief historical philosophy consideration

In conclusion, the latest Couder work simply explains many of the QM behaviors, including the magical coherence/decoherence, and seems very simple in retrospect. Why wasn't this 'particle/wave' duality taken to its logical conclusion by the founding fathers of QM? One has to project back to the Solvay conference in 1927, where deBroglie proposed these ideas and was shot down by the Copenhagen school. It is not hard to see why.

To make headway with these models (which we do in the 21st century) one needs the formalism of chaotic dynamics (which was developed in the 80's and 90's) and powerful computers to run the integration, as there is little headway to be had analytically.  In short, while philosophically satisfying, the wave/particle duality interpretation is difficult from a practical standpoint.  Against this model there was the straightforward (if magical) interpretation of the Copenhagen school: namely a vectorial superposition of states in an abstract 'Hilbert space' that could give practical results in calculation.  The 'shut up and calculate' mentality was justified, as it gave us the atom bomb, most of the technological advances of the late 20th century (think lasers, computer spin storage, etc.) and the CERN Higgs. The practical nature of the formalism was justification enough. The interpretation was secondary.

There is no magic.

Tuesday, December 10, 2013

The resurgence of Hidden Variables.

No hidden variables, EPR. 
I studied quantum mechanics (QM) under the tutelage of A. Aspect.  In the 60's, while at CERN, Bell had shown that so-called 'hidden variable' models of QM (the kind championed by Einstein) and proper QM models gave different predictions.  Aspect's experiment in the 80's ruled in favor of QM. Hidden variable models were a no-go. 30 years later the debate still isn't settled. 

30 years later a hidden variable analog emerges.
 The 2006 Walkers of Couder (ENS Paris) were profiled in 'Through the Wormhole'.  They have recently been re-filmed and precisely mathematically modeled by John Bush at MIT.  See the accompanying video. I have started modeling these 'walkers' in Java software code. 

This is a silicone-oil bath (viscous) with damped standing waves excited at the Faraday frequency. The combination of standing waves in confined spaces gives a phase where the particle and the wave are in sync: the particle creates the wave, the wave guides the particle. The resulting system is unstable and starts walking in specific phases close to the Faraday instability.  The particles are literally "walking on water".

The deBroglie promise, 1927
This has also been attracting attention because it is an analog of a deBroglie-Bohm system.   Taking the wave-particle duality idea completely literally, deBroglie thought about the problem in terms of a wave at the deBroglie wavelength oscillating at the same frequency, "as if a clock". Why this singularity is there in the first place is merely mentioned by deBroglie: it is 'the matter'; today we would think of it as a 'soliton'. DeBroglie however studies the wave that envelops the particle.  It guides the particle. 

DeBroglie (pronounced 'duh-bro-eee') presented the basic idea in 1927 at the Solvay conference.  There he got shut down by Bohr and Heisenberg, who were developing their own interpretation of Schrodinger's wave mechanics, and the Copenhagen interpretation won the day.  Compared to deBroglie's, the axiomatic presentation was by far the simpler: it postulated the randomness instead of trying to recreate it, and it postulated a vector space which allowed for comparatively simple calculations. 

In retrospect deBroglie's proposal is complex, as it involves non-linear math, and they had no computers to explore it.  Towards the end of his life, deBroglie admitted that randomness needed to be re-introduced manually into his system anyway.  In 1957 deBroglie put out an analytical study of 'la double solution' (the double solution) and 'théorie de la mesure' (measurement theory). They are fantastic reads.  


God does play dice. 
Bell showed in the 60's that 'hidden variable' systems would obey his famous inequalities. However the walker system falls under the 'stochastic (hidden) variable' category, not the same hidden variables Bell uses.  Couder, when talking at the Perimeter Institute, vaguely invokes path memory in the Q&A session to answer the question of Bell.   

It prompted me to go to the source and read Bell's "Speakable and Unspeakable in Quantum Mechanics".  Bell was a convincing deBroglie-Bohm evangelist while at CERN.  I reread the proof he came up with and the newer variants.  I have tried to understand this idea of path memory.  I approach the problem computationally in Java.  The variables one uses here have chaotic patches. 

The computational image.
The picture that comes into focus from the computational angle is one of a stochastic dynamical system. Like a pachinko machine, the path a given particle takes is in fact random within a bigger order. Chance and randomness emerge from the dynamics, just like in QM models and unlike "deterministic hidden variables".  These are 'random hidden variables'. 

The path is everything.
Seen as a computational process, at each step the bouncing droplet is accelerated by the slope it lands on; that is the 2D picture. It already creates complexity.  The wave at each point (you can generalize this to 3D) is the sum of all waves emitted earlier from a collection of points and reaching that concrete point.  Think reverb, for musicians.  All the influences from the past sum up at each point of the field, and that is the local information that accelerates the particle.  If it oscillates "like a clock" (in the words of deBroglie) this feedback gives rise to very complex dynamics, even in 2D.  It includes "echo-location" information about all the sources, including waves bouncing off a boundary.  Path memory is an encoding of the geometry. The greater geometry reappears in the resonant modes of the cavity, something that drives the expectations via standing waves.  How standing waves appear, a central construct of deBroglie, is here seen as emergent and due to geometrical constraints.
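
As a sketch of that 'reverb' sum in Java (my toy version; the real kernel also carries temporal decay and the bounce phase):

```java
// Sketch: wavefield height at (x, y) as the sum over past impact points.
// Each bounce contributes a Bessel standing wave; older waves are damped.
final class Reverb {
    static double height(double x, double y, double[][] impacts,
                         double amp, double kF, double decay) {
        double h = 0.0;
        for (int j = 0; j < impacts.length; j++) {
            double r = Math.hypot(x - impacts[j][0], y - impacts[j][1]);
            int age = impacts.length - 1 - j;      // 0 = most recent bounce
            h += amp * besselJ0(kF * r) * Math.exp(-decay * age);
        }
        return h;
    }

    // Power series for J0; fine for small-to-moderate arguments in a sketch,
    // use a real special-function library for production work.
    static double besselJ0(double t) {
        double term = 1.0, sum = 1.0;
        for (int m = 1; m < 40; m++) {
            term *= -(t * t) / (4.0 * m * m); // a_m = -a_{m-1} * t^2 / (4 m^2)
            sum += term;
        }
        return sum;
    }
}
```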

Elements of stochasticity. 
Computed wavefield in Java simulation
Standing waves and uniform velocity.

Here pictured is a single "water walker" (based on the Couder/Fort formula) with constant speed.  This is the equivalent of a "HelloWorld" for this class of problems; it took me about a week's worth of work, so the data may just be all garbage.  But most areas are predictable, and in the middle randomness occurs.  These are interfering standing waves resulting from a periodic walk on a surface.  

Bell's Houdini: Stochasticity
The walkers escape Bell's construct. Bell assumes a distribution of the hidden variable that is set at the time of preparation of the entangled pair and doesn't vary after.  It is not time dependent. The randomness is all in the preparation.  Here the randomness is continuously injected into the system along the path. 

Emergence of random walk.
In this picture, the progress is stochastic, each step like a pachinko where randomness is introduced over the length of the path. When the wavefield gets too chaotic, as in some spots in the middle of the picture above, randomness is introduced. In the Bush video, for regimes close to the Faraday instability, the surface washes over these details, since the oscillations are too frequent to have a physical meaning, and you see the emergence of a random walk around the Faraday instability. The frequency of oscillation in the simulations suggests a measure of 'how much randomness'.
The path is randomized at the slit in the Couder experiments or those seen here. 

Emergent QM
Each particle will take a different path that will seem random, but it will be guided by a wavefield that conforms to the geometry of the confining space.  In the Bush video, during the corral film, the parts where the particle is slowest are the parts where the particle will spend more time.  The probability of finding it there is proportional to the time it spends there. It maps to a probability density in QM.  This is also a prescription for recapturing the probability of presence computationally. To the right is a capture of a walker walking back from the slit.  This effect is decidedly not QM. Most interesting. Captured by Heligone (dotwave.org). 

Wednesday, November 20, 2013

General Relativity as extrinsic curvature and other lies my professors told me


This post is about how General Relativity (GR) is explained to the masses, specifically how one should picture curvature. Most popular science accounts use the 'rubber sheet' analogy.


Extrinsic curvature, curvature by embedding 2D in 3D. 
Extrinsic curvature or curvature by embedding

Consider the picture of the Earth rotating around the Sun. This is the classic picture most 'Scientific American'-type articles will throw your way to explain what curvature is.  It is the bending of a 2D surface in 3D. If you take a 2D rubber sheet and put some mass on it, it will deform, and a particle will orbit around the mass.  This is a good picture in the sense that it is based on classic visual 3D intuition and reproduces the correct result for 2D sheets that deform.  But try generalizing it to 3D.

GR as extrinsic curvature by embedding 3D in 4D? 

A finer problem with this image is that it seems to imply that you should abstractly extend this construction from 3D to 4D: the curvature is extrinsic, coming from the bending of 3D in a higher (4D?) space.  You lose the visual guidance; humans simply cannot visualize in 4D (except Hawking). The math can guide you, though. Another point is rather ontological: if you need a 4th dimension to create the curvature by embedding, considering only extrinsic curvature, isn't that proof that a 4th dimension of space exists?

Intrinsic 3D. 
Intrinsic curvature

We just looked at extrinsic curvature, and by definition of 'ex' it needs an extra dimension.  But there is also 'intrinsic' curvature, which lives purely in 3D.  Can we use it to construct gravity? The picture shows how flat space, the Cartesian grid, gets deformed by matter. Send a photon by the Earth and it will follow geodesics (the lines); it will be going 'straight' in that curved space. This does not depend on a hypothetical 4th dimension of space.  Why isn't this picture, that of intrinsic curvature, used more among practitioners to explain gravity?

The problem with smooth 3D intrinsic curvature: the Kaluza-Klein example

There is something missing in the intrinsic construction.   GR is modeled by Riemannian geometry, and smooth 3D deformations of space cannot give the proper mathematics: we cannot create the proper GR curvature with the sole assumption of smooth fields in 3D.  In order to obtain a proper unification, Kaluza and Klein in the 1920s had to hypothesize an extra dimension.  There, with smooth fields, they could recreate GR and incorporate EM.

Non-smooth deformations, non commutative algebra

However non-smooth deformations of space in 3D do arrive at the proper curvature.   The mathematics of non-smooth deformations (non-holonomic theories) was explored during the 80s in solid state physics.  The non-commutative algebra that results is fancy mathematics.  The main result, however, is that the presence of certain defects leads to a proper Riemann curvature. The proper Riemann curvature arises in 3D if one considers singular deformations of space, as if brought about by defects of a certain particular shape.

The ontological reality of defects as curvature.


We observe gravity and we model it with curvature (GR). Here are some choices for assigning an element of reality to this elastic metric of space (Einstein's and MTW's words).  It can either A/ come from embedding in higher dimensions, where curvature is the result of smooth deformations, or B/ come from defects in 3 dimensions, an intrinsic deformation with no further appeal to extra dimensions. Einstein was looking for answers within smooth fields (A); the math for non-commutative algebras arose much later, within the standard model and solid state communities. The non-commutative algebra turns out to be relevant to both GR and the standard model.

Where the defects come about is a story for another day.

Feynman on the use of imaginary numbers

For anyone having done physics as a major, the use of imaginary numbers (i^2 = -1) is as natural as breathing air.  In the 'shut up and calculate' sense, imaginary numbers are easy to work with, but only a fool would stop and ask "why are we using imaginary numbers in the first place?". That part can be mysterious. Why would numbers that have no reality (no real number multiplied by itself is negative) find their way into physics? Like most 'magic mysteries' of physics this one is hidden in plain sight. Most folks do not ever question the use of complex numbers in quantum theory.  Ask someone who knows a little and they will huff and puff with 'of courses'; ask someone who knows a lot and some of them will pause, and many will say "I don't know".

Enter Feynman: imaginary exponents
Whenever I want to get to deeper and more natural insights, I turn to Feynman. Feynman has a characteristic treatment of the imaginary exponents in his books (Lectures: chapter 22).  Feynman carefully re-derives what he calls "the most remarkable formula in mathematics"

(22.9)   e^(it) = cos(t) + i sin(t)

The gripes
The treatment is thorough and bears the aura of 'naturalness', but it really involves two sleights of hand that the untrained eye will miss (I missed them on first read):
a/ the derivation starts with a clumsy differential definition of the exponential; after 22.6 he just parachutes in the linear development (10^t ≈ 1 + 2.3025·t for small t). b/ between 22.5 and 22.6 there is a misdirection where he simply writes that the conjugate of e^(it) is e^(-it). Both statements are simply parachuted in.

Just a phase
Just as characteristically, Feynman shines in a few spots.  For example he develops his exposé using 10 as the power base for imaginary exponents, not the usual e.  This underscores the arbitrary nature of the base.  We have become so accustomed to using e that we never really question why it is there: why e, why not pi?  He shows that whatever you use as a power base, say P, P^(it) × P^(-it) = P^(0) = 1, so that the norm of a complex exponential is always 1.

So the complex exponentials all lie on the complex circle of norm one, no matter what the base. Using P or using e is just a rescaling of the t factor, since if e = P^s then P^(it) = (P^s)^(it/s) = e^(it/s) by simple algebra of exponents.  The exponent can be identified with a phase on the unit complex circle, and when we are in base e the phase length is 2·pi, yielding an immediate mapping between the phase and the arc length of a circle of radius 1.
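
Cleaned up, the two facts being used are:

```latex
P^{it}\,P^{-it} = P^{0} = 1 \;\Rightarrow\; \lvert P^{it}\rvert = 1,
\qquad
e = P^{s} \;\Rightarrow\; P^{it} = \big(P^{s}\big)^{it/s} = e^{\,it/s}
```

so changing the base only rescales the phase, never the (unit) norm.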

Differential equations
The purely algebraic treatment of the properties of imaginary exponentials is followed by the chapter on resonance, which is really the application of said numbers to differential equations. Feynman clearly says one should always take the real part of the equations and that the imaginary part is nonsensical.  He insists on the fact that this is possible only with linear equations, so that we are never multiplying imaginary numbers together and making them real, thereby mixing them (23-1).

The intellectual honesty of Feynman is remarkable; no other textbook or teacher I have ever had goes to such lengths to establish the validity of the basic constructs. In the case of linear differential equations, the solving can be done by inspection, replacing differentiation by an iω factor. The formula for resonance just pops out. The use of imaginary numbers finds a natural justification as a powerful tool for solving basic differential equations.

More gripes re: QM
Feynman is rigorous up to that level. In this sense imaginary numbers are a mathematical trick to deal with linear differential equations.  Yet in Quantum Mechanics (QM), which he himself contributed so much to via Quantum Electro-Dynamics (QED), one takes the squared modulus of the amplitude, mixing real and imaginary parts, in order to find probabilities; and non-linear equations are the cat's meow in solid state physics anyway, where we have powers of imaginary numbers, again mixing real and imaginary components. As a matter of fact, interference is modeled by this mixing. The use of imaginary numbers reverts to axiomatic status in QM (a hypothesis), but epistemologically, the fact that it describes nature so precisely points to these imaginary numbers being a representation of something 'real'. What that 'something' is (a phase, a twistor, whatever) is never really explained in most textbooks. Practitioners seldom question their practices.

Thermodynamics and grand ensemble statistics
More importantly, and far from textbook land, one of the open frontiers of QM is why the path integral formalism looks so much like that of thermodynamics (an average over large ensembles) but with (i×t) replaced by 1/T, where t is time and T is temperature.  One treatment is dynamic (time); the other is static and involves a classical temperature.  I asked that very question once of a Nobel-worthy professor; he smiled and said "just replace it by 1/T and you are done". I pointed out it wasn't physical, that the dimensions didn't even match. When I repeated the same question 6 months later and claimed I could just replace it by 1/T, he said "aaah, but you can't do that, you need to explain why there is an i there, that is the mystery"... it made me smile...

This is the reason I pay so much attention to the presence and justifications for the presence of i in these equations. I have no doubt they should be there; after all, the formalism works.  But if QM is indeed a statistical ensemble (non-deterministic) of things existing at the Planck level, then something brings about the 'i' behavior: things that twist in a funky way, things that have a phase. I remind myself that Planck constructs exist at the 10^-35 m length scale and that the standard model operates at 10^-15 m; there are 20 orders of magnitude in between. In 3D that is a volume of 10^60 Planck volumes.  In other words, an electron is hardly a particle but seems to contain room for 10^60 things. There are more Planck-scale things in an electron than electrons on Earth; it would be surprising if the electron were not a statistical beast.  The standing of i in our statistical ensembles is then epistemologically important.

Tuesday, November 19, 2013

Wilczek Time Crystals

When Frank Wilczek releases something, the physics community usually pays attention.  His latest paper on time crystals led to a bit of controversy.  I recently attended a talk given at Georgia Tech by Al Shapere, the co-author.

What are time crystals? 
The paper is mathematical in nature.  Shapere considers Lagrangians that are quartic functions of a phase velocity.  Crucially, the term usually associated with kinetic energy (the square one) is negative. This 4th-order potential leads to the swallowtail catastrophe of Thom.  As a catastrophe it can make claims of generality and structural stability.  Furthermore, the swallowtail shape gives us several values that minimize the Lagrangian; it is said to be 'multivalued'.  This multi-valuedness in the potential means there are several states in the ground state.  The system can oscillate between those states (since they have the same energy) and the ground state is thus dynamic and periodic. The periodicity is a bit of a sleight of hand, in the sense that Shapere waves his hands saying "if phi is a phase, the linear dependency in time makes it a periodic phenomenon".
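
Schematically, the quartic form in question looks like this (my hedged rendering, not the paper's exact normalization):

```latex
L(\dot\varphi) = \frac{\lambda}{4}\,\dot\varphi^{4} - \frac{\kappa}{2}\,\dot\varphi^{2},
\qquad
E = \dot\varphi\,\frac{\partial L}{\partial \dot\varphi} - L
  = \frac{3\lambda}{4}\,\dot\varphi^{4} - \frac{\kappa}{2}\,\dot\varphi^{2}
```

Minimizing E gives φ̇² = κ/(3λ) ≠ 0: the lowest-energy state is in motion. And because the momentum ∂L/∂φ̇ = λφ̇³ − κφ̇ is non-monotonic in φ̇, the Hamiltonian is multivalued, which is the multi-valuedness mentioned above.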

The Bruno controversy
Shapere was asked about the controversy around his paper.  Bruno, a French mathematician, has raised the objection that the negative square term doesn't exist; he considers instead a normal (positive) kinetic energy and claims to prove that time crystals are in fact impossible. Shapere acknowledged the paper but claims he is not looking at kinetic energy in the classic sense, simply at Lagrangian terms expressed as the square of some phase velocity.

Cold atoms link
To emphasize the point during the presentation, Shapere established a link to cold atoms (where Georgia Tech is a leading expert).  There, in some phases, as you expand the system you apparently start seeing potentials like the one considered in the paper. I am not an expert, but it seemed plausible.

The swallow tail catastrophe
The point of the René Thom catastrophes is that they are generic and structurally stable, meaning they will appear in many systems, as surely as the caustics at the bottom of a pool (which are the first- and second-order catastrophes). The swallowtail is such a catastrophe; it is likely to appear. That these forms would appear in nature seems rather sound, and the link to cold atoms a welcome one.

I believe.