Tuesday, December 10, 2013

The resurgence of Hidden Variables.

No hidden variables, EPR. 
I studied quantum mechanics (QM) under the tutelage of A. Aspect.  In 1964, while at CERN, Bell showed that so-called 'hidden variable' models of QM (the kind championed by Einstein) and proper QM models gave different predictions.  Aspect's experiments in the early 80's ruled in favor of QM. Hidden variable models were a no-go. 30 years later the debate still isn't settled.

30 years later a hidden variable analog emerges.
 The 2006 walkers of Couder (ENS Paris) were profiled in 'Through the Wormhole'. They have recently been re-filmed and precisely modeled mathematically by John Bush at MIT.  See the accompanying video. I have started modeling these 'walkers' in Java.

This is a bath of viscous silicone oil with damped standing waves excited at the Faraday frequency. The combination of standing waves in a confined space produces a regime where the particle and the wave are in sync. The particle creates the wave, the wave guides the particle. The resulting system is unstable and starts walking in specific regimes close to the Faraday instability.  The droplets are literally "walking on water".

The deBroglie promise, 1927
This has also been attracting attention because it is an analog of a deBroglie-Bohm system.   Taking the wave-particle duality idea completely literally, deBroglie approached the problem by positing a wave at the deBroglie wavelength, oscillating at the same frequency as the particle, "as if a clock". Why this singularity is there in the first place deBroglie only mentions: it is 'the matter'; today we would think of it as a soliton. DeBroglie instead studies the wave that envelops the particle.  It guides the particle.

DeBroglie (pronounced 'duh-BROY') presented the basic idea in 1927 at the Solvay conference.  There he got shut down by Bohr and Heisenberg, who were developing their own reading of Schrodinger's wave mechanics, and the Copenhagen interpretation won the day.  Compared to deBroglie's, the axiomatic presentation was by far the simpler. It postulated the randomness, instead of trying to recreate it, and it postulated a vector space which allowed for comparatively simple calculations.

In retrospect deBroglie's proposal is complex, as it involves non-linear mathematics, and they had no computers to explore it.  Towards the end of his life, deBroglie admitted that randomness needed to be re-introduced manually into his system anyway.  In 1957 deBroglie put out an analytical study of 'la double solution' (the double solution) and 'théorie de la mesure' (theory of measurement). They are fantastic reads.


God does play dice. 
Bell showed in 1964 that 'hidden variable' systems would obey his famous inequalities. However the walker system falls under the 'stochastic (hidden) variable' category, not the kind of hidden variables Bell uses.  Couder, in the Q&A session of his Perimeter Institute talk, vaguely invokes path memory to answer the Bell question.

It prompted me to go to the source and read Bell's "Speakable and Unspeakable in Quantum Mechanics".  Bell was a convincing deBroglie-Bohm evangelist while at CERN.  I reread the proof he came up with and the newer variants, and I have tried to understand this idea of path memory.  I approach the problem computationally, in Java. The dynamical variables one uses here have chaotic patches.

The computational image.
The picture that comes into focus from the computational angle is one of a stochastic dynamical system. Like a pachinko machine, the path a given particle takes is in fact random within a bigger order. Chance and randomness emerge from the dynamics, just like in QM models and unlike "deterministic hidden variables".  These are 'random hidden variables'.

The path is everything.
Seen as a computational process, at each step the bouncing droplet is accelerated by the slope of the surface it sits on; that is the 2D picture, and it already creates complexity.  The wave at each point (you can generalize this to 3D) is the sum of all waves emitted earlier from a collection of points and reaching that concrete point.  Think reverb, for musicians.  All the influences from the past sum up at each point of the field, and that is the local information that accelerates the particle.  If it oscillates "like a clock" (in the words of deBroglie), this feedback gives rise to very complex dynamics even in 2D.  It includes "echo-location" information about all the sources, including those bouncing off a surface.  Path memory is an encoding of the geometry. The greater geometry reappears in the resonant modes of the cavity, which drive the statistics via standing waves.  How standing waves appear, a central construct for deBroglie, is here seen as emergent, due to geometrical constraints.
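For concreteness, here is a minimal Java sketch of that feedback loop, close in spirit to what my simulation does: each bounce leaves a damped circular wave behind, and the local slope of the superposed field accelerates the drop. All the constants (memory, coupling, friction, wavenumber) are illustrative placeholders, not the Couder/Fort values.

import java.util.ArrayList;
import java.util.List;

// Minimal path-memory walker in 2D. One step = one bounce at the
// Faraday period. All constants are placeholders, not fitted values.
public class Walker {
    static final double K = 1.0;         // Faraday wavenumber (placeholder)
    static final double MEMORY = 0.98;   // per-bounce fading of old sources
    static final double COUPLING = 0.05; // slope-to-acceleration factor
    static final double FRICTION = 0.10; // drag between bounces

    double x = 0, y = 0, vx = 0.1, vy = 0;
    final List<double[]> sources = new ArrayList<>(); // {x, y, amplitude}

    // Field height at (px, py): the sum of all waves emitted earlier,
    // i.e. the "reverb" of the whole path reaching that point.
    double height(double px, double py) {
        double h = 0;
        for (double[] s : sources) {
            double r = Math.hypot(px - s[0], py - s[1]);
            h += s[2] * Math.cos(K * r) / (1 + r); // crude radial decay
        }
        return h;
    }

    void step() {
        double eps = 1e-3; // finite-difference slope of the field
        double gx = (height(x + eps, y) - height(x - eps, y)) / (2 * eps);
        double gy = (height(x, y + eps) - height(x, y - eps)) / (2 * eps);
        vx += -COUPLING * gx - FRICTION * vx; // pushed downhill, damped
        vy += -COUPLING * gy - FRICTION * vy;
        x += vx;
        y += vy;
        for (double[] s : sources) s[2] *= MEMORY; // path memory fades
        sources.add(new double[] { x, y, 1.0 });   // this bounce is a new source
    }

    public static void main(String[] args) {
        Walker w = new Walker();
        for (int i = 0; i < 500; i++) {
            w.step();
            System.out.printf("%.4f %.4f%n", w.x, w.y);
        }
    }
}

Delete the sources list and you are back to a free particle; the entire pilot-wave character lives in that path memory.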

Elements of stochasticity. 
Computed wavefield in the Java simulation: standing waves and uniform velocity.

Pictured here is a single "water walker" (based on the Couder/Fort formula) moving at constant speed. This is the "Hello World" of this class of problems, and it took me about a week's worth of work, so the data may yet be all garbage.  But most areas are predictable, and in the middle randomness occurs. These are interfering standing waves resulting from a periodic walk across the surface.

Bell's Houdini: Stochasticity
The walkers escape Bell's construct. Bell assumes a distribution of the hidden variable that is set at the time of preparation of the entangled pair and doesn't vary afterwards.  It is not time dependent; the randomness is all in the preparation.  Here, randomness is continuously injected into the system along the path.
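For reference, the assumption in question, in the standard notation: the correlations factorize over a distribution rho(lambda) fixed once and for all at the source,

E(a,b) = \int d\lambda \,\rho(\lambda)\, A(a,\lambda)\, B(b,\lambda),
\qquad \rho \ \text{independent of } a, b \ \text{and of time},

from which the CHSH bound |E(a,b) + E(a,b') + E(a',b) - E(a',b')| <= 2 follows. A lambda that keeps being re-drawn along the path simply sits outside this ansatz.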

Emergence of random walk.
In this picture, the progress is stochastic, each step like a pachinko machine, with randomness introduced over the length of the path wherever the wavefield gets too chaotic, as in some spots in the middle of the picture above. In the Bush video, for regimes close to the Faraday instability, the surface washes over these details, since the oscillations are too frequent to have a physical meaning, and you see the emergence of a random walk around the Faraday instability. The frequency of these oscillations in the simulations suggests a measure of 'how much random'.
  The path is randomized at the slit in the Couder experiments, or continuously, as seen here.

Emergent QM
Each particle will take a different path that will seem random, but it will be guided by a wavefield that conforms to the geometry of the confining space.  In the Bush video, during the corral sequence, the parts where the particle is slowest are the parts where it spends the most time.  The probability of finding it there is proportional to the time it spends there; this maps to a probability density in QM.  It is also a prescription for recapturing the probability of presence computationally. To the right is a capture of a walker walking back from the slit.     This effect is decidedly not QM. Most interesting. Captured by Heligone (dot/wave).
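That computational prescription is just a dwell-time histogram. A sketch, reusing the hypothetical Walker class from above (the names are mine, not Bush's): bin the trajectory, and the normalized visit counts estimate the probability of presence.

// Dwell-time estimate of the probability of presence along x,
// reusing the Walker sketch above. Slow regions pile up samples.
public class DwellTime {
    public static void main(String[] args) {
        int bins = 64;
        double xmin = -20, xmax = 20;
        double[] count = new double[bins];
        Walker w = new Walker();
        int steps = 100_000;
        for (int i = 0; i < steps; i++) {
            w.step();
            int b = (int) ((w.x - xmin) / (xmax - xmin) * bins);
            if (b >= 0 && b < bins) count[b]++;
        }
        double width = (xmax - xmin) / bins;
        for (int b = 0; b < bins; b++)
            System.out.printf("%.3f %.6f%n",
                xmin + (b + 0.5) * width,
                count[b] / (steps * width)); // visit counts as a density
    }
}

Normalizing by the bin width turns visit counts into a density; in a corral geometry this is where a QM-like probability pattern should show up.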

Wednesday, November 20, 2013

General Relativity as extrinsic curvature and other lies my professors told me


This post is about how General Relativity (GR) is explained to the masses, specifically how one should picture curvature. Most popular science accounts use the 'rubber sheet' analogy.


Extrinsic curvature, curvature by embedding 2D in 3D. 
Extrinsic curvature or curvature by embedding

Consider the picture of the Earth rotating around the Sun. This is the classic picture most 'Scientific American' type articles will throw your way to explain what curvature is.  It is the bending of a 2D surface in 3D. If you take a 2D rubber sheet and put some mass on it, it will deform, and a particle will orbit around it.  This is a good picture in the sense that it is based on classic visual 3D intuition and reproduces the correct result for 2D sheets that deform.  But try generalizing it to 3D.

GR as extrinsic curvature by embedding 3D in 4D? 

A finer problem with this image is that it seems to imply that you should abstractly extend this construction from 3D to 4D: the curvature is extrinsic, coming from the bending of 3D in a higher (4D?) space.  You lose the visual guidance; humans simply cannot visualize in 4D (except Hawking). The math can guide you, though. Another point is rather ontological: if you need a 4th dimension to create the curvature by embedding, considering only extrinsic curvature, isn't that proof that a 4th dimension of space exists?

Intrinsic 3D. 
Intrinsic curvature

We just looked at extrinsic curvature, and by definition of 'ex' it needs an extra dimension.  But there is also 'intrinsic' curvature that lives purely in 3D.  Can we use it to construct gravity? The picture shows how flat space, the Cartesian grid, gets deformed by matter. Send a photon by the Earth and it will follow the geodesics (the lines); it will be going 'straight in that curved space'. This does not depend on a hypothetical 4th dimension of space.  Why isn't this picture, that of intrinsic curvature, used more amongst practitioners to explain gravity?

The problem with smooth 3D intrinsic curvature: the Kaluza-Klein example

There is something missing in the intrinsic construction.   GR is modeled by Riemannian geometry, and smooth 3D deformations of space cannot give the proper mathematics: we cannot create the proper GR curvature with the sole assumption of smooth fields in 3D.  In order to obtain a proper unification, Kaluza and Klein in the 1920s had to hypothesize an extra spatial dimension.  There, with smooth fields, they could recreate GR and incorporate EM.

Non-smooth deformations, non-commutative algebra

However, non-smooth deformations of space in 3D do arrive at the proper curvature.   The mathematics of non-smooth deformations (non-holonomic theories) was explored during the 80s in solid state physics.  The non-commutative algebra that results is fancy mathematics, but the main result is that the presence of certain defects leads to a proper Riemann curvature. The proper Riemann curvature arises in 3D if one considers singular deformations of space, as if brought about by defects of a particular shape.
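The simplest example to keep in mind (standard solid-state fare, not specific to those papers): cut a wedge of angle delta out of a flat sheet and glue the lips back together, a disclination. The sheet stays flat everywhere except at the apex, where all the curvature concentrates:

\int_{\text{apex}} K \, dA = \delta ,

so a distribution of such conical defects builds up net Riemann curvature with no smooth deformation and no extra dimension.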

The ontological reality of defects as curvature.


We observe gravity and we model it with curvature (GR). Here are the choices on how to assign an element of reality to this elastic metric of space (Einstein's and MTW's words).  It can either A/ come from embedding in higher dimensions, with curvature the result of smooth deformations, or B/ come from defects in 3 dimensions, an intrinsic deformation with no further appeal to extra dimensions. Einstein was looking for answers within smooth fields (A); the math for non-commutative algebras arose much later, within the standard model and solid state communities. That non-commutative algebra turns out to be relevant to both GR and the standard model.

Where the defects come about is a story for another day.

Feynman on the use of imaginary numbers

For anyone having done physics as a major, the use of imaginary numbers (i^2 = -1) is as natural as breathing air.  In the 'shut up and calculate' sense, imaginary numbers are easy to work with, but only a fool would stop and ask "why are we using imaginary numbers in the first place?". That part can be mysterious. Why would numbers that have no reality (no real number multiplied by itself is negative) find their way into physics? Like most 'magic mysteries' of physics this one is hidden in plain sight. Most folks do not ever question the use of complex numbers in quantum theory.  Ask someone who knows a little and they will huff and puff with 'of courses'; ask someone who knows a lot and some will pause and many will say "I don't know".

Enter Feynman: imaginary exponents
Whenever I want deeper and more natural insights, I turn to Feynman. Feynman has a characteristic treatment of imaginary exponents in his books (Lectures, chapter 22), where he carefully re-derives what he calls "the most remarkable formula in mathematics":

(22.9)   e^(it) = cos(t) + i sin(t)

The gripes
The treatment is thorough and bears the aura of 'naturalness' but really involves two sleights of hand that the untrained eye will miss (I missed them on first read):
a/ the derivation starts with a clumsy differential definition of the exponential, and after 22.6 he just parachutes in the linear development 10^t ≈ 1 + 2.3025 t (for small t); b/ between 22.5 and 22.6 there is a misdirection where he simply writes that the conjugate of e^(it) is e^(-it). Both statements are simply parachuted in.

Just a phase
Just as characteristically, Feynman shines in a few spots.  For example he develops his exposé using 10, not the usual e, as the power base for imaginary exponents. This underscores the arbitrary nature of the base.  We have become so accustomed to using e that we never really question why it is there: why e, why not pi?  He shows that whatever you use as a power base, say P, P^(it) x P^(-it) = P^(0) = 1, so that the norm of a complex exponential is always 1.

So the complex exponentials all live on the complex circle of norm one, no matter what the base. Using P or using e is just a rescaling of the t factor: if e = P^s then P^(it) = (P^s)^(it/s) = e^(it/s), by simple algebra of exponents.  The exponent can be identified with a phase on the unit complex circle, and in base e the phase period is 2 pi, yielding an immediate mapping between the phase and the arc length of a circle of radius 1.
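In compact form, the two facts the chapter establishes:

\left| P^{it} \right| = \sqrt{P^{it}\,P^{-it}} = \sqrt{P^{0}} = 1,
\qquad
P^{it} = \left( P^{s} \right)^{it/s} = e^{it/s} \quad \text{where } e = P^{s},

so every base traces the same unit circle, just with a rescaled phase; e is merely the choice for which the phase equals the arc length.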

Differential equations
The purely algebraic treatment of the properties of imaginary exponentials is followed by the chapter on resonance, which is really the application of these numbers to differential equations. Feynman clearly says one should always take the real part of the equations and that the imaginary part is nonsensical.  He insists on the fact that this is possible only with linear equations, where we never multiply imaginary numbers together and make them real, thereby mixing them (23-1).

The intellectual honesty of Feynman is remarkable; no other textbook or teacher I have ever had goes to such lengths to establish the validity of the basic constructs. In the case of linear differential equations, the solving can be done by inspection, by replacing differentiation with an iω factor. The formula for resonance just pops out. The use of imaginary numbers finds a natural justification as a powerful tool for solving basic differential equations.
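The trick in one line, for the standard driven damped oscillator: substituting x = \hat{x} e^{i\omega t} turns the differential equation into algebra,

m\ddot{x} + \gamma\dot{x} + kx = F e^{i\omega t}
\;\Rightarrow\;
\hat{x} = \frac{F}{k - m\omega^{2} + i\gamma\omega},

and the physical motion is the real part; the resonance peak sits where the denominator is smallest.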

More gripes re: QM
Feynman is rigorous up to that level. In this sense imaginary numbers are a mathematical trick for dealing with linear differential equations.  Yet in Quantum Mechanics (QM), which he himself contributed so much to via Quantum Electrodynamics (QED), one takes the squared modulus of the amplitude, mixing real and imaginary parts, in order to find probabilities; and non-linear equations are the cat's meow in solid state physics anyway, where we have powers of imaginary numbers, again mixing real and imaginary components. As a matter of fact, interference is modeled by this mixing. The use of imaginary numbers reverts to axiomatic status in QM (a hypothesis), but epistemologically, the fact that it describes nature so precisely points to these imaginary numbers being a representation of something 'real'. What that 'something' is (a phase, a twistor, whatever) is never really explained in most textbooks. Practitioners seldom question their practices.

Thermodynamics and grand ensemble statistics
More importantly, and far from textbook land, one of the open frontiers of QM is why the path integral formalism looks so much like that of thermodynamics (averages over large ensembles) but with (i·t) replaced by 1/T, where t is time and T is temperature.  One treatment is dynamic (time), the other static, involving a classical temperature.  I once asked that very question of a Nobel-worthy professor; he smiled and said "just replace it by 1/T and you are done". I pointed out it wasn't physical, that the dimensions didn't even match. When I repeated the same question 6 months later and claimed I could just replace it by 1/T, he said "aaah, but you can't do that, you need to explain why there is an 'i' there, that is the mystery"... it made me smile...
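Written out, the correspondence in question: the quantum evolution operator and the thermal Boltzmann weight are the same object under an imaginary-time substitution,

e^{-iHt/\hbar} \;\longleftrightarrow\; e^{-H/(k_B T)},
\qquad
\frac{it}{\hbar} \;\longleftrightarrow\; \frac{1}{k_B T},

which incidentally resolves the dimensional complaint: it is t/ℏ, not t alone, that maps to an inverse energy.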

This is the reason I pay so much attention to the presence of, and the justifications for, the i in these equations. I have no doubt it should be there; after all, the formalism works.  But if QM is indeed a statistical ensemble (non-deterministic) of things existing at the Planck level, then something brings about the 'i' behavior: things that twist in a funky way, things that have a phase. I remind myself that Planck constructs exist at the 10^-35 m length scale and that the standard model operates at 10^-15 m; there are 20 orders of magnitude in between. In 3D, that is a volume of 10^60 Planck volumes.  In other words, an electron is hardly a point particle but seems to have room for 10^60 things in it.  There are more Planck things in an electron than electrons on Earth; it would be surprising if the electron were not a statistical beast.  The standing of i in our statistical ensembles is then epistemologically important.

Tuesday, November 19, 2013

Wilczek Time Crystals

When Frank Wilczek releases something, the physics community usually pays attention.  His latest paper on time crystals led to a bit of controversy.  I recently attended a talk given at Georgia Tech by Al Shapere, the co-author.

What are time crystals? 
The paper is mathematical in nature.  Shapere considers Lagrangians that are quartic functions of a phase velocity.  Crucially, the term usually associated with kinetic energy (the quadratic one) is negative. This 4th-order potential leads to the swallowtail catastrophe of Thom.  As a catastrophe it can make claims of generality and structural stability.  Furthermore, the swallowtail shape gives several values that minimize the Lagrangian; it is said to be 'multivalued'.  This multi-valuedness means there are several states in the ground state.  The system can oscillate between those states (since they have the same energy), and the ground state is thus dynamic and periodic. The periodicity is a bit of legerdemain in the sense that Shapere waves his hands, saying "if phi is a phase, the linear dependency in time makes it a periodic phenomenon".
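Schematically (my notation, not a quote from the paper), the kind of Lagrangian at stake and its energy are

L = \frac{\lambda}{4}\,\dot\varphi^{4} - \frac{\kappa}{2}\,\dot\varphi^{2},
\qquad
E = \dot\varphi\,\frac{\partial L}{\partial \dot\varphi} - L
  = \frac{3\lambda}{4}\,\dot\varphi^{4} - \frac{\kappa}{2}\,\dot\varphi^{2},

with κ, λ > 0: minimizing E gives \dot\varphi^{2} = \kappa/(3\lambda), not zero, so the lowest-energy state moves.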

The Bruno controversy
Shapere was asked about the controversy around the paper.  Bruno, a French physicist, has raised the objection that the negative quadratic term is unphysical; considering instead a normal (positive) kinetic energy, he claims to prove that time crystals are in fact impossible. Shapere acknowledged the paper but claims he is not looking at kinetic energy in the classic sense, simply Lagrangian terms expressed as the square of some phase velocity.

Cold atoms link
To emphasize the point during the presentation, Shapere established a link to cold atoms (where Georgia Tech is a leading expert).  There, in certain phases, as the system expands, you apparently start seeing potentials like the one considered in the paper. I am not an expert, but it seemed plausible.

The swallowtail catastrophe
The point of the René Thom catastrophes is that they are generic and structurally stable, meaning they will appear in many systems, as surely as the caustics at the bottom of a pool (which are the first and second order catastrophes). The swallowtail is such a catastrophe; it is likely to appear. That these forms would appear in nature seems rather sound, and the link to cold atoms a welcome one.

I believe.

Feynman "six not so easy pieces" notes on SR.

Like most grad students in physics, I wish I had had Richard Feynman as a teacher. For those who don't know the myth, he was a professor at Caltech, a member of the Manhattan Project, and a Nobel prize winner for the development of Quantum Electrodynamics (QED).

Mostly, as an undergrad student, I admired him for his textbooks, the Feynman lectures, a collection of classes he gave at Caltech. It was love at first read. I remember being engrossed with the books sitting on the floor in a bookstore in Paris and just reading through the electromagnetic chapters. I was impressed at how clear and fluent the presentation was and how rigorously mathematical it was. Clearly Feynman was a man who liked to think through things, on his own terms.  Only when he had mastered  a thought would he teach it.  This is part of what makes him so interesting to physics students: the insights he developed. The magic has not changed and 25 years later, I still read Feynman when I want to get to the bottom of some things.

Special Relativity
Ironically, what he struggled with he put in a little book called "Six Not-So-Easy Pieces". That book is mostly about SR. It is clear he struggled with SR; everyone does. Oh sure, we can calculate, the way a high school kid can compute a square root, but what underlies the axiomatics of SR is still a subject of debate (and scorn in many academic circles).  But there he is at his best: struggling, reaching for explanation.

He actually tried to make sense of dilating time in a very physical way, by postulating that time is counted in ticks of a phenomenon that stretches in space as it moves through a real background; the geometric, Pythagorean relation then yields the proper Lorentz factor. He immediately dismisses the insight as a toy model, noting that real clocks do not behave that way.
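The toy model in question is the classic transverse light clock (my rendering, not Feynman's exact text): a photon bounces between two mirrors a distance L apart, so at rest one tick takes t0 = 2L/c. In motion the photon travels the hypotenuse, and Pythagoras delivers the Lorentz factor:

\left(\frac{c\,t}{2}\right)^{2} = L^{2} + \left(\frac{v\,t}{2}\right)^{2}
\;\Rightarrow\;
t = \frac{2L/c}{\sqrt{1 - v^{2}/c^{2}}} = \gamma\, t_{0}.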

He unfortunately oscillates between 'perceives' and 'is', two important concepts which are usually glossed over in most SR treatments.  Some people like to interpret the Lorentz transformation as a 'projection' in our measurement system; this explains why, for two observers, time dilates for each without contradiction.  The explanation of the clock above is an example of an "IS" interpretation: the actual time spent in flight by a photon is proportional to the distance it travels, and that distance stretches by the geometric factor when the 'clock' is in motion.  An element of reality is attached to 'time'. The other interpretation, the 'relativistic' one, is rooted in 'perception' and measurement, not in an actual intrinsic property of 'time'.  Feynman is not afraid of sharing where he is at, his thoughts in progress, for pedagogical purposes, even if he doesn't reach a conclusion on the nature of time.

Time invariant 
He has another deep insight in "Six Not-So-Easy Pieces" about the nature of time.  He mentions that the only invariant in the theory is the number of oscillations of one process counted against another.  If a reference photon process takes 1 phase oscillation while another process takes n, you can define the time of that process as the number n (counting in units of the photon process).  That ratio is an invariant even if the number representing the photon 'time' varies.

One is then led to imagine that time is defined by that light process: a phase oscillation is the unit of time, and proper time is the duration of a process expressed as a number of fundamental oscillations. A lot of the SR properties arise from that definition.

GR, acceleration and time.
The third insight, very clearly presented, is that an acceleration will induce a Doppler effect on the time definition above.  As the source comes toward you, the frequency is increased; as the source leaves, the frequency is lowered. Time, this time 'perceived' by the observer at rest, "seems" to slow down.  The source frequency hasn't changed, just as a siren's pitch doesn't change at the source; only the pitch we perceive does.


Tuesday, June 19, 2012

THE NEW GLASS STEAGALL


This drawing is making the rounds on the intertubes and I find myself going "mmmm" when I look at it.  It stems from an understandable frustration with the banking system but gets both the problem and the solution wrong, imho.

Glass Steagall doesn't say anything about derivatives. 
Glass Steagall didn't really say anything about derivatives, for the very simple reason that they didn't exist back then. Nor are they really the problem.
From what I remember, GS was mainly about 2 things:

 A/ Separation of commercial and investment banking
 The A/ part is pretty straightforward and says commercial banks shouldn't gamble with deposits. Once that gambling is removed, insurance of deposits makes sense. Clinton removed Glass-Steagall under lobbying pressure because large banks wanted to have speculative operations, like traditional investment banks, but also wanted to keep their commercial operations. Note that (for the most part) they were not gambling with deposits but playing with their own capital. IT ISN'T WHAT CREATED THE CRISIS....

B/ Regulation of monetary levels 
The second part is not addressed in the drawing; the authors see it only vaguely through the fog, so let me see if I can help. The GS regulation fixed the ratio of outstanding debt to reserves. This, at heart, is fractional reserve banking and was ultimately a monetary tool that controlled the effective level of debt in the economy by leaving the reserve control with the central banks. The level of debt, and therefore the risk in the economy, was ultimately controlled by the central banks. In practice it didn't work that way: 1/ banks would issue debt first and then go to the markets to get reserves, which they always got; 2/ banks started unloading the debt in CDOs and recycling the cash thus raised into more debt. In other words, the monetary levels, aka debt, aka the level of risk explored, were out of the control of the central banks. The risk level was too high (say 40x debt to equity). AND IT IS THIS PART THAT WAS PROBLEMATIC.
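To see why the ratio matters, a back-of-the-envelope (my numbers, purely illustrative): at 40x debt-to-equity,

\text{equity} = e, \quad \text{assets} \approx e + 40e = 41e
\;\Rightarrow\;
\frac{e}{41e} \approx 2.4\%,

so a 2.4% fall in asset prices wipes out the equity entirely; at a regulated 10x, the same cushion would be about 9%.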

A supercharged Minsky cycle.
The reason is a little thing called the Minsky cycle, which says that, left to its own devices, an economy will always choose to increase debt, because it increases the prices of assets, which increases the return of said assets, which increases the return on equity, which increases demand for the debt, which.... cycle again. At first this IS A VIRTUOUS CYCLE, resulting in a monetary boom, which arguably started under Reagan and "Reaganomics" (which really was a Keynesian program to spend on the military, the national labs etc, with results such as the downfall of communism and the birth of Silicon Valley as we know it). The problem is that this cycle DOES NOT SLOW DOWN BUT ACCELERATES until the moment when the debt-servicing flow gets ahead of the real income from the asset. And the problem is that the dynamics are then UNSTABLE IN REVERSE. There is a crash and the cycle restarts. This moment is known as the 'Fisher moment' and the cycle as the 'Minsky cycle'.
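Here is a toy version of the loop in Java, just to make the 'accelerates, then reverses' point concrete. Every coefficient (the 500, the 3, the haircuts) is invented for illustration; this is a cartoon, not a model of any actual economy.

// Toy Minsky loop: new credit bids up prices, price gains attract
// more credit, until debt service outruns (fixed) income and the
// spiral reverses. All numbers are made up for illustration.
public class MinskyToy {
    public static void main(String[] args) {
        double debt = 100, price = 100, income = 10, rate = 0.05;
        double credit = 10; // new borrowing this period
        for (int t = 0; t < 40; t++) {
            double priceGrowth = credit / 500;  // credit bids up the asset
            price *= 1 + priceGrowth;
            credit *= 1 + 3 * priceGrowth;      // returns attract yet more credit
            debt += credit;
            double service = rate * debt;
            System.out.printf("t=%2d debt=%9.1f price=%8.1f service/income=%5.2f%n",
                    t, debt, price, service / income);
            if (service > income) {             // the 'Fisher moment'
                System.out.println("  -> debt service exceeds income: crash, cycle restarts");
                debt *= 0.5; price *= 0.6; credit = 10;
            }
        }
    }
}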

Abuses within the Minsky cycle, the naked CDS debacle
Whether the Minsky cycle is a positive or a negative is a larger debate (I would say positive, actually, modulo the crashes). What was really objectionable in 2008 was some of the more exotic speculation around synthetic CDS and the like, which multiplied the bad debt when it started appearing in the system.

Dodd Frank addresses some of it.
The reason I go through all this background is that the current legislation in fact starts to deal with these off-balance-sheet instruments. Notice that any legislation that is 'balance sheet' driven (which is the classical way to legislate by numbers) will miss 'off balance sheet' instruments, almost by definition; to a large extent they were designed for this 'regulatory arbitrage'. The current legislation starts re-measuring the monetary flows by creating open and standardized markets for these 'over the counter' (i.e. opaque) instruments. IT HAS NOTHING TO DO WITH GLASS STEAGALL. We may want to call it that for marketing purposes, however. The new legislation does not ban any product.

Most derivatives serve real world needs.
This last bit I will leave for another day, because it takes a lot of background and education before one can really go about judging these products, but let us just say that most derivatives serve real-life purposes and that, by and large, they are beneficial to the economy. It is true, however, that naked CDS were abused for borderline criminal activity. It is also true that Dodd-Frank doesn't really try to legislate those, and it is certainly true that it shouldn't try to legislate the broader derivatives. Those who call for a blanket ban usually have no idea what they are dealing with...

Saturday, June 2, 2012

Learning General Relativity and Differential Geometry.

Since my ski accident two months ago and the operation in the middle, I have been immobile, with another month to go.  In the downtime, and instead of waving my arms, I have decided to learn General Relativity.

The motivation
I have been talking about this 'elastic world' for some time (see the past blog entry) but realized it was time to move from 'intuition' to 'mathematics'. What truly motivated me was that in the first week of downtime I came upon the realization that the anti-symmetric part of the strain tensor of space would give a trivial account of electromagnetism.  This is a fancy way of saying that electromagnetism is the result of the torsion (rotation) of space, and I don't mean at the hand-waving level: I could rebuild Maxwell's equations with a couple of lines of calculation by identifying the electromagnetic potential with the displacement and running some math.   In digging further, this book comes up.  It is highly underground, in the sense that it only references itself and the folks behind it are rather enigmatic, but in there is the full development of this mathematics of 'space as an elastic ball'.  Of course the language is not as pedestrian as what I like to use (swimming pools and elastics), and one will be met by talk of metrics, of covariant derivatives representing an elastic space-time, and of the anti-symmetric part of the connection mapping to the Faraday tensor. As an aside, the book was supposedly written by a kid of 16-18 years old.  I find it hard to believe that such a genius would exist, but I have done enough research on the kid to believe he MAY exist and that, if he is real, he has serious health issues (which he alludes to in the book). I would not be surprised if such a genius suffered mentally and physically.
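For the record, the core of that couple of lines, in my notation: identify the electromagnetic potential with a displacement field u_mu and define the field tensor as the anti-symmetric part of its gradient,

F_{\mu\nu} = \partial_\mu u_\nu - \partial_\nu u_\mu
\quad\Rightarrow\quad
\partial_{[\alpha} F_{\mu\nu]} = 0,

and the homogeneous half of Maxwell's equations (no monopoles, Faraday's law) comes out for free, as the Bianchi identity of an exact 2-form; the sourced half has to come from whatever dynamics one postulates for the medium.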

In a way this formalism is mean, because it obscures the rather silly idea behind it (that the universe is elastic), one I can explain to any 12 year old.  But in another way, it is the only way forward.  So I want to understand that language and what the paper says in detail. In the book you have the mathematical justification of how EM and gravity unify and interact, in a visual and mathematical framework.

The irony
But back to serious science. In theory, this should be a 'relearning' for me since, again in theory, I majored in theoretical physics and had to do an 'end of year thesis' on the Einstein field equations. What a joke that was, in retrospect.  I did not understand a single thing I was doing back then.  In both the math and the physics departments I have learned more in 2 months of self-study than in 2 years of wasting time. I realize I was a virgin, a complete virgin, and that the high-grade education I received was purely a selection mechanism, having little to do with the subject at hand and much to do with my capacity to regurgitate it.

A short criticism of the French educational system
For those who know me, you know I attended the most blue-blood schools in France, the Ecole Polytechnique and the Ecole Normale.  For my American friends: the Polytechnique is a cross between Harvard and MIT, with a lot more social prestige.  And therein lies the rub. The social prestige is such that no one really works while there, or after, really. And if one works, one works to become a civil servant. What a waste of talent and fine teaching.  There is also the fact that most of us slacked off in said schools once we had passed the very selective entrance exams (I know I certainly did), cramming the study into bursts paced by end-of-term exams.  As mentioned, I made more progress in 2 months than in 2 years. I also took a lot more pleasure in learning the topic than I ever did back then.  Back then all I had was frustration, late-night cramming, fear, and an immense sense of inadequacy, even though I ended up with a PhD. It was a waste of my time.  Those topics, at least for me, were not meant to be studied in the hurried and distracted setting of a university but rather in the calm immobility of a broken leg.  I am a lot slower and at the same time a lot faster. It is hard to explain.

3 levels of understanding: visual, symbolic, components
The mathematical apparatus behind General Relativity is formidable. Very simply put, it is downright intimidating (try the link above if you need any proof), but on many levels it is actually rather trivial, once understood. The question is how to get to that understanding. Everything, at least for me, is extremely hard and sometimes takes a full day for a single page or an exercise. I use several books to get various points of view.  The outstanding one is the classic "Misner, Thorne, Wheeler" (MTW).  I love MTW because it is so complete and makes perfectly clear that the 3 levels of understanding of the topic are all necessary.
1/ A pictorial view.  This is where we draw little pictures of manifolds and tangent spaces and Lie derivatives and bla bla bla.  With an emphasis on bla bla bla, because words are silly at this level, as most of the objects of study are rather trivial visually.
2/ An abstract, symbolic view.  This is the "object" layer of the understanding. It talks about vectors and forms and tensors and all the objects that inhabit our space. Sometimes the knowledge is trivially derived in this layer (the Jacobi identity being one example).
3/ A component/index view.  This is what you see in the book above and in most of the literature.  It is the most rigorous treatment one can give, but also the most obscure, as the equations with index components look like 'index salad': sometimes they hide the trivial geometric meaning they are trying to convey, and sometimes they make relationships perfectly TRIVIAL.
Most books shine in one or two layers.  Some touch a little bit of everything without going deep but are essential to my progress by gradually introducing me to the topic. I need all layers, and many books, in order to achieve any measure of understanding.

The art of coding: an approach to learning
In a way, even though I was trained in science and the deductive approach, it is the encounter with computers that prepared me better for the topic.  When I was younger I struggled with learning computer programming: I was not able to get the topic from any one presentation, and I kept wanting to find the ONE BOOK that would unlock it all for me. No one thing did it. In retrospect, it was a combination of approaches, practice, and insights that hit me bit by bit.  At first I was looking for that linear introduction, but I ended up picking bits and pieces in different places until it slowly coalesced into 'knowledge'.  In retrospect there is no singular moment of divine inspiration but rather a long series of little gratifying insights that make all the pieces fall into place. That has been my experience with Differential Geometry as well. Every day I am afraid to go fight the mountain again (yet totally excited at the same time), and every day I come away with a little bit of the mountain, some days just a chip.  But over time, I know it will amount to the whole thing, and that is what gives me hope. And if I fail, I am not afraid to; this will have been one of the hardest endeavors I have undertaken.

Alchemy and the art of patience
The first difficulty, for me, is then a symbolic one.  You are met with an impenetrable wall of symbols.   I need to get to a level of fluency in the mathematical language spoken where I can understand what is being said in links like the one above. For "normal folks" like me, learning GR and DG is already a luxury, and I hope to one day claim I belong to the tiny club of people who truly 'get' GR. For others, the knowledge just appears, by divine inspiration; I am always grateful and in awe of such talent, even if slightly jealous.  In my case, the knowledge doesn't fall from the sky but requires a long and arduous path of self-study in obscure (if classic) books.  This study is probably the most rewarding thing I have done, both in its difficulty and in the pleasure I derive from it.  It teaches patience in a way I never learned in my academic or professional life, where speed is valued above all (speed of learning, speed of execution).  It is also one of the biggest sources of physical pleasure: those moments when I realize I have made progress, the little release of endorphins when a chip comes away and a small insight is achieved, never get old.  Each day feels like an infinite stretch of time where the progress feels infinitesimal, frankly almost null; the mountain is still there and feels almost untouched. Yet over time I realize the sum of it all gives me definite jumps.