Monday, September 21, 2015

Emergence and singular asymptotic expansions, II

When is a phenomenon truly emergent?
Is there an objective quantitative criterion that one might use to decide?
This matters because discussions of emergence are sometimes fuzzy and even flaky.

This post is about an article by Michael Berry that I mentioned in passing in a previous post.

I highly recommend the article as I think it contains a very important insight: singular asymptotic expansions provide a concrete criterion for emergence.

Berry considers the specific problem of how one theory reduces to a less general one as a dimensionless parameter delta tends to zero; his examples include the passage from special relativity to Newtonian mechanics, from quantum to classical mechanics, and from statistical mechanics to thermodynamics.

He then discusses these examples in detail, including the associated asymptotic expansions.

I recommend reading this article before the one by Hans Primas (reviewed in the previous post) as the latter is more technical and philosophical than Berry's.

One thing I think this highlights is that the problem of emergence in quantum systems is neither more nor less challenging or interesting than in classical systems, something I have argued before.

I have one minor addition to Berry. In quantum many-body systems the singular parameter $\delta$ need not just be $1/N$, where $N$ is the number of particles. It can also be the coupling constant $\lambda$. Emergent phenomena are associated with non-perturbative effects. Concrete examples occur in the BCS theory of superconductivity and the Kondo effect. In both there is an emergent energy scale proportional to $\exp(-1/\lambda)$. There is no convergent expansion in powers of $\lambda$: the Taylor series around $\lambda = 0$ is singular.
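To see concretely why such a scale is invisible to perturbation theory, here is a minimal numerical sketch (in Python; the value of the coupling is purely illustrative). At weak coupling, $\exp(-1/\lambda)$ lies below every power $\lambda^n$, reflecting the fact that all the Taylor coefficients of this term at $\lambda = 0$ vanish.

import math

lam = 0.01  # an illustrative weak coupling constant

# Emergent non-perturbative scale, schematically as in BCS or the Kondo effect
# (measured in units of the cutoff / bandwidth)
gap = math.exp(-1.0 / lam)
print(f"exp(-1/lambda) = {gap:.3e}")  # ~ 3.7e-44

# Every finite order of perturbation theory is parametrically larger:
for n in (1, 5, 10, 20):
    print(f"lambda^{n:<2d}        = {lam ** n:.3e}")

# The emergent scale lies below lambda^n for every n shown, consistent with
# all derivatives of exp(-1/lambda) vanishing as lambda -> 0+.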

4 comments:

  1. Dear Ross,
    I'm Siddhartha Lal, a condensed matter theorist at the Dept. of Physical Sciences, IISER-Kolkata (India). I often read your posts and really enjoy them. The latest ones on emergence and singular asymptotic expansions were particularly thought provoking for me.

    I didn't know about the articles by Berry and Primas and enjoyed them thoroughly, but what really struck home was the comment you've made at the end of this post on how the (essential singularity carrying) gap scales of the emergent worlds of the Kondo and BCS problems are examples of emergence arising from non-perturbative effects in a coupling constant.

    I had a couple of thoughts, observations and questions to bring up in this context. Very often, such essential singularity-type gap scales arise from simple mean-field approaches to various instabilities of the Fermi surface, approaches that typically ignore fluctuations by construction. Remarkably, it turns out that the same gap scales can also be generated via perturbative renormalisation group methods that treat divergent fluctuations carefully. Classic examples are poor man's scaling for the Kondo problem, and the Benfatto-Gallavotti-Shankar-Polchinski RG for the BCS instability of a spherical Fermi surface. I find it remarkable (but don't have a concrete answer for) how the two approaches give the same answer even though one ignores fluctuations while the other doesn't.
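    For concreteness, the standard one-loop sketch of the Kondo case: Anderson's poor man's scaling gives $dg/d\ln D = -2g^{2}$ for the dimensionless antiferromagnetic coupling $g = \rho J$, with $D$ the running bandwidth (the factor of 2 depends on conventions). Integrating gives $1/g(D) = 1/g_{0} - 2\ln(D_{0}/D)$, so the coupling diverges at the scale $T_{K} \sim D_{0}\exp[-1/(2\rho J)]$: the purely perturbative flow generates the same essentially singular, non-perturbative scale that the self-consistent mean-field gap equation produces.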

    Another point I've been troubled by is how to reconcile this non-perturbative gap scale with the way in which the Ginzburg-Landau theory is obtained from it. Consider the classic work of Gorkov on the connection between BCS and the Ginzburg-Landau approach to superconductivity: textbooks show that the gap relation is inverted, written in terms of $\ln(T/T_{C})$, and then expanded in powers of $T-T_{C}$. The first term gives the coefficient of the leading term in the G-L theory, whose change of sign signals the breaking of the U(1) phase rotation symmetry that leads to superconductivity.
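    Spelling out that step: near $T_{C}$ one has $\ln(T/T_{C}) = \ln\left(1 + \frac{T-T_{C}}{T_{C}}\right) \approx \frac{T-T_{C}}{T_{C}}$, so the quadratic coefficient of the G-L free energy takes the form $a(T) \propto (T-T_{C})/T_{C}$, changing sign at $T_{C}$ exactly as Landau theory requires.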

    In the context of these singular asymptotic series, such an expansion seems troublesome to me. The closer we get to $T_{C}$, the more important the fluctuations, and the worse off we are with a G-L approach. Yet this is where the Gorkov expansion would be best justified! I'm mystified. I'd deeply value hearing from you on these thoughts. I'd be especially grateful if you could point out whether I'm making some elementary mistake in my line of thought.

    Replies
    1. Dear Siddhartha,

      Thanks for your thoughtful comment.
      I am glad you liked the Berry and Primas articles. I wish I had read them long ago.

      Your observation that the emergent non-perturbative energy scales appear in both mean-field variational approaches and careful perturbative RG ones is interesting. I had not thought of this before. I agree that it is puzzling and wish I understood it. Maybe someone else can provide some insight.

      I think I may have an answer to your second question/puzzle. If one starts with the BCS mean-field equations, one can follow Gorkov and derive the Ginzburg-Landau mean-field theory of superconductivity, provided one is close to Tc. However, we also know from the RG theory of phase transitions that as one approaches Tc, the mean-field GL theory becomes unreliable. Isn't this inconsistent?

      I would look at it this way. Using the Ginzburg criterion one can estimate how close to Tc one must go for the fluctuations to become large and mean-field theory to become unreliable. For conventional superconductors this is a very narrow temperature range (milliKelvin?), and so provided one does not go this close to Tc, both GL and BCS mean-field theories are reliable, and it is quite meaningful to do what Gorkov did.
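      For a rough, back-of-the-envelope estimate: in a clean three-dimensional superconductor the Ginzburg-Levanyuk number is of order $(T_c/E_F)^4$. With $T_c \sim 10$ K and $E_F \sim 10^4-10^5$ K this gives a fractional temperature window of order $10^{-12}$ or smaller, i.e. even narrower than millikelvin and far below experimental resolution, which is why conventional superconductors look mean-field-like essentially all the way to Tc.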

      I hope this helps.

  2. Dear Ross,
    Siddhartha here once again. Thanks for getting back. I had a couple more observations to make on both points.

    On the first point. I have, unfortunately, never come across any work that clarifies how both mean-field theory and RG can lead to the same result for the gap equation. However, one thing is clear: both approaches rely on a divergence of the coupling under consideration in order to get the gap equation. This is usually attributed to:

    a. an instability of the zeroth (i.e., non-interacting) theory

    b. the opaque nature of the self-consistency in mean-field theory; it doesn't provide much physical insight into how it converged on a result, and

    c. the perturbative nature of the RG procedure.

    I believe, however, that the restoration of sanity should come from an RG calculation that stops at a stable fixed point where the coupling remains at a finite value. Such an RG calculation would also, one hopes, reveal how mean-field theory got the same answer without manifestly treating fluctuations. Sadly, I've never seen a published account of any such RG calculation for, e.g., the BCS problem.

    On the second point. I agree that the Ginzburg criterion allows the G-L theory to come really close to $T_{C}$. This would surely allow comparisons of this mean-field approach with experimental measurements of various transport and thermodynamic quantities within the Ginzburg temperature regime. However, from the viewpoint of critical phenomena I remain concerned. Surely, even in 3+1D, we are at the upper critical dimension. This would suggest that the corrections to the mean-field critical exponents are likely to be quite large once we take the fluctuations into account as we get closer to $T_{C}$.

    The essential singularity in the gap equation suggests that we will have to take fluctuations into account to all orders to get the real answer (and not just quadratic fluctuations, a la RPA). This leads back to the fact that we probably have to do an RG calculation to get the true picture.

    I'm beginning to converge on the possibility that the amazing success of the G-L mean-field theory arises from the fact that the basin of attraction of the eventual stable RG fixed point must extend really close to the critical fixed point. This would probably justify what lies at the heart of the mean-field method, i.e., replacing the effect of divergent fluctuations by an expectation value in order to obtain the gap equation through self-consistency.

  3. Hi Ross, allow me to chip in with some comments. I would like to apologize in advance for my usual verbiage.

    Although I appreciate Berry's attempt at being comprehensive, I do think that it is a bit naive and throws a lot of different things into the same bucket.

    Regarding the transition from special relativity to classical mechanics, it is certainly true that the passage from the Lorentz group to the Galilean group is a discontinuous transition. Moreover, the Galilean group cannot even be implemented on a regular four-dimensional manifold; the correct spacetime description is in terms of fibre bundles (Geroch's Relativity from A to B discusses this without all the math). This does give rise to interesting consequences, such as the fact that rigid bodies cannot be defined in relativity, only in classical mechanics (somewhat like the way Babinet's principle prohibits the definition of shadows in wave optics). Nevertheless, the transition is smooth in the sense that all observables change values smoothly under the limit. A useful analogy: the group SE(2) of translations and rotations in the plane is a non-smooth deformation of SO(3), but the representations of SE(2) are smooth deformations of the SO(3) ones.
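    To spell out that analogy with the standard sketch (the Inonu-Wigner contraction): start from the $so(3)$ commutators $[J_x, J_y] = iJ_z$ (and cyclic permutations), define $P_x = \varepsilon J_x$ and $P_y = \varepsilon J_y$, and take $\varepsilon \to 0$. Then $[P_x, P_y] = i\varepsilon^{2} J_z \to 0$, while $[J_z, P_x] = iP_y$ and $[J_z, P_y] = -iP_x$ survive, which is precisely the SE(2) algebra of two commuting translations and one rotation. The contraction of the algebra is singular at $\varepsilon = 0$, yet the representations can be followed smoothly through the limit.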

    Regarding quantum mechanics, I just disagree. There are loads of examples of different quantum mechanical systems with the same classical limit (my favorite is the Hamiltonian $H = q\cosh p$, which classically is just the harmonic oscillator but quantum mechanically is not even bounded from below), and of classical systems that do not admit a Hamiltonian and therefore do not have a corresponding naive quantum system (like a general dissipative system). Therefore the quantum-to-classical transition cannot simply be Planck's constant going to zero. At least I guess this is the moral of decoherence and such things.

    Regarding thermodynamics, that's where one should focus. From textbook lessons we learn that temperature is really an emergent concept which, as you said in the previous post, is essential to even define equilibrium. On the other hand, things like the Seebeck effect furnish us with thermodynamic equilibrium in a temperature gradient, showing that temperature is a more resilient concept than it appears. Stellar astrophysics gives us vanilla statistical mechanics in which the specific heat is often negative. I think this is because the $1/N$ limit is much more subtle, but also because taking the limit alone is not enough to get Callen-style thermodynamics: it is also important to have walls (especially adiabatic ones) to define your system. Otherwise statistical mechanics and thermodynamics may still apply, but in a somewhat quirky way.
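    A standard worked example of that negative specific heat: for a virialised self-gravitating system the virial theorem gives $2K + U = 0$, so the total energy is $E = K + U = -K$. With $K = \frac{3}{2}Nk_{B}T$ this means $E = -\frac{3}{2}Nk_{B}T$, and hence $C = dE/dT = -\frac{3}{2}Nk_{B} < 0$: the system gets hotter as it radiates energy away.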

    Summarising, I think Berry is a bit hasty in the discussion. Certainly the non-relativistic limit gives us new structures, but it is otherwise fairly well behaved. The quantum-to-classical transition I'm not even sure how to describe in terms of limits. But thermodynamics seems a bit special, in that the emergent concepts survive even in situations where we don't expect them to, so it seems by far the most interesting and clear case of whatever emergence is.

    Sorry again for the big text,
    Best,
    Cesar Ulaina

