Thursday, April 30, 2020

The fascinating story of the Ising model

Great progress in our understanding of condensed matter physics has been made by the proposal and investigation of concrete models that are simple enough to be amenable to mathematical analysis or to simulation on a computer, but complicated enough to capture the essential features of phenomena in a real material. This approach involves defining the energy of the whole system as a function of the different possible microscopic configurations of the system. Using a well-established theory called statistical mechanics it is possible, at least in principle, to connect the macroscopic properties of the system, such as how the heat capacity varies with temperature, to the possible microscopic configurations of the system.

The Ising model is a paradigm for this modeling approach. Every year thousands of journal articles are published that involve Ising models. They are applied to a wide range of topics in physics, chemistry, neuroscience, biology, sociology, and economics. In a magnetic system each magnetic atom is represented in the model by a single square box which can have two possible states, here represented as black or white, corresponding to two possible directions, such as ``up'' or ``down'', for the magnetic moment of the atom. These two directions can also be viewed as representing whether the magnetism of the atom is parallel or anti-parallel to the direction of an external magnetic field. In the model there is an energy gain when two adjoining boxes are in the same state, i.e. they are both black or both white. This energy gain captures a tendency towards ferromagnetism, i.e. a state where the directions of all the atomic magnets align with each other. In the absence of an external magnetic field, there is equal probability of a box being black or white. The random jiggling associated with the temperature of the system means the states of individual boxes are constantly changing. Figure 6.4 shows examples of likely states of the system for three different temperatures. Note that at very low temperatures, there is little jiggling, and the most likely state of the system is one where it is all black or all white. This is an example of the spontaneous symmetry breaking discussed in chapter 3, since in the model there is no preference for black or white.
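The jiggling described above can be sketched with a standard Metropolis Monte Carlo simulation. This is the usual textbook algorithm and illustrative parameter values, not necessarily the code behind the figures referred to in this post:

```python
import math
import random

def metropolis_sweep(spins, L, J, T):
    """One Monte Carlo sweep: attempt L*L random single-spin flips."""
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        # Sum over the four nearest neighbours (periodic boundaries).
        nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j] +
              spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2 * J * spins[i][j] * nn  # energy cost of flipping spin (i, j)
        if dE <= 0 or random.random() < math.exp(-dE / T):
            spins[i][j] *= -1  # accept the flip

def magnetisation(spins):
    """Average spin: near +-1 in the ordered phase, near 0 when disordered."""
    return sum(map(sum, spins)) / len(spins) ** 2

random.seed(0)
L = 16
for T in (1.0, 4.0):  # well below and well above Tc ~ 2.27 J
    spins = [[1] * L for _ in range(L)]  # start in the all-'black' state
    for _ in range(200):
        metropolis_sweep(spins, L, J=1.0, T=T)
    print(f"T = {T}: |m| = {abs(magnetisation(spins)):.2f}")
```

At T = 1.0 there is little jiggling and the magnetisation stays close to 1; at T = 4.0 the jiggling destroys the order and the magnetisation falls close to zero, just as in the snapshots described below.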

The Ising model was originally proposed in 1920 by Wilhelm Lenz (1888-1957), a Professor at Hamburg University in Germany. He suggested investigation of the model for the doctoral research of a student, Ernst Ising (1900-1998). In his thesis Ising was able to solve the one-dimensional version of the model exactly, i.e. calculate all the properties of the model without making any approximations in the mathematical analysis. He found that even at very low temperatures the model never undergoes a phase transition to an ordered ferromagnetic state. He also gave a rough argument that this would also be true in two and three dimensions. This led to doubts as to whether the model could describe the phase transition that occurs in ferromagnets such as iron, where below a critical temperature there is a transition to a state with long-range magnetic order. However, Lars Onsager (1903-1976) performed a mathematical tour de force and in 1944 published an exact solution to the problem for a two-dimensional square lattice. The model did have a critical point at a non-zero temperature and Onsager calculated the critical exponents for the model. They were different from those of Landau's theory (Chapter 3), raising questions about the validity of Landau's theory.
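Onsager's exact solution pins down the square-lattice critical temperature through the condition sinh(2J/kB Tc) = 1, which gives kB Tc = 2J / ln(1 + sqrt(2)), approximately 2.269 J. A quick numerical check:

```python
import math

# Onsager's criticality condition for the square-lattice Ising model:
# sinh(2J / (kB*Tc)) = 1  =>  kB*Tc = 2J / ln(1 + sqrt(2))
J = 1.0  # exchange energy, in units where kB = 1
Tc = 2 * J / math.log(1 + math.sqrt(2))
print(f"Tc = {Tc:.4f} J")        # about 2.269
print(math.sinh(2 * J / Tc))     # recovers 1.0, the criticality condition
```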

The academic careers of the participants in this story are interesting. Lenz became an influential leader in theoretical physics in Germany. One of his other doctoral students, J. Hans D. Jensen (1907-1973), shared the Nobel Prize in Physics in 1963 for his work in theoretical nuclear physics. Although Ising is a household name in physics he did not go on to a distinguished academic career. After his doctorate, he worked in Germany as a high school teacher but lost his job because he was Jewish. He fled to Luxembourg and worked as a shepherd and a railway worker, before emigrating to the USA in 1947. He then worked until retirement teaching physics at a small university and did not resume research. He lived in Peoria, Illinois, which incidentally has become a metaphor for mainstream U.S.A., embodied in the question, ``Will it play in Peoria?''

Onsager was a Professor at Yale University and was awarded the Nobel Prize in Chemistry in 1968 for work on irreversible thermal processes. Yet, it is doubtful that Onsager would have survived in today's ``publish or perish'' academic culture. He only published one or two papers each year, and some were only a few pages long. He was also slow to publish. Often he would announce his new results at a conference, others would reference them in their own papers, and several years later Onsager would finally publish them. For example, he announced his solution of the Ising model two years before it was published. But many of his papers were seminal. Onsager was also known as being difficult to understand, even by brilliant colleagues and graduate students. Before Onsager was hired by Yale, he was fired by both Brown University and Johns Hopkins University because his teaching was so poor.

Onsager's solution of the Ising model provided a concrete example of an important idea: short-range interactions can lead to long-range order. Prior to Onsager's solution, not everyone was convinced that this was true. In the Ising model each square (magnetic atom) only interacts directly with its nearest neighbours, i.e. the interactions are short-range. Yet in spite of all the jiggling, the whole system can form a state in which the state of one atom is correlated with that of another atom infinitely far away, i.e. the system has long-range order. In different words, the system has rigidity, just like how pushing an atom on one side of a solid object will force even the atoms on the other side of the object to move.

Computer simulation of a two-dimensional Ising model as it passes through a critical point. The system shown here consists of a grid of 124 x 124 small boxes. Each box can be black or white. The probability of a box being black or white depends on the temperature and on whether its nearest neighbours are black or white. The left, centre, and right panels show a snapshot of a likely configuration of the system at a temperature less than, equal to, and greater than the critical temperature, Tc, respectively. Below the critical temperature (left panel) one sees the formation of large magnetic domains. There is more white than black, representing spontaneous symmetry breaking. At the critical temperature (centre panel) there are equal numbers of black and white boxes, but there are paths through the whole system that pass through purely white or black, and so the range of correlations between squares becomes very large. Above the critical temperature, one cannot construct such paths, and the range of correlations is very short.

The figure is taken from this neuroscience paper.

Tuesday, April 28, 2020

Sir John Houghton (1931-2020): climate scientist

I was sorry to hear that Sir John Houghton died on April 15, from complications associated with coronavirus. There is a nice obituary in The New York Times.
He was appointed to an array of distinguished and influential positions, including Professor of Physics at Oxford, Director of the Rutherford Appleton Lab, Director of the UK Meteorological Office, and most significantly, lead editor of the first three reports of the United Nations Intergovernmental Panel on Climate Change (IPCC).
I highly recommend his autobiography, In the Eye of the Storm, which I blogged about a few years ago, highlighting his integrity and influence.

Friday, April 24, 2020

Mean-field theories for COVID-19: helpful or misleading?

So what does this have to do with COVID-19?
Mean-field theory is actually in the news a lot right now and influencing discussions at the highest level of government. It is used in many of the mathematical models in epidemiology involved in discussions about ``flattening the curve.'' An example is the paper from the Doherty Institute at the University of Melbourne that had a significant influence on Australian government policy.
In an article in The Guardian Australia, Kathryn Snow has given a nice discussion for a general audience of the role and limitations of modeling, and there is a New York Times article comparing five different models for the spread of COVID-19.

Many of the models being used are generalisations of the simplest SIR model.
A basic introduction is found in a course at ETH-Zurich, from which the figures below are taken.
My UQ colleague, Zoltan Neufeld, also gave a clear and helpful introduction in a seminar he recently gave for the School of Mathematics and Physics at UQ. [Video is here].
The population is divided into three groups: susceptible (S), infected (I)  and recovered/dead (R).

This leads to a set of three coupled non-linear differential equations,

dS/dt = - (beta/N) S I,    dI/dt = (beta/N) S I - gamma I,    dR/dt = gamma I,

where beta is the transmission rate, gamma is the recovery rate, and N = S + I + R is the total population.

The model provides many qualitative insights.
A key parameter is R0, the basic reproduction number, which for this model is R0 = beta/gamma, the average number of people infected by a single infected person in a fully susceptible population.

If R0 is larger than one there will be an epidemic, initially the number of people infected will grow exponentially, and eventually a finite fraction of the population will be infected. If R0 is less than one the number of infected people decreases exponentially.

The solution to these equations for different parameter values gives a feel for how quantities such as the fraction of the population that get infected and the duration of the epidemic depend on parameters such as R0. That is what underlies discussions about ``flattening the curve''.
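A minimal numerical sketch of this, assuming the standard SIR equations and illustrative (not COVID-19-fitted) parameter values:

```python
def sir(beta, gamma, days, n, i0=10.0, dt=0.01):
    """Integrate the standard SIR equations with a simple Euler scheme."""
    s, i, r = n - i0, i0, 0.0
    peak = i
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / n * dt   # dS/dt = -beta*S*I/N
        new_rec = gamma * i * dt          # dR/dt = gamma*I
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return s, i, r, peak

N = 1_000_000
gamma = 0.1  # recovery rate: infectious for ~10 days on average (illustrative)
for R0 in (2.5, 1.5, 0.9):
    s, i, r, peak = sir(beta=R0 * gamma, gamma=gamma, days=365, n=N)
    print(f"R0={R0}: peak infected {peak:.0f}, "
          f"fraction ever infected {(N - s) / N:.2f}")
```

Lowering R0 both flattens the peak and shrinks the fraction of the population ever infected; for R0 < 1 the outbreak fizzles out, which is the content of the threshold statement above.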

The Doherty Institute paper is a generalisation of this model where the population is divided into 15 different groups (such as quarantined and non-quarantined individuals) and there are fifteen parameters and fifteen first-order differential equations.

But what are the assumptions in the SIR model?
Foremost, it is a mean-field theory.
It assumes that in the population there is homogeneous mixing. You could think of the system as being like a gas or liquid, composed of a uniform mixture of particles of three different types: infected, susceptible, and recovered. They collide at random with one another, just like molecules in a fluid, and there is a fixed probability (collision cross-section) for a particle to change species after a collision.

Consider different ways this mixing assumption can break down in the real world.

1. The parameters in the model may be different for different people and for different communities. 
I find it misleading that people say, whether in science papers or in newspaper articles, that the R0 for COVID-19 is 2.6 plus or minus 0.2. Surely, the value is context dependent and model dependent. Doesn't it depend on the number and type of contacts that people have? It should be different for the western suburbs of Brisbane, a slum in Bangkok, Beijing, and the Bronx in New York City.
If the model parameters are stochastic, i.e. drawn from a probability distribution, are there qualitative changes in behaviour?

2. The real world has spatial structure and discrete structure. The discrete structure is taken into account in agent-based models such as those in NetLogo, which includes a nice simulation, Virus.

I downloaded NetLogo last year and used it for a talk on emergence and international relations. It is very easy to use and comes with lots of example programs. By varying parameters you can observe a wide range of phenomena.

3. Network effects

NetLogo has a nice program, Virus on a Network.

Network models can include the role of super-spreaders: a few highly connected individuals who spread the virus.

There is a nice review
Epidemic processes in complex networks 
Romualdo Pastor-Satorras, Claudio Castellano, Piet Van Mieghem, Alessandro Vespignani

Of particular interest (and concern) are scale-free networks, for which there is no epidemic threshold. In different words, regardless of the value of R0, an epidemic will always occur, and spread extremely rapidly.
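This threshold result can be illustrated with a small calculation. In the degree-based mean-field treatment reviewed in the paper above, the epidemic threshold for an SIS epidemic on an uncorrelated network is the ratio of the first to the second moment of the degree distribution; for a power-law distribution P(k) ~ k^(-2.5) the second moment grows with the largest degree, so the threshold drifts towards zero. The function and parameter values below are illustrative, not from the review:

```python
def epidemic_threshold(exponent, k_max, k_min=3):
    """Threshold <k>/<k^2> for a power-law degree distribution P(k) ~ k^-exponent,
    truncated to k_min <= k <= k_max (degree-based mean-field result)."""
    ks = range(k_min, k_max + 1)
    w = [k ** -exponent for k in ks]                   # unnormalised P(k)
    z = sum(w)
    k1 = sum(k * wk for k, wk in zip(ks, w)) / z       # <k>
    k2 = sum(k * k * wk for k, wk in zip(ks, w)) / z   # <k^2>
    return k1 / k2

for k_max in (10, 100, 1000, 10000):
    print(k_max, round(epidemic_threshold(2.5, k_max), 4))
# The threshold keeps shrinking as the largest degree (the biggest
# super-spreader) grows; in the infinite-network limit it vanishes.
```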

Aside. The last author of the review is leading a group working on COVID-19 at Northeastern University, and featured in a New York Times article last week.

Wednesday, April 22, 2020

Mean-field theories: helpful or misleading? From Hubbard to COVID-19 models

Mean-field theory (self-consistent field theory) is incredibly valuable. It gives significant insights into what is possible with a particular model.
What kind of phases and broken symmetries may be possible?
How does the phase diagram depend on different parameters in a model?
Indeed, mean-field theory is the basis of the whole Landau paradigm for spontaneous symmetry breaking and phase transitions.
Implementations of Density Functional Theory (DFT) in computational materials science are basically mean-field theories. Most of computational quantum chemistry involves some sort of mean-field theory.

Mean-field theories do not take into account fluctuations, dynamic or spatial.
Basically, a many-body problem is reduced to a one-body problem.

A good mean-field theory can win you a Nobel Prize. That's what Anderson, BCS, Ginzburg, Abrikosov, and Leggett all did!
Can you think of others?

However, mean-field theory does have its limitations.
It is usually quantitatively wrong. It often gives unreliable values for transition temperatures. In spatial dimensions less than four, mean-field theory gives the wrong values for the critical exponents near a phase transition.

An even bigger problem is that mean-field can be qualitatively wrong.
For many models (e.g. the Ising model or Heisenberg model) mean-field theory always gives a transition from a disordered to an ordered phase at a non-zero temperature.
However, the Ising model has no phase transition in one dimension. For a Heisenberg ferromagnet or antiferromagnet, there is no transition at finite temperature in two dimensions.
The Mermin-Wagner theorem states that in two dimensions systems with a continuous symmetry, such as a superconductor or superfluid, never have long-range order at finite temperature. Instead, there is a Kosterlitz-Thouless transition to a distinct state of matter with power-law correlations.
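To make the Ising example concrete: the simplest mean-field treatment replaces each neighbour by the average magnetisation m, giving the self-consistency condition m = tanh(z J m / kB T) for coordination number z, and hence a transition at kB Tc = z J in any dimension. A minimal sketch:

```python
import math

def mean_field_m(T, z=2, J=1.0, iterations=2000):
    """Solve the mean-field self-consistency m = tanh(z*J*m/T) by iteration."""
    m = 0.9  # start from an ordered guess
    for _ in range(iterations):
        m = math.tanh(z * J * m / T)
    return m

# For the 1D chain (z = 2), mean-field theory predicts ordering below Tc = 2J,
# but Ising's exact solution shows there is no transition at any T > 0.
print(mean_field_m(T=1.0))  # nonzero: the spurious mean-field ordered state
print(mean_field_m(T=3.0))  # essentially zero: disordered above mean-field Tc
```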

Mean-field theory can also fail to predict the existence of states of matter. For example, for Hubbard models, mean-field theory can produce several states: a Fermi liquid metal, a ferromagnetic metal, an antiferromagnetic metal, and a spin-density-wave insulator. But it is quite possible the model also can have non-magnetic Mott insulating phases, superconductivity, non-Fermi liquid metals, and pseudogap states.

In the next post, I will discuss some issues that arise in mean-field theories used in modeling the COVID-19 epidemic.

Tuesday, April 14, 2020

Phil Anderson (1923-2020): theoretical physicist extraordinaire

Phil Anderson died two weeks ago. There have been many obituaries, including at The New York Times, Not Even Wrong (Peter Woit), and Nanoscale Views (Doug Natelson). Few would dispute that he was the greatest condensed matter theorist of the second half of the twentieth century. I would go further and suggest that he and Ken Wilson were the greatest theoretical physicists of the second half of the twentieth century. Anderson's scientific legacy extends far beyond condensed matter physics.

More than sixty posts on this blog include ``P.W. Anderson'' in the label. There is no doubt that Anderson is the largest intellectual influence on this blog.

Phil Anderson made incredibly diverse and valuable contributions to condensed matter physics (anti-ferromagnetism, localisation, weak localisation, magnetic impurities in metals, the Kondo problem, poor man's scaling, superfluid 3He, spin liquids, RVB theory of superconductivity... ).

It is noteworthy that Anderson applied scaling to condensed matter before Wilson. In the late 1960s he wrote a series of papers on ``poor man's scaling" for the Kondo problem.

I can think of several significant and profound influences of Phil beyond condensed matter physics.

1. Codifying and elucidating the concept of emergence (and the limitations of reductionism) in all of science, in More is Different in 1972.
[Although it should be acknowledged that the word ``emergence'' does not appear in the article and that Michael Polanyi developed similar ideas about emergence earlier.]

2. Nambu referenced several papers by Anderson about superconductivity in his seminal papers on the mass of elementary particles and symmetry breaking.

3. Laying the groundwork for the Higgs boson in 1963 by connecting spontaneous gauge symmetry breaking and mass. 

4. Elucidating spin glasses in a way that was key to John Hopfield's development of a particular neural network and to the notion of a "rugged landscape", relevant in protein folding and evolution. Anderson described these connections nicely in two pages in Physics Today in 1990.

Phil had a significant influence on my own job/career trajectory. For my Princeton Ph.D. I worked with Jim Sauls on superfluid 3He, which Phil supported financially. He was on the committee for my Ph.D. thesis defense in 1988. In 1993, towards the end of a postdoc, my job prospects were extremely slim. Phil told me that he had been asked to review an application I made for a five-year research fellowship back in Australia. My success was probably based on a positive review from Phil. I regret that during my time as a graduate student I did not have the confidence to interact much with him. However, from about 1995 to 2002, I made a visit to Princeton practically every year and had some nice discussions with him. It was also fascinating to see the close personal and scientific relationship that Phil and N.P. Ong had; it was clearly mutually very beneficial.
One cryptic comment: ``look at the metal-insulator-metal tunneling theory from the 1960s" [I found Mahan has a nice discussion] set me on the right path to do the calculations in this paper, about angle-dependent-magnetoresistance oscillations in layered metals.

I highly recommend the Anderson anthologies (reprint collections), listed below in order of increasing technical difficulty.

More and Different: notes from a thoughtful curmudgeon.
It is a collection of essays on wide-ranging subjects: personal reminiscences, history, philosophy, sociology, science wars, ...
Some of these have been published before but many have not.

A Career in Theoretical Physics
Something amazing about this collection of papers is what is not in it; e.g. his papers on superfluid 3He with Brinkman, or on charge ordering and antiferromagnetism in ferrites.

Basic Notions of Condensed Matter Physics

Andrew Zangwill is working on a scientific biography of Phil Anderson. I am looking forward to reading it.

Monday, April 6, 2020

Emergence and the pandemic

I love this video. I also found very helpful an article in The Economist, Anatomy of a killer, that gives a basic introduction to the biology. [Unfortunately, it is behind a pay-wall. I subscribe to the hard copy, which I highly recommend.] The New York Times also has a helpful tutorial, How coronavirus hijacks your cells.

So what do a new virus, an epidemic, social distancing, and panic buying have in common?
They are all examples of emergent phenomena as they all have three particular properties.

First, each phenomenon involves a system with many interacting components.

Second, the system possesses a property, an ability to exhibit a specific phenomenon, that the individual components of the system do not have.

Third, the phenomenon is hard to predict, even with a knowledge of the details of the system components and of the interactions between the different components.

Consider the four examples I gave.

An epidemic arises when a few people are infected with a virus who in turn infect others who infect more people until a significant fraction of the whole population is infected. With a single human or even a few, the concept of an epidemic does not make sense.
Even though we do know a lot about epidemiology, it is very hard to predict the scale of an epidemic and to decide on the most effective measures to ``flatten the curve''.
Associated with epidemics there are emergent concepts such as tipping points (R0 larger than 1), super spreaders, and herd immunity [Scott Page gives a nice 9 minute lecture on this].

The concept of panic buying does not make sense if there is only a single customer. The phenomenon arises not just from the actions of one shopper or even a group of shoppers. Individual shoppers in a store don’t just interact with each other in a single shop but also interact with their social and informational networks.  Who would have predicted that we would see such silly things as panic buying of toilet paper?

Many of us had not heard of social distancing until this year. At first, you might think that social distancing is just something that arises from a government regulation, i.e., it is ``top-down'' rather than ``bottom-up''. However, even if the government ordains social distancing, that does not mean it is practiced; certain cultures and demographics will not follow government edicts. Rather, society self-organises to produce social distancing, through the interactions of all the individuals in the society with each other, with scientific advisors, and with the government. Some people practice it, voluntarily or in response to a government edict; others see them doing it and then follow. If you go to the park and see people talking only in pairs and standing more than two metres apart, you are more likely to practice it yourself.

 SARS-CoV-2 is a new virus. As far as we are aware it did not exist previously in humans. New viruses emerge through evolution. [There is a nice video from Stated Clearly ] 
Coronaviruses are common in animals and can gradually mutate in interaction with their environment. At some point this virus crossed the species barrier to humans; it is still adapting to its environment. There are many components of the system; the individual viruses don't interact with each other but with their environments.

A single virus also has emergent properties (its ability to infect specific cells, reproduce itself, and survive inside a water droplet). Central to a single SARS-CoV-2 virus particle is an RNA molecule with about 30,000 bases. All of those together provide the genetic information that is used to reproduce. Having the individual bases, or a subset of them, or the RNA without the six proteins and the membrane, is not enough to make a functioning virus. Knowing all the genetic information is useful but cannot necessarily be used to predict the structure of the virus, its function, or how to develop a vaccine.

Why does an emergent perspective matter? 
From a purely scientific point of view, there are many interesting and fascinating phenomena that would be nice to understand, from the biochemistry of a single virus to the spread of the virus by international travel. If we understand these phenomena better, particularly at all the different scales discussed below, then we have a better chance of taking effective action to stop the spread of the virus, whether it is as individuals washing their hands, practicing social distancing, government policies, or development of vaccines.

There are many scales to the pandemic problem: length scales, time scales, and number scales.
The distance scales cover a range of about 14 orders of magnitude.
A single virus particle is about 100 nanometers in diameter (10^-7 m). A cell in the human respiratory tract is about 100 times larger.
Then we can keep on going up to 10,000 km (10^7 m), the distance that some people flew to carry the virus around the world.
The range of timescales is from microseconds (?) for a single virus particle to attach itself to a human cell, up to several hours to produce thousands of copies of the virus inside that cell, to the weeks for infected individuals to develop symptoms, to the time for new government policies to take effect.
And, the time scales many are particularly interested in: how long will we be in self-isolation? How long until the economy ``recovers''?
The numbers range from the number of copies a single virus makes in a single cell, to the number of people infected by a single human, to the millions of people infected worldwide, to the trillions of cells in a human body.

There is a stratification of reality, or hierarchy, associated with these different scales of length, time, and number:
RNA, Virus, Cell, immune system, organs, individual human, individual’s (physical) social contacts, city, country.

An emergent perspective is helpful in at least three ways.

First, it highlights the limitations of reductionism. Even if we know the details of the individual components of a system and how they interact with each other that does not necessarily mean we have an understanding of the properties of the whole system. For example, we already know the nucleic acid sequence of the RNA for SARS-CoV-2, the associated genes, and the physical structure of a single virus particle, including its six proteins.
This is helpful and wonderful. However, it does not mean we really understand the virus, including how to develop a vaccine. Knowing the genome does not enable the prediction of the structure of the virus. This is similar to the problem of predicting a protein structure from a knowledge of its amino acid sequence.
In biology there is a helpful and common paradigm: structure determines property which determines function. This is why there is so much emphasis on protein structure determination.
But knowing the structure does not always enable us to predict the property and particularly the function. Function is an emergent property.

Second, an emergent perspective highlights the tension between universality and particularity.
For example, SARS-CoV-2 is one of hundreds of viruses in the coronavirus family. They have quite similar structures and properties. But this coronavirus is very particular in a devastating way. Small changes in the genetic code or proteins could make it even more dangerous, or impotent.

Thirdly, an emergent perspective highlights the significance of the stratification of reality. At each stratum there are unique entities, phenomena, concepts, techniques, and theories. This is the origin of different scientific disciplines.
Observing phenomena at one stratum does not reveal what is going on at a lower stratum.
This is what Laughlin and Pines call the ``protectorate''. 
This applies whether considering a single RNA molecule, a virus particle, a respiratory cell, or groups of shoppers.
Panic buying is unique to the stratum of groups of consumers. The underlying causes from the psychology of individuals and groups are hidden.
Studies of consumer behaviour provide no insights for immunology and vice versa.

A pandemic and its aftermath is a wicked problem: it is complex and difficult to solve.
There is some intellectual beauty in the multi-disciplinarity of the problem; it involves biochemistry, cell biology, immunology, medicine, public health, sociology, psychology, politics, economics, and mathematical modeling. It does not even end there, as the humanities come into play. Responses to the crisis, from individuals to governments, involve fundamental philosophical and theological questions about ethics, values, meaning and purpose, suffering and death.