Condensed concepts
Ruminations on emergent phenomena in condensed phases of matter
Friday, November 15, 2024
Emergence and protein folding
Wednesday, October 30, 2024
A very effective Hamiltonian in nuclear physics
Atomic nuclei are complex quantum many-body systems. Effective theories have helped provide a better understanding of them. The best-known are the shell model, the (Aage) Bohr-Mottelson theory of non-spherical nuclei, and the liquid drop model. Here I introduce the Interacting Boson Model (IBM), which provides something of a microscopic basis for the Bohr-Mottelson theory. Other effective theories in nuclear physics are chiral perturbation theory, Weinberg's theory of nucleon-pion interactions, and Wigner's random matrix theory.
The shell model has similarities to microscopic models in atomic physics. A major achievement is that it explains the origin of the magic numbers: nuclei with proton or neutron numbers 2, 8, 20, 28, 50, 82, and 126 are particularly stable because they have closed shells. Other nuclei can then be described theoretically as an inert closed-shell core plus valence nucleons that interact with a mean-field potential due to the core and with one another via effective interactions.
For medium to heavy nuclei the Bohr-Mottelson model describes collective excitations including transitions in the shape of nuclei.
An example of the trends in the low-lying excitation spectrum to explain is shown in the figure below. The left spectrum is for a nucleus with close to a magic number of nucleons and the right one for an almost half-filled shell. R_4/2 is the ratio of the energy of the lowest J=4+ state to that of the lowest J=2+ state, both measured relative to the ground state. (For a harmonic vibrator R_4/2 = 2, while for a rigid rotor R_4/2 = 10/3 ≈ 3.33.) B(E2) is the strength of the quadrupole transition between the 2+ state and the ground state.
The IBM Hamiltonian is written in terms of the most general possible combinations of the boson operators. This has a surprisingly simple form.
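For orientation, one standard four-parameter multipole form found in the literature is (a sketch only; the parameterization in the figure from the article may differ)

H = ε n_d + a₁ L·L + a₂ Q·Q + a₃ P†·P

where n_d counts the d bosons and L, Q, and P are the angular momentum, quadrupole, and pairing operators built from the s and d boson creation and annihilation operators.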
Note that it involves only four parameters. For a given nucleus these parameters can be fixed from experiment and, in principle, calculated from the shell model. The Hamiltonian can be written in a form that gives physical insight, connects to the Bohr-Mottelson model, and is amenable to a group-theoretical analysis that makes calculating and understanding the energy spectrum relatively simple.
Central to the group-theoretical analysis is the consideration of subalgebra chains, as shown below.
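The three chains all start from the U(6) algebra generated by bilinears of the s and d boson operators, and each ends in the rotational algebra O(3):

U(6) ⊃ U(5) ⊃ O(5) ⊃ O(3) (spherical, vibrational nuclei)
U(6) ⊃ SU(3) ⊃ O(3) (axially deformed, rotational nuclei)
U(6) ⊃ O(6) ⊃ O(5) ⊃ O(3) (γ-unstable nuclei)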
An example of an energy spectrum is shown below.
The fuzzy figures are taken from a helpful Physics Today article by Casten and Feng from 1984 (Aside: the article discusses an extension of the IBM involving supersymmetry, but I don't think that has been particularly fruitful).
The figure below connects the different parameter regimes of the model to the different subalgebra chains. The different vertices of the triangle correspond to different nuclear geometries and allow a connection to Aage Bohr's model for the surface excitations.
This is discussed in a nice review article, which includes the figure above.
Quantum phase transitions in the shapes of atomic nuclei
Pavel Cejnar, Jan Jolie, and Richard F. Casten
Aside: one thing that is not clear to me from the article concerns questions that arise because the nucleus has a finite number of degrees of freedom. Are the symmetries actually broken or is there tunneling between degenerate ground states?
Tuesday, October 22, 2024
Colloquium on 2024 Nobel Prizes
This Friday I am giving a colloquium for the UQ Physics department.
2024 Nobel Prizes in Physics and Chemistry: from biological physics to artificial intelligence and back
The 2024 Nobel Prize in Physics was awarded to John Hopfield and Geoffrey Hinton “for foundational discoveries and inventions that enable machine learning with artificial neural networks.” Half of the 2024 Chemistry prize was awarded to Demis Hassabis and John Jumper for “protein structure prediction” using artificial intelligence. I will describe the physics background needed to appreciate the significance of the awardees' work.
Hopfield proposed a simple theoretical model for how networks of neurons in a brain can store and recall memories. Hopfield drew on his background in and ideas from condensed matter physics, including the theory of spin glasses, the subject of the 2021 Physics Nobel Prize.
Hinton, a computer scientist, generalised Hopfield’s model, using ideas from statistical physics to propose a “Boltzmann machine” that used an artificial neural network to learn to identify patterns in data, by being trained on a finite set of examples.
For fifty years scientists have struggled with the following challenge in biochemistry: given the unique sequence of amino acids that make up a particular protein, can the native structure of the protein be predicted? Hassabis, a computer scientist, and Jumper, a theoretical chemist, used AI methods to solve this problem, highlighting the power of AI in scientific research.
I will briefly consider some issues these awards raise, including the blurring of boundaries between scientific disciplines, tensions between public and corporate interests, research driven by curiosity versus technological advance, and the limits of AI in scientific research.
Here is my current draft of the slides.
Saturday, October 19, 2024
John Hopfield on what physics is
A decade ago John Hopfield reflected on his scientific life in the Annual Review of Condensed Matter Physics, Whatever Happened to Solid State Physics?
"What is physics? To me—growing up with a father and mother who were both physicists—physics was not subject matter. The atom, the troposphere, the nucleus, a piece of glass, the washing machine, my bicycle, the phonograph, a magnet—these were all incidentally the subject matter. The central idea was that the world is understandable, that you should be able to take anything apart, understand the relationships between its constituents, do experiments, and on that basis be able to develop a quantitative understanding of its behavior.
Physics was a point of view that the world around us is, with effort, ingenuity, and adequate resources, understandable in a predictive and reasonably quantitative fashion. Being a physicist is a dedication to the quest for this kind of understanding."
He describes how this view was worked out in his research in solid state theory, in his move into biological physics, and in the paper for which he was awarded the Nobel Prize.
"Eventually, my knowledge of spin-glass lore (thanks to a lifetime of interaction with P.W. Anderson), Caltech chemistry computing facilities, and a little neurobiology led to the first paper in which I used the word neuron. It was to provide an entryway to working on neuroscience for many physicists..."
After he started working on biological physics in the late 1970s, he received an offer from Chemistry and Biology at Caltech, and Princeton Physics suggested he take it.
"In 1997, I returned to Princeton—in the Molecular Biology Department, which was interested in expanding into neurobiology. Although no one in that department thought of me as anything but a physicist, there was a grudging realization that biology could use an infusion of physics attitudes and viewpoints. I had by then strayed too far from conventional physics to be courted for a position in any physics department. So I was quite astonished in 2003 to be asked by the American Physical Society to be a candidate for vice president. And, I was very happy to be elected and ultimately to serve as the APS president. I had consistently felt that the research I was doing was entirely in the spirit and paradigms of physics, even when disowned by university physics departments."
Saturday, October 12, 2024
2024 Nobel Prize in Physics
I was happy to see John Hopfield awarded the Nobel Prize in Physics for his work on neural networks. The award is based on this paper from 1982:
Neural networks and physical systems with emergent collective computational abilities
One thing I find beautiful about the paper is how Hopfield drew on ideas about spin glasses (many competing interactions lead to many ground states and a complex energy landscape).
A central insight is that an efficient way to store the information describing multiple objects (different collective spin states in an Ising model) is in the inter-spin interaction constants (the J_ij's). These are the "weights" that are trained/learned in computer neural nets.
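To make this concrete, here is a minimal sketch in Python (the function names and parameter values are illustrative, not from Hopfield's paper): the Hebbian rule J_ij = (1/N) Σ_μ ξ_i^μ ξ_j^μ stores the patterns in the couplings, and zero-temperature dynamics then recalls a stored pattern from a corrupted version.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian rule: J_ij = (1/N) sum_mu xi_i^mu xi_j^mu, with no self-coupling."""
    n = patterns.shape[1]
    J = patterns.T @ patterns / n
    np.fill_diagonal(J, 0.0)
    return J

def recall(J, state, n_sweeps=10, rng=None):
    """Zero-temperature asynchronous dynamics: each spin aligns with its local field."""
    rng = rng or np.random.default_rng()
    s = state.copy()
    for _ in range(n_sweeps):
        for i in rng.permutation(len(s)):
            s[i] = 1 if J[i] @ s >= 0 else -1
    return s

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(2, 100))            # two random 100-"neuron" memories
J = train_hopfield(patterns)

probe = patterns[0].copy()
probe[rng.choice(100, size=10, replace=False)] *= -1     # corrupt 10 neurons
recovered = recall(J, probe, rng=rng)
print("overlap with stored memory:", recovered @ patterns[0] / 100)   # close to 1.0
```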
It should be noted that Hopfield's motivation was not at all to contribute to computer science. It was to understand a problem in biological physics: what is the physical basis for associative memory?
I have mixed feelings about Geoffrey Hinton sharing the prize. On the one hand, in his initial work, Hinton used physics ideas (Boltzmann weights) to extend Hopfield's ideas so they were useful in computer science. Basically, Hopfield considered a spin glass model at zero temperature and Hinton considered it at non-zero temperature. [Note: the temperature is not physical; it is just a parameter in a Boltzmann probability distribution for different states of the neural network.] Hinton certainly deserves lots of prizes, but I am not sure a physics one is appropriate. His work on AI has certainly been helpful for physics research. But so have lots of other advances in computer software and hardware, and those pioneers did not receive a prize.
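To illustrate the zero- versus non-zero-temperature distinction, here is a minimal sketch (again illustrative Python, not Hinton's actual algorithm, which also involves hidden units and a learning rule): each unit is updated stochastically with a Boltzmann probability, and Hopfield's deterministic rule is recovered as the inverse temperature β → ∞.

```python
import numpy as np

def glauber_sweep(J, s, beta, rng):
    """One sweep of finite-temperature dynamics on +/-1 units.
    p(s_i = +1) is a Boltzmann (logistic) function of the local field h_i;
    as beta -> infinity this reduces to the deterministic rule s_i = sign(h_i)."""
    for i in rng.permutation(len(s)):
        h = J[i] @ s                                   # local field on unit i
        p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))   # Boltzmann probability for s_i = +1
        s[i] = 1 if rng.random() < p_up else -1
    return s
```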
I feel a bit like I did with Jack Kilby getting a physics prize for his work on integrated circuits. I feel that sometimes the Nobel Committee just wants to remind the world how physics is so relevant to modern technology.
Ten years ago Hopfield wrote a nice scientific autobiography for the Annual Review of Condensed Matter Physics,
Whatever Happened to Solid State Physics?
After the 2021 Physics Nobel to Parisi, I reflected on the legacy of spin glasses, including the work of Hopfield.
Aside: I once pondered whether a chemist will ever win the Physics prize, given that many condensed matter physicists have won the chemistry prize. Well now, we have had an electronic engineer and a computer scientist winning the Physics prize.
Another aside: I think calling Hinton's network a Boltzmann machine is a scientific misnomer. I should add this to my list of people getting credit for things they did not do. Boltzmann never considered networks, spin glasses, or computer algorithms. Boltzmann was a genius, but I don't think we should be attaching his name to everything that involves a Boltzmann distribution. To me, this is a bit like calling the Metropolis algorithm for Monte Carlo simulations the Boltzmann algorithm.
Monday, October 7, 2024
Mental Health for Academics
Tomorrow I am giving a talk "Mental health for academics" for the ARC Centre for Engineered Quantum Systems as part of Mental Health Week.
Here is a video recording of my planned talk. As an experiment, I recorded a practice version of my talk and uploaded it to YouTube. Feedback on both the content and the technology is welcome.
Here are the slides.
A resource I mention at the end is the blog Voices of Academia, set up by Marissa Edwards from UQ.
Thursday, September 26, 2024
The multi-faceted character of emergence (part 2)
In the previous post, I considered five different characteristics that are often associated with emergence and classified them as being associated with ontology (what is real and observable) rather than epistemology (what we believe to be true).
Below I consider five more characteristics: self-organisation, unpredictability, irreducibility, contextuality and downward causation, and intra-stratum closure.
6. Self-organisation
Self-organisation is not a property of the system but a mechanism that a theorist invokes to explain how an emergent property comes into being. Self-organisation is also referred to as spontaneous order.
In the social sciences self-organisation is sometimes referred to as an endogenous cause, in contrast to an exogenous cause. There is no external force or agent causing the order, in contrast to order that is imposed externally. For example, suppose that in a city there is no government policy about the price of a loaf of sliced wholemeal bread or about how many loaves bakers should produce. It is observed that prices are almost always in the range of $4 to $5 per loaf, and that there are rarely bread shortages. This outcome is a result of the self-organisation of the free market, and economists would say the price range and its stability have an endogenous cause. In contrast, if the government legislated the price range and the production levels, that would be an exogenous cause. Friedrich Hayek emphasised the role of spontaneous order in economics. In biology, Stuart Kauffman equates emergence with spontaneous order and self-organisation.
In physics, the periodicity of the arrangement of atoms in a crystal is a result of self-organisation and has an endogenous cause. In contrast, the periodicity of atoms in an optical lattice is determined by the laser physicist who creates the lattice and so has an exogenous cause.
Self-organisation shows how local interactions can produce global properties. In different words, short-range interactions can lead to long-range order. After decades of debate and study, the Ising model showed that this was possible, as illustrated by the sketch below. Other examples of self-organisation include the flocking of birds and teamwork in ant colonies. There is no director or leader, but the system acts “as if” there is.
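A minimal sketch (standard Metropolis Monte Carlo in Python) of the Ising point: with purely nearest-neighbour interactions, below the critical temperature a two-dimensional lattice spontaneously develops a non-zero magnetisation, i.e., long-range order.

```python
import numpy as np

def metropolis_sweep(spins, beta, rng):
    """One Metropolis sweep of the 2D Ising model (nearest-neighbour coupling J = 1)."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        # energy cost of flipping spin (i, j); only its four neighbours contribute
        nb = (spins[(i+1) % L, j] + spins[(i-1) % L, j]
              + spins[i, (j+1) % L] + spins[i, (j-1) % L])
        dE = 2.0 * spins[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1

rng = np.random.default_rng(1)
L, beta = 32, 0.6                         # beta above beta_c ~ 0.44: the ordered phase
spins = rng.choice([-1, 1], size=(L, L))  # start from a random (disordered) state
for _ in range(1000):
    metropolis_sweep(spins, beta, rng)
print("magnetisation per spin:", abs(spins.mean()))   # typically close to 1; ~0 above T_c
```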
7. Unpredictability
Ernst Mayr (This is Biology, p.19) defines emergence as “in a structured system, new properties emerge at higher levels of integration that could not have been predicted from a knowledge of the lower-level components.” Philip Ball also defines emergence in terms of unpredictability (Quanta, 2024).
More broadly, in discussions of emergence, “prediction” is used in three different senses: logical prediction, historical prediction, and dynamical prediction.
Logical prediction (deduction) concerns whether one can predict (calculate) the emergent (novel) property of the whole system solely from a knowledge of all the properties of the parts of the system and their interactions. Logical predictability is one of the most contested characteristics of emergence. Sometimes “predict” is replaced with “difficult to predict”, “extremely difficult to predict”, “impossible to predict”, “almost impossible to predict”, or “possible in principle, but impossible in practice, to predict.”
As an aside, I note that philosophers distinguish between epistemological emergence and ontological emergence. They are associated with prediction that is "possible in principle, but difficult in practice" and "impossible in principle" respectively.
After an emergent property has been discovered experimentally sometimes it can be understood in terms of the properties of the system parts. In a sense “pre-diction” then becomes “post-diction.” An example is the BCS theory of superconductivity, which provided a posteriori, rather than a priori, understanding. In different words, development of the theory was guided by a knowledge of the phenomena that had already been observed and characterised experimentally. Thus, a keyword in the statement above about logical prediction is “solely”.
Historical prediction. Most new states of matter discovered by experimentalists were not predicted even though theorists knew the laws that the microscopic components of the system obeyed. Examples include superconductivity (elemental metals, cuprates, iron pnictides, organic charge transfer salts, …), superfluidity in liquid 4He, antiferromagnetism, quasicrystals, and the integer and fractional quantum Hall states.
There are a few exceptions where theorists did predict new states of matter. These include Bose-Einstein condensates (BECs) in dilute atomic gases, topological insulators, the Anderson insulator in disordered metals, the Haldane phase in integer-spin quantum antiferromagnetic chains, and the hexatic phase in two dimensions. It should be noted that the predictions of BECs and topological insulators were significantly helped by the fact that theorists could describe them starting from Hamiltonians of non-interacting particles. Furthermore, all of these predictions involved working with effective Hamiltonians. None started from microscopic Hamiltonians for specific materials.
Dynamical unpredictability concerns chaotic dynamical systems, where it arises from sensitivity to initial conditions. I do not see this as an example of emergence as it can occur in systems with only a few degrees of freedom. However, some authors do associate dynamical unpredictability with complexity and emergence.
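For concreteness, sensitivity to initial conditions is easy to exhibit in the chaotic logistic map (a textbook example, not tied to any particular physical system):

```python
# chaotic logistic map x -> 4 x (1 - x): two trajectories starting 1e-10 apart
x, y = 0.3, 0.3 + 1e-10
for n in range(60):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
print(abs(x - y))   # the separation has grown to order one
```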
8. Irreducibility and singularities
An emergent property cannot be reduced to properties of the parts, because if emergence is defined in terms of novelty, the parts do not have the property.
Emergence is also associated with the problem of theory reduction. Formally, this is the process where a more general theory reduces in a particular mathematical limit to a less general theory. For example, quantum mechanics reduces to classical mechanics in the limit where Planck’s constant goes to zero. Einstein’s theory of special relativity reduces to Newtonian mechanics in the limit where the speeds of massive objects become much less than the speed of light. Theory reduction is a subtle philosophical problem that is arguably poorly understood both by scientists [who oversimplify or trivialise it] and philosophers [who arguably overstate the problems it presents for science producing reliable knowledge]. Subtleties arise because the two different theories usually involve language and concepts that are "incommensurate" with one another.
Irreducibility is also related to the discontinuities and singularities associated with emergent phenomena. As emphasised independently by Hans Primas and Michael Berry, singularities occur because the mathematics of theory reduction involves singular asymptotic expansions. Primas illustrates this by considering a light wave incident on an object and producing a shadow. The shadow is an emergent property, well described by geometrical optics, but not by the more fundamental theory of Maxwell’s electromagnetism. The two theories are related in the asymptotic limit in which the wavelength of light in Maxwell’s theory tends to zero. This example illustrates that theory reduction is compatible with the emergence of novelty. Primas also considers how the Born-Oppenheimer approximation, which is central to solid state theory and quantum chemistry, is associated with a singular asymptotic expansion (in the ratio of the mass of an electron to the mass of an atomic nucleus in the system).
Berry considers several other examples of theory reduction, including going from general to special relativity, from statistical mechanics to thermodynamics, and from viscous (Navier-Stokes) fluid dynamics to inviscid (Euler) fluid dynamics. He has discussed in detail how the caustics that occur in ray optics are an emergent phenomenon associated with singular asymptotic expansions in the wave theory.
The philosopher of science Jeremy Butterfield showed rigorously that theory reduction occurred for four specific systems that exhibited emergence, defined by him as a novel and robust property. Thus, novelty is not sufficient for irreducibility.
9. Contextuality and downward causation
Any real system has a context. For example, it has a boundary and an environment, both in time and space. In many cases the properties of the system are completely determined by the parts of the system and their interactions; previous history and boundaries do not matter. However, in some cases the context may have a significant influence on the state of the system. An example is Rayleigh-Bénard convection cells and turbulent flow, whose existence and nature are determined by the interaction of the fluid with the container boundaries. A biological example concerns what factors determine the structure, properties, and function of a particular protein (a linear chain of amino acids). It is now known that the DNA sequence that encodes the amino acid sequence is not the only factor, in contradiction to some versions of the Central Dogma of molecular biology. Other factors may be the type of cell that contains the protein and the network of other proteins in which the particular protein is embedded. Context sometimes matters.
Supervenience is the idea that once the micro level is fixed, macro levels are fixed too. The examples above might be interpreted as evidence against supervenience. Supervenience is used to argue against “the possibility for mental causation above and beyond physical causation.”
Downward causation is sometimes equated with emergence, particularly in debates about the nature of consciousness. In the context of biology, Denis Noble defines downward causation as when higher level processes can cause changes in lower level properties and processes. He gives examples where physiological effects can switch on and off individual genes or signalling processes in cells, including maternal effects and epigenetics.
10. Intra-stratum closure: informational, causal, and computational
The ideas described below were recently developed by Rosas et al. from a computer science perspective. They defined emergence in terms of universality and discussed its relationship to informational closure, causal closure, and computational closure. Each of these is given a precise technical definition in their paper. Here I give the sense of their definitions. In considering a general system they do not pre-define the micro and macro levels of a system but consider how these levels might be defined so that universality holds, i.e., so that properties at the macro-level are independent of the details of the micro-level.
Informational closure means that to predict the dynamics of the system at the macroscale an observer does not need any additional information about the details of the system at the microscale. Equilibrium thermodynamics and fluid dynamics are examples.
Causal closure means that the system can be controlled at the macroscale without any knowledge of lower-level information. For example, changing the software code that is running on a computer allows one to reliably control the microstate of the hardware of the computer regardless of what is happening with the trajectories of electrons in the computer.
Computational closure is a more technical concept, being defined in terms of “a conceptual device called the ε-(epsilon) machine. This device can exist in some finite set of states and can predict its own future state on the basis of its current one... for an emergent system that is computationally closed, the machines at each level can be constructed by coarse-graining the components on just the level below: They are ‘strongly lumpable.’”
Rosas et al. show that informational closure and causal closure are equivalent and that they are more restrictive than computational closure. It is not clear to me how these closures relate to novelty as a definition of emergence.