Wednesday, October 30, 2024

A very effective Hamiltonian in nuclear physics

Atomic nuclei are complex quantum many-body systems. Effective theories have helped provide a better understanding of them. The best-known are the shell model, the (Aage) Bohr-Mottelson theory of non-spherical nuclei, and the liquid drop model. Here I introduce the Interacting Boson Model (IBM), which provides somewhat of a microscopic basis for the Bohr-Mottelson theory. Other effective theories in nuclear physics are chiral perturbation theory, Weinberg's theory for nucleon-pion interactions, and Wigner's random matrix theory.

The shell model has similarities to microscopic models in atomic physics. A major achievement is that it explains the origin of the magic numbers: nuclei in which the number of protons or neutrons is 2, 8, 20, 28, 50, 82, or 126 are particularly stable because they have closed shells. Other nuclei can then be described theoretically as an inert closed-shell core plus valence nucleons that interact with a mean-field potential due to the core and with one another via effective interactions.

For medium to heavy nuclei the Bohr-Mottelson model describes collective excitations including transitions in the shape of nuclei.

An example of the trends in the low-lying excitation spectrum to be explained is shown in the figure below. The left spectrum is for a nucleus with close to a magic number of nucleons and the right one is for an almost half-filled shell. R_4/2 is the ratio of the energy of the J=4+ state to that of the J=2+ state, measured relative to the ground state. B(E2) is the strength of the quadrupole transition between the 2+ state and the ground state.


The Interacting Boson Model (IBM) is surprisingly simple and successful. It illustrates the importance of quasi-particles, builds on the stability of closed shells, and neglects many degrees of freedom. It describes even-even nuclei, i.e., nuclei with an even number of protons and an even number of neutrons. The basic entities in the theory are pairs of nucleons. Each pair is taken to be in either an s-wave state or a d-wave state. There are five d-wave states (corresponding to the 2J+1 possible states of total angular momentum J=2). Each state is represented by a boson creation operator, and so the single-boson Hilbert space is six-dimensional. If the six states are degenerate [which they are not] the model has U(6) symmetry.
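To get a feel for the size of the model space: the states of N bosons distributed over these six single-boson states are the symmetric states counted by the stars-and-bars formula. A quick sketch (the function name is my own):

```python
from math import comb

def ibm_dimension(n_bosons, n_modes=6):
    """Number of symmetric states of n_bosons distributed over n_modes
    single-boson states (one s boson plus five d bosons): stars and bars."""
    return comb(n_bosons + n_modes - 1, n_modes - 1)

print(ibm_dimension(1))   # 6: the six single-boson states
print(ibm_dimension(10))  # 3003: e.g., a nucleus with 10 valence pairs
```

Even for a mid-shell nucleus the space is only a few thousand states, which is why exact diagonalisation of the IBM is easy compared to the full shell model.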

The IBM Hamiltonian is written in terms of the most general possible combinations of the boson operators, and it has a surprisingly simple form.

Note that it involves only four parameters. For a given nucleus these parameters can be fixed from experiment, and in principle calculated from the shell model. The Hamiltonian can be written in a form that gives physical insight, connects to the Bohr-Mottelson model and is amenable to a group theoretical analysis that makes calculation and understanding of the energy spectrum relatively simple.

Central to the group theoretical analysis is the consideration of subalgebra chains, as shown below.

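For readers who cannot make out the figure, the three standard dynamical-symmetry chains of the IBM, each ending in the rotation group O(3) so that angular momentum remains a good quantum number, are:

U(6) ⊃ U(5) ⊃ O(5) ⊃ O(3)   (vibrational, spherical nuclei)
U(6) ⊃ SU(3) ⊃ O(3)         (rotational, axially deformed nuclei)
U(6) ⊃ O(6) ⊃ O(5) ⊃ O(3)   (gamma-unstable nuclei)

For each chain the energy spectrum can be written down analytically in terms of the Casimir invariants of the algebras in the chain.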

An example of an energy spectrum is shown below.

The fuzzy figures are taken from a helpful Physics Today article by Casten and Feng from 1984 (Aside: the article discusses an extension of the IBM involving supersymmetry, but I don't think that has been particularly fruitful).

The figure below connects the different parameter regimes of the model to the different subalgebra chains.


The nuclide chart below has entries with colour shading corresponding to their parameter values in the IBM, according to the symmetry triangle above.

The different vertices of the triangle correspond to different nuclear geometries and allow a connection to Aage Bohr's model for the surface excitations. 

This is discussed in a nice review article, which includes the figure above.

Quantum phase transitions in shape of nuclei

Pavel Cejnar, Jan Jolie, and Richard F. Casten

Aside: one thing that is not clear to me from the article concerns questions that arise because the nucleus has a finite number of degrees of freedom. Are the symmetries actually broken or is there tunneling between degenerate ground states?   

Tuesday, October 22, 2024

Colloquium on 2024 Nobel Prizes


This Friday I am giving a colloquium for the UQ Physics department.

2024 Nobel Prizes in Physics and Chemistry: from biological physics to artificial intelligence and back

The 2024 Nobel Prize in Physics was awarded to John Hopfield and Geoffrey Hinton “for foundational discoveries and inventions that enable machine learning with artificial neural networks.” Half of the 2024 Chemistry prize was awarded to Demis Hassabis and John Jumper for “protein structure prediction” using artificial intelligence. I will describe the physics background needed to appreciate the significance of the awardees' work.

Hopfield proposed a simple theoretical model for how networks of neurons in a brain can store and recall memories. Hopfield drew on his background in and ideas from condensed matter physics, including the theory of spin glasses, the subject of the 2021 Physics Nobel Prize.

Hinton, a computer scientist, generalised Hopfield’s model, using ideas from statistical physics to propose a “Boltzmann machine” that used an artificial neural network to learn to identify patterns in data, by being trained on a finite set of examples. 

For fifty years scientists have struggled with the following challenge in biochemistry: given the unique sequence of amino acids that makes up a particular protein, can the native structure of the protein be predicted? Hassabis, a computer scientist, and Jumper, a theoretical chemist, used AI methods to solve this problem, highlighting the power of AI in scientific research.

I will briefly consider some issues these awards raise, including the blurring of boundaries between scientific disciplines, tensions between public and corporate interests, research driven by curiosity versus technological advance, and the limits of AI in scientific research.

Here is my current draft of the slides.

Saturday, October 19, 2024

John Hopfield on what physics is

A decade ago John Hopfield reflected on his scientific life in Annual Reviews in Condensed Matter Physics, Whatever Happened to Solid State Physics?

"What is physics? To me—growing up with a father and mother who were both physicists—physics was not subject matter. The atom, the troposphere, the nucleus, a piece of glass, the washing machine, my bicycle, the phonograph, a magnet—these were all incidentally the subject matter. The central idea was that the world is understandable, that you should be able to take anything apart, understand the relationships between its constituents, do experiments, and on that basis be able to develop a quantitative understanding of its behavior. 

Physics was a point of view that the world around us is, with effort, ingenuity, and adequate resources, understandable in a predictive and reasonably quantitative fashion. Being a physicist is a dedication to the quest for this kind of understanding."

He describes how this view was worked out in his work in solid state theory and moved into biological physics and the paper for which he was awarded the Nobel Prize. 

"Eventually, my knowledge of spin-glass lore (thanks to a lifetime of interaction with P.W. Anderson), Caltech chemistry computing facilities, and a little neurobiology led to the first paper in which I used the word neuron. It was to provide an entryway to working on neuroscience for many physicists..."

After he started working on biological physics in the late 1970s he got an offer from Chemistry and Biology at Caltech, and Princeton Physics suggested he take it.

"In 1997, I returned to Princeton—in the Molecular Biology Department, which was interested in expanding into neurobiology. Although no one in that department thought of me as anything but a physicist, there was a grudging realization that biology could use an infusion of physics attitudes and viewpoints. I had by then strayed too far from conventional physics to be courted for a position in any physics department. So I was quite astonished in 2003 to be asked by the American Physical Society to be a candidate for vice president. And, I was very happy to be elected and ultimately to serve as the APS president. I had consistently felt that the research I was doing was entirely in the spirit and paradigms of physics, even when disowned by university physics departments."

Saturday, October 12, 2024

2024 Nobel Prize in Physics

I was happy to see John Hopfield awarded the Nobel Prize in Physics for his work on neural networks. The award is based on this paper from 1982

Neural networks and physical systems with emergent collective computational abilities

One thing I find beautiful about the paper is how Hopfield drew on ideas about spin glasses (many competing interactions lead to many ground states and a complex energy landscape).

A central insight is that an efficient way to store the information describing multiple objects (different collective spin states in an Ising model) is in terms of the inter-spin interaction constants (J_ij's) in the Ising model. These are the "weights" that are trained/learned in computer neural nets.
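A minimal sketch of this idea: store patterns in the J_ij's with the Hebbian (outer-product) rule, then recall one of them from a corrupted version. The sizes and names below are my own illustrative choices, not from Hopfield's paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Store two random patterns of +/-1 "spins" in the couplings J_ij
# using the Hebbian (outer-product) rule.
n = 100
patterns = rng.choice([-1, 1], size=(2, n))
J = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(J, 0)  # no self-coupling

# Corrupt the first pattern by flipping 10 of its 100 spins.
state = patterns[0].copy()
flipped = rng.choice(n, size=10, replace=False)
state[flipped] *= -1

# Recall: repeatedly align each spin with its local field.
for _ in range(5):
    state = np.where(J @ state >= 0, 1, -1)

print(np.array_equal(state, patterns[0]))  # the stored memory is recovered
```

The corrupted input falls into the "basin of attraction" of the stored pattern, just as a spin glass relaxes into a nearby local minimum of its energy landscape.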

It should be noted that Hopfield's motivation was not at all to contribute to computer science. It was to understand a problem in biological physics: what is the physical basis for associative memory? 

I have mixed feelings about Geoffrey Hinton sharing the prize. On the one hand, in his initial work, Hinton used physics ideas (Boltzmann weights) to extend Hopfield's ideas so they were useful in computer science. Basically, Hopfield considered a spin glass model at zero temperature and Hinton considered it at non-zero temperature. [Note: the temperature is not physical; it is just a parameter in a Boltzmann probability distribution for different states of the neural network.] Hinton certainly deserves lots of prizes, but I am not sure a physics one is appropriate. His work on AI has certainly been helpful for physics research. But so have lots of other advances in computer software and hardware, and those pioneers did not receive a prize.

I feel a bit like I did with Jack Kilby getting a physics prize for his work on integrated circuits. I feel that sometimes the Nobel Committee just wants to remind the world how physics is so relevant to modern technology.

Ten years ago Hopfield wrote a nice scientific autobiography for Annual Reviews in Condensed Matter Physics,

Whatever Happened to Solid State Physics?

After the 2021 Physics Nobel to Parisi, I reflected on the legacy of spin glasses, including the work of Hopfield.

Aside: I once pondered whether a chemist will ever win the Physics prize, given that many condensed matter physicists have won the chemistry prize. Well now, we have had an electronic engineer and a computer scientist winning the Physics prize.

Another aside: I think calling Hinton's network a Boltzmann machine is a scientific misnomer. I should add this to my list of people getting credit for things they did not do. Boltzmann never considered networks, spin glasses, or computer algorithms. Boltzmann was a genius, but I don't think we should be attaching his name to everything that involves a Boltzmann distribution. To me, this is a bit like calling the Metropolis algorithm for Monte Carlo simulations the Boltzmann algorithm.

Monday, October 7, 2024

Mental Health for Academics

Tomorrow I am giving a talk "Mental health for academics" for the ARC Centre for Engineered Quantum Systems as part of Mental Health Week.

Here is a video recording of my planned talk. As an experiment, I recorded a practice version of my talk and uploaded it to YouTube. Feedback on both the content and the technology is welcome.

Here are the slides.

A resource I mention at the end is the blog Voices of Academia, set up by Marissa Edwards from UQ.

Thursday, September 26, 2024

The multi-faceted character of emergence (part 2)

In the previous post, I considered five different characteristics that are often associated with emergence and classified them as being associated with ontology (what is real and observable) rather than epistemology (what we believe to be true). 

Below I consider five more characteristics: self-organisation, unpredictability, irreducibility, contextuality and downward causation, and intra-stratum closure.

6. Self-organisation

Self-organisation is not a property of the system but a mechanism that a theorist says causes an emergent property to come into being. Self-organisation is also referred to as spontaneous order. 

In the social sciences self-organisation is sometimes referred to as an endogenous cause, in contrast to an exogenous cause. There is no external force or agent causing the order, in contrast to order that is imposed externally. For example, suppose that in a city there is no government policy about the price of a loaf of sliced wholemeal bread or on how many loaves bakers should produce. It is observed that prices are almost always in the range of $4 to $5 per loaf, and that rarely are there bread shortages. This outcome is a result of the self-organisation of the free market, and economists would say the price range and its stability have an endogenous cause. In contrast, if the government legislated the price range and the production levels, that would be an exogenous cause. Friedrich Hayek emphasised the role of spontaneous order in economics. In biology, Stuart Kauffman equates emergence with spontaneous order and self-organisation.

In physics, the periodicity of the arrangement of atoms in a crystal is a result of self-organisation and has an endogenous cause. In contrast, the periodicity of atoms in an optical lattice is determined by the laser physicist who creates the lattice and so has an exogenous cause.

Self-organisation shows how local interactions can produce global properties. In different words, short-range interactions can lead to long-range order. After decades of debate and study, the Ising model showed that this was possible. Other examples of self-organisation include the flocking of birds and teamwork in ant colonies. There is no director or leader, but the system acts “as if” there is.
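The Ising point can be illustrated numerically. The sketch below is a standard Metropolis simulation (the lattice size, temperatures, and number of sweeps are my own illustrative choices): below the critical temperature, nearest-neighbour interactions alone sustain long-range order, while above it they do not.

```python
import numpy as np

def ising_magnetisation(L=16, T=1.5, sweeps=200, seed=1):
    """Metropolis simulation of the 2D Ising model (J = 1, k_B = 1)
    with periodic boundaries, starting from the fully ordered state.
    Returns |average spin| after `sweeps` Monte Carlo sweeps."""
    rng = np.random.default_rng(seed)
    s = np.ones((L, L), dtype=int)
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.integers(L, size=2)
            nb = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                  + s[i, (j + 1) % L] + s[i, (j - 1) % L])
            dE = 2 * s[i, j] * nb  # energy cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[i, j] *= -1
    return abs(s.mean())

# Below T_c (about 2.27) long-range order survives; well above it, it does not.
print(ising_magnetisation(T=1.5), ising_magnetisation(T=4.0))
```

The first number stays close to one (an ordered magnet) and the second is close to zero (a paramagnet), even though the microscopic couplings are identical in the two runs.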

7. Unpredictability

Ernst Mayr (This is Biology, p.19) defines emergence as “in a structured system, new properties emerge at higher levels of integration that could not have been predicted from a knowledge of the lower-level components.” Philip Ball also defines emergence in terms of unpredictability (Quanta, 2024).

More broadly, in discussions of emergence, “prediction” is used in three different senses: logical prediction, historical prediction, and dynamical prediction.

Logical prediction (deduction) concerns whether one can predict (calculate) the emergent (novel) property of the whole system solely from a knowledge of all the properties of the parts of the system and their interactions. Logical predictability is one of the most contested characteristics of emergence. Sometimes “predict” is replaced with “difficult to predict”, “extremely difficult to predict”, “impossible to predict”, “almost impossible to predict”, or “possible in principle, but impossible in practice, to predict.” 

As an aside, I note that philosophers distinguish between epistemological emergence and ontological emergence. They are associated with prediction that is "possible in principle, but difficult in practice" and "impossible in principle" respectively.

After an emergent property has been discovered experimentally sometimes it can be understood in terms of the properties of the system parts. In a sense “pre-diction” then becomes “post-diction.” An example is the BCS theory of superconductivity, which provided a posteriori, rather than a priori, understanding. In different words, development of the theory was guided by a knowledge of the phenomena that had already been observed and characterised experimentally. Thus, a keyword in the statement above about logical prediction is “solely”. 

Historical prediction. Most new states of matter discovered by experimentalists were not predicted even though theorists knew the laws that the microscopic components of the system obeyed. Examples include superconductivity (elemental metals, cuprates, iron pnictides, organic charge transfer salts, …), superfluidity in liquid 4He, antiferromagnetism, quasicrystals, and the integer and fractional quantum Hall states.

There are a few exceptions where theorists did predict new states of matter. These include Bose-Einstein condensates (BECs) in dilute atomic gases, topological insulators, the Anderson insulator in disordered metals, the Haldane phase in integer-spin quantum antiferromagnetic chains, and the hexatic phase in two dimensions. It should be noted that the predictions of BECs and topological insulators were significantly helped by the fact that theorists could start from Hamiltonians of non-interacting particles. Furthermore, all of these predictions involved working with effective Hamiltonians. None started with microscopic Hamiltonians for specific materials.

Dynamical unpredictability arises in chaotic dynamical systems, where it results from sensitivity to initial conditions. I do not see this as an example of emergence as it can occur in systems with only a few degrees of freedom. However, some authors do associate dynamical unpredictability with complexity and emergence.
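Sensitivity to initial conditions can be seen with one degree of freedom, using the logistic map x → rx(1−x) in its chaotic regime (the starting point and perturbation size below are arbitrary illustrative choices):

```python
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r*x*(1-x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)  # almost identical starting point

print(abs(a[5] - b[5]))    # still tiny after 5 steps
print(abs(a[50] - b[50]))  # the trajectories have completely diverged
```

An initial difference of one part in ten billion grows exponentially, so after fifty steps the two trajectories are unrelated, yet the system has a single degree of freedom and nothing I would call an emergent property.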

8. Irreducibility and singularities

An emergent property cannot be reduced to properties of the parts, because if emergence is defined in terms of novelty, the parts do not have the property. 

Emergence is also associated with the problem of theory reduction. Formally, this is the process where a more general theory reduces in a particular mathematical limit to a less general theory. For example, quantum mechanics reduces to classical mechanics in the limit where Planck’s constant goes to zero. Einstein’s theory of special relativity reduces to Newtonian mechanics in the limit where the speeds of massive objects become much less than the speed of light. Theory reduction is a subtle philosophical problem that is arguably poorly understood both by scientists [who oversimplify or trivialise it] and philosophers [who arguably overstate the problems it presents for science producing reliable knowledge]. Subtleties arise because the two different theories usually involve language and concepts that are "incommensurate" with one another. 

Irreducibility is also related to the discontinuities and singularities associated with emergent phenomena. As emphasised independently by Hans Primas and Michael Berry, singularities occur because the mathematics of theory reduction involves singular asymptotic expansions. Primas illustrates this by considering a light wave incident on an object and producing a shadow. The shadow is an emergent property, well described by geometrical optics, but not by the more fundamental theory of Maxwell’s electromagnetism. The two theories are related in the asymptotic limit that the wavelength of light in Maxwell’s theory tends to zero. This example illustrates that theory reduction is compatible with the emergence of novelty. Primas also considers how the Born-Oppenheimer approximation, which is central to solid state theory and quantum chemistry, is associated with a singular asymptotic expansion (in the ratio of the mass of an electron to the mass of an atomic nucleus in the system).

Berry considers several other examples of theory reduction, including going from general to special relativity, from statistical mechanics to thermodynamics, and from viscous (Navier-Stokes) fluid dynamics to inviscid (Euler) fluid dynamics. He has discussed in detail how the caustics that occur in ray optics are an emergent phenomenon associated with singular asymptotic expansions in the wave theory.

The philosopher of science Jeremy Butterfield showed rigorously that theory reduction occurred for four specific systems that exhibited emergence, defined by him as a novel and robust property. Thus, novelty is not sufficient for irreducibility.

9. Contextuality and downward causation

Any real system has a context. For example, it has a boundary and an environment, both in time and space. In many cases the properties of the system are completely determined by the parts of the system and their interactions. Previous history and boundaries do not matter. However, in some cases the context may have a significant influence on the state of the system. An example is Rayleigh-Bénard convection cells and turbulent flow, whose existence and nature are determined by the interaction of the fluid with the container boundaries. A biological example concerns what factors determine the structure, properties, and function of a particular protein (a linear chain of amino acids). It is now known that the DNA sequence that encodes the amino acid sequence is not the only factor, in contradiction to some versions of the Central Dogma of molecular biology. Other factors may be the type of cell that contains the protein and the network of other proteins in which the particular protein is embedded. Context sometimes matters.

Supervenience is the idea that once the micro level is fixed, macro levels are fixed too. The examples above might be interpreted as evidence against supervenience. Supervenience is used to argue against “the possibility for mental causation above and beyond physical causation.” 

Downward causation is sometimes equated with emergence, particularly in debates about the nature of consciousness. In the context of biology, Denis Noble defines downward causation as when higher level processes can cause changes in lower level properties and processes. He gives examples where physiological effects can switch on and off individual genes or signalling processes in cells, including maternal effects and epigenetics.

10. Intra-stratum closure: informational, causal, and computational

The ideas described below were recently developed by Rosas et al. from a computer science perspective. They defined emergence in terms of universality and discussed its relationship to informational closure, causal closure, and computational closure. Each of these is given a precise technical definition in their paper. Here I give the sense of their definitions. In considering a general system they do not pre-define the micro- and macro-levels but consider how these might be defined so that universality holds, i.e., so that properties at the macro-level are independent of the details of the micro-level.

Informational closure means that to predict the dynamics of the system at the macroscale an observer does not need any additional information about the details of the system at the microscale. Equilibrium thermodynamics and fluid dynamics are examples. 

Causal closure means that the system can be controlled at the macroscale without any knowledge of lower-level information. For example, changing the software code that is running on a computer allows one to reliably control the microstate of the hardware of the computer regardless of what is happening with the trajectories of electrons in the computer.

Computational closure is a more technical concept, being defined in terms of “a conceptual device called the ε-(epsilon) machine. This device can exist in some finite set of states and can predict its own future state on the basis of its current one... for an emergent system that is computationally closed, the machines at each level can be constructed by coarse-graining the components on just the level below: They are ‘strongly lumpable.’”

Rosas et al. show that informational closure and causal closure are equivalent, and that they are more restrictive than computational closure. It is not clear to me how these closures relate to novelty as a definition of emergence.

In summary, emergence means different things to different people. I have listed ten different characteristics that have been associated with emergent properties. They are not all equivalent and so when discussing emergence it is important to be clear about which characteristic one is using to define emergence.

Tuesday, September 24, 2024

The multi-faceted character of emergence (part 1)

There is more to emergence than novel properties, i.e., where a whole system has a property that the individual components of the system do not have. Here I focus on emergent properties, but in most cases “property” might be replaced with state, phenomenon, or entity. I now discuss ten characteristics often associated with emergence, beyond novelty. Some people include one or more of these characteristics in their definitions of emergence. However, I do not include them in my definition because, as I explain, some of the characteristics are contentious. Some may not be necessary or sufficient for novel system properties.

The first five characteristics discussed below might be classified as objective (i.e., observable properties of the system) and the second five as subjective (i.e., associated with how an investigator thinks about the system). In different words, the first five are mostly concerned with ontology (what is real) and the second five with epistemology (what we know). The first five characteristics concern discontinuities, universality, diversity, mesoscales, and modification of parts. The second five concern self-organisation, unpredictability, irreducibility, downward causation, and closure. 

1. Discontinuities 

Quantitative changes in the system can become qualitative changes in the system. For example, in condensed matter physics spontaneous symmetry breaking only occurs in the thermodynamic limit (i.e., when the number of particles of the system becomes infinite). More is different. Thus, as a quantitative change in the system size occurs, the order parameter becomes non-zero. In a system that undergoes a phase transition at a non-zero temperature, a small change in temperature can lead to the appearance of order and to a new state of matter. For a first-order phase transition, there is a discontinuity in properties such as the entropy and density. These discontinuities define a phase boundary in the pressure-temperature diagram. For continuous phase transitions the order parameter is a continuous function of temperature, becoming non-zero at the critical temperature. However, the derivative with respect to temperature may be discontinuous, and/or thermodynamic properties such as the specific heat and the susceptibility associated with the order parameter may diverge as the critical temperature is approached.
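In the standard notation for critical exponents, these two behaviours are

m(T) ∝ (T_c − T)^β   for T just below the critical temperature T_c,
C(T) ∝ |T − T_c|^(−α),

so the order parameter m is continuous at T_c while the specific heat C diverges if the exponent α is positive.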

Two different states of a system are said to be adiabatically connected if one can smoothly deform one state into the other and all the properties of the system also change smoothly. The case of the liquid-gas transition illustrates subtle issues about defining emergence. A discontinuity does not imply a qualitative difference (novelty). On the one hand, there is a discontinuity in the density and entropy of the system as the liquid-gas phase boundary is crossed in the pressure-temperature diagram. On the other hand, there is no qualitative difference between a gas and a liquid. There is only a quantitative difference: the density of the gas is less than that of the liquid, albeit sometimes by orders of magnitude. The liquid and gas states can be adiabatically connected: there is a path in the pressure-temperature phase diagram that connects the liquid and gas states without any discontinuities in properties.

The ferromagnetic state also raises questions, as illustrated by a debate between Rudolf Peierls and Phil Anderson about whether ferromagnetism exhibits spontaneous symmetry breaking. Anderson argued that it did not as, in contrast to the antiferromagnetic state, a non-zero magnetisation (order parameter) occurs for finite systems and the magnetic order does not change the excitation spectrum, i.e., produce a Goldstone boson. On the other hand, singularities in properties at the Curie temperature (critical temperature for ferromagnetism) only exist in the thermodynamic limit. Also, a small change in the temperature, from just above the Curie temperature to below, can produce a qualitative change, a non-zero magnetisation.

2. Universality

Properties often referred to as emergent are universal in the sense that they are independent of many of the details of the parts of the system. There may be many different systems that have a particular emergent property. For example, superconductivity is present in metals with a diverse range of crystal structures and chemical compositions.

Robustness is related to universality. If small changes are made to the composition of the system (for example replacing some of the atoms in the system with atoms of different chemical element) the novel property of the system is still present. In elementary superconductors, introducing non-magnetic impurity atoms has no effect on the superconductivity.

Universality is both a blessing and a curse for theory. Universality can make it easier to develop successful theories because it means that many details need not be included in a theory in order for it to successfully describe an emergent phenomenon. This is why effective theories and toy models can work even better than might be expected. Universality can make theories more powerful because they can describe a wider range of systems. For example, properties of elemental superconductors can be described by BCS theory and by Ginzburg-Landau theory, even though the materials are chemically and structurally diverse. The curse of universality for theory is that it illustrates the problems of “under-determination of theory”, “over-fitting of data”, and “sloppy theories” [Sethna et al.]. A theory can agree with experiment even when the parameters used in the theory are quite different from the actual ones. For example, the observed phase diagram of water can be reproduced, sometimes with impressive quantitative detail, by combining classical statistical mechanics with empirical force fields that assume water molecules can be treated as being composed purely of point charges.

Suppose we start with a specific microscopic theory and calculate the macroscopic properties of the system, and they agree with experiment. It would then be tempting to think that we have the correct microscopic theory. However, universality suggests this may not be the case.

For example, consider the case of a gas of weakly interacting atoms or molecules. We can treat the gas particles as classical or quantum. Statistical mechanics gives exactly the same equation of state and specific heat capacity for both microscopic descriptions. The only difference may be the Gibbs paradox [the calculated entropy is not an extensive quantity], which is sensitive to whether or not the particles are treated as identical. Unlike the zeroth, first, and second laws of thermodynamics, the third law does require that the microscopic theory be quantum. Laughlin discusses these issues in terms of “protectorates” that hide “ultimate causes”.
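The Gibbs paradox can be made concrete with a short calculation. Below is a sketch in reduced units (thermal wavelength set to one; the particle numbers and volumes are arbitrary illustrative choices): only when the 1/N! factor for identical particles is included is the calculated entropy extensive.

```python
import math

def entropy(N, V, distinguishable=False):
    """Ideal-gas entropy per k_B, with the thermal wavelength set to 1
    and Stirling's approximation ln N! ~ N ln N - N."""
    if distinguishable:
        return N * (math.log(V) + 1.5)   # no 1/N! factor
    return N * (math.log(V / N) + 2.5)   # Sackur-Tetrode form

# Double the system at fixed density (N and V both doubled).
# An extensive entropy should exactly double.
print(entropy(200, 2000) / entropy(100, 1000))          # 2.0: extensive
print(entropy(200, 2000, distinguishable=True)
      / entropy(100, 1000, distinguishable=True))       # more than 2: not extensive
```

The distinguishable-particle calculation over-counts states and gives a spurious entropy of mixing when two samples of the same gas are combined.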

In some physical systems, universality can be defined in a rigorous technical sense, making use of the concepts and techniques of the renormalisation group and scaling. These techniques provide a method to perform coarse graining, to derive effective theories and effective interactions, and to define universality classes of systems. There are also questions of how universality is related to the robustness of strata, and the independence of effective theories from the coarse-graining procedure.

3. Diversity

Even when a system is composed of a small number of different components and interactions, the large number of possible stable states with qualitatively different properties that the system can have is amazing. Every snowflake is different. Water is found in 18 distinct solid states. All proteins are composed of linear chains of 20 different amino acids. Yet in the human body there are more than 100,000 different proteins, and all perform specific biochemical functions. We encounter an incredible diversity of human personalities, cultures, and languages. A stunning case of diversity is life on earth. Millions of different plant and animal species are all an expression of different sequences of the four DNA bases: A, G, T, and C.

This diversity is related to the idea that "simple models can describe complex behaviour". One example is Conway’s Game of Life. Another example is how simple Ising models with a few competing interactions can describe a devil's staircase of ground states or the multitude of different atomic orderings found in binary alloys.
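The Game of Life is simple enough to state in a few lines. Here is a minimal sketch of one update rule and the classic glider, which reassembles itself one cell diagonally along every four generations (the coordinate convention and function name are mine).

```python
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life; live is a set of (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next generation if it has exactly 3 live neighbours,
    # or has 2 live neighbours and is currently alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The glider: five cells that translate by (+1, +1) every four generations.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
# g is now the original glider shifted one cell diagonally
```

Two rules, applied uniformly, are enough to produce gliders, oscillators, and even self-replicating patterns: simple models describing complex behaviour.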

Goldenfeld and Kadanoff defined complexity [emergence] as “structure with variations”. Holland (VSI) discusses “perpetual novelty”, giving the example of the game of chess, where a typical game may involve of the order of 10^50 move sequences. “Motifs” are recurring patterns (sequences of moves) in games.

Condensed matter physics illustrates diversity with the many different states of matter that have been discovered. The underlying microscopics is “just” electrons and atomic nuclei interacting according to Coulomb’s law.

The significance of this diversity might be downplayed by saying that it is just a result of combinatorics. But such a claim overlooks the issue of the stability of the diverse states that are observed. In a system composed of many components each of which can take on a few states the number of possible states of the whole system grows exponentially with the number of components. For example, for a chain of ten amino acids there are 20^10, about 10^13, different possible linear sequences. But this does not mean that all these sequences will produce a functional protein, i.e., a molecule that will fold rapidly (on the timescale of milliseconds) into a stable tertiary structure and perform a useful biochemical function such as catalysis of a specific chemical reaction or signal transduction.
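The exponential growth of the state space is a one-line calculation (a trivial sketch; the function name is mine):

```python
def num_sequences(n, alphabet=20):
    """Number of distinct linear chains of n residues, choosing from
    `alphabet` amino acids at each position: alphabet**n."""
    return alphabet ** n

# A 10-residue chain already has 20**10 = 10,240,000,000,000 (~10^13)
# possible sequences; only a tiny fraction fold into functional proteins.
```

Each extra residue multiplies the count by 20, so functional, stably folding proteins occupy a vanishingly small corner of sequence space.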

4. Simple entities at the mesoscale 

A key idea in condensed matter physics is that of quasi-particles. A system of strongly interacting particles may have excitations, seen in experiments such as inelastic neutron scattering and Angle Resolved PhotoElectron Spectroscopy (ARPES), that can be described as weakly interacting quasi-particles. These entities are composite particles, and have properties that are quantitatively different, and sometimes qualitatively different, from the microscopic particles. Sometimes this means that the scale (size) associated with the quasi-particles is intermediate between the micro- and the macro-scales, i.e., it is a mesoscale. The existence of quasi-particles leads naturally to the technique of constructing an effective Hamiltonian [effective theory] for the system where effective interactions describe the interactions between the quasi-particles.
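A standard textbook illustration of deriving such an effective Hamiltonian (my choice of example, not drawn from the text above) is the two-site Hubbard model at half filling: for on-site repulsion U much larger than the hopping t, the low-energy physics is captured by an effective Heisenberg model in which spins interact via a superexchange coupling J = 4t²/U.

```python
import math

def hubbard_singlet_energy(t, U):
    """Exact ground-state energy of the two-site Hubbard model at half
    filling (singlet sector), with hopping t and on-site repulsion U."""
    return U / 2.0 - math.sqrt((U / 2.0) ** 2 + 4.0 * t ** 2)

def heisenberg_energy(t, U):
    """Effective Hamiltonian prediction for U >> t: the low-energy degrees
    of freedom are spins with superexchange J = 4 t**2 / U, and the singlet
    lies at -J relative to the triplet."""
    return -4.0 * t ** 2 / U

t, U = 1.0, 20.0
exact = hubbard_singlet_energy(t, U)   # ~ -0.198
approx = heisenberg_energy(t, U)       # -0.2
```

The charge degrees of freedom have been eliminated; what remains is an effective interaction between the emergent low-energy entities (here, spins), accurate to about one percent already at U/t = 20.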

The economist Herbert Simon argued that a characteristic of a complex system is that the system can be understood in terms of nearly decomposable units. Rosas et al. argue that emergence is associated with there being a scale at which the system is “strongly lumpable”. Denis Noble has highlighted how biological systems are modular, i.e., composed of simple interchangeable components.

5. Modification of parts and their relationships

Emergent properties are often associated with the state of the system exhibiting patterns, order, or structure, terms that may be used interchangeably. This reflects that there is a particular relationship (correlation) between the parts which is different to the relationships in a state without the emergent property. This relationship may also be reflected in a generalised rigidity. For example, in a solid applying a force on one surface results in all the atoms in the solid experiencing a force and moving together. The rigidity of the solid defines a particular relationship between the parts of the system.

Properties of the individual parts may also be different. For example, in a crystal single-atom properties such as electronic energy levels change quantitatively compared to their values for isolated atoms. Properties of finite subsystems are also modified, reflecting a change in interactions between the parts. For example, in a molecular crystal the frequencies associated with intramolecular atomic vibrations are different to their values for isolated molecules. However, emergence is a sufficient but not a necessary condition for these modifications. In gas and liquid states, novelty is not present but there are still such changes in the properties of the individual parts.

As stated at the beginning of this section, the five characteristics above might be associated with ontology (what is real): they are objective properties of the system that an investigator observes, and depend less on what an observer thinks about the system. The next five characteristics might be considered more subjective, being concerned with epistemology (how we determine what is true). In making this dichotomy I do not want to gloss over the fuzziness of the distinction, or two thousand years of philosophical debates about the relationship between ontology and epistemology, or between reality and theory.

In the next post, I will discuss the remaining five characteristics: self-organisation, unpredictability, irreducibility, contextuality and downward causation, and intra-stratum closure.

Thanks for reading this far!
