Wednesday, October 8, 2025

2025 Nobel Prize in Physics: Macroscopic quantum effects

John Clarke, Michel H. Devoret, and John M. Martinis received the prize “for the discovery of macroscopic quantum mechanical tunnelling and energy quantisation in an electric circuit.”

The work was published in three papers in PRL in 1984 and 1985. The New York Times has a nice discussion of the award, including comments from Clarke, Martinis, Tony Leggett, and Steve Girvin.

There is some rich, subtle, and beautiful physics here. As a theorist, I comment on the conceptual and theoretical side, but I don't want to minimise the fact that doing the experiments was a technical breakthrough.

The experiments were directly stimulated by Tony Leggett, who, beginning in the late 1970s, championed the idea that Josephson junctions and SQUIDs could be used to test whether quantum mechanics was valid at the macroscopic level. Many in the quantum foundations community were sceptical. Leggett and Amir Caldeira performed some beautiful, concrete, realistic calculations of the effect of decoherence and dissipation on quantum tunnelling in SQUIDs. The results suggested that macroscopic tunnelling should be observable.

Aside: Leggett rightly received a Nobel in 2003 for his work on the theory of superfluid 3He. Nevertheless, I believe his work on quantum foundations is even more significant.

Subtle point 1. What do we mean by a macroscopic quantum state?

It is commonly said that superconductors and superfluids are in a macroscopic quantum state. Signatures are the quantisation of magnetic flux in a superconducting cylinder and how the current through a Josephson junction oscillates as a function of the magnetic flux through the junction. I discuss this in the chapter on Quantum Matter in my Very Short Introduction.
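
For reference, these two signatures can be stated compactly (standard results; the notation here is mine):

```latex
% Flux quantisation in a superconducting cylinder
\Phi = n\,\Phi_0, \qquad \Phi_0 = \frac{h}{2e} \approx 2.07 \times 10^{-15}\ \mathrm{Wb}
% Fraunhofer-like modulation of the critical current of a single junction
I_c(\Phi) = I_c(0)\left|\frac{\sin(\pi\Phi/\Phi_0)}{\pi\Phi/\Phi_0}\right|
```

The factor of 2e, rather than e, in the flux quantum reflects the pairing of electrons.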

Leggett argued that these experiments are explained by the Josephson equations, which treat the phase of the superconducting order parameter as a classical variable. For example, in a SQUID, it satisfies a classical dynamical equation. 

If the state is truly quantum, then the phase variable should be quantised.

Aside: a nice microscopic derivation, starting from BCS theory and using path integrals, of the effective action that describes the quantum dynamics was given in 1982 by Vinay Ambegaokar, Ulrich Eckern, and Gerd Schön.
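
In modern notation, the upshot of such derivations is an effective Hamiltonian for a single collective degree of freedom: the phase difference phi across the junction. A textbook-level sketch, for a junction with capacitance C, Josephson energy E_J, and bias current I_b (conventions for the charging term vary):

```latex
H = 4E_C\,\hat{n}^{2} - E_J\cos\hat{\varphi} - \frac{\hbar I_b}{2e}\,\hat{\varphi},
\qquad [\hat{\varphi},\hat{n}] = i, \qquad E_C = \frac{e^{2}}{2C}
```

The bias term tilts the cosine into a "washboard" potential. Quantised levels in a local well give the observed energy quantisation, and escape from a well through the barrier is the macroscopic quantum tunnelling that was measured.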

Subtle point 2. There are different signatures of quantum theory: energy level quantisation, tunnelling, coherence (interference), and entanglement.

In 1984-85, Clarke, Devoret, and Martinis observed the first two. Macroscopic quantum coherence is harder to detect and was only observed in 2000.

In a nice autobiographical article, Leggett commented in 2020:
Because of the strong prejudice in the quantum foundations community that it would never be possible to demonstrate characteristically quantum-mechanical effects at the macroscopic level, this assertion made us [Leggett and Garg, 1985] the target of repeated critical comments over the next few years. Fortunately, our experimental colleagues were more open-minded, and several groups started working toward a meaningful experiment along the lines we had suggested, resulting in the first demonstrations (29, 30) of MQC [Macroscopic Quantum Coherence] in rf SQUIDs (by then rechristened flux qubits) at the turn of the century. However, it would not be until 2016 that an experiment along the lines we had suggested (actually using a rather simpler protocol than our original one) was carried out (31) and, to my mind, definitively refuted macrorealism at that level.  
I find it rather amusing that nowadays the younger generation of experimentalists in the superconducting qubit area blithely writes papers with words like “artificial atom” in their titles, apparently unconscious of how controversial that claim once was.

Two final comments on the sociology side.

Superconductivity and superfluidity have now been the basis for Nobel Prizes in six and four different years, respectively.

The most widely cited of the three PRLs that were the basis of the Prize is the one on quantum tunnelling, with about 500 citations on Google Scholar. (In contrast, Devoret has more than 20 other papers that are more widely cited.) From 1986 to 1992, it was cited about a dozen times per year. Between 1993 and 2001, it was cited a total of only 30 times. Since 2001, it has been cited about 20 times per year.

This is just one more example of how citation rates are a poor measure of the significance of work and a poor predictor of future success.

Monday, October 6, 2025

Nobel Prize predictions for 2025

This week the Nobel Prizes will be announced. I have not done predictions since 2020. It is a fun exercise. It is also good to reflect on what has been achieved, including outside our own areas, and on big advances from the past that we may now take for granted.

Before writing this I looked at suggestions from readers of Doug Natelson's blog, nanoscale views, an article in Physics World, predictions from Clarivate based on citations, and recent recipients of the Wolf Prize.

Please enter your own predictions below.

Although we know little about how the process actually works or the explicit criteria used, I have a few speculative suggestions and observations.

1. The Wolf Prize is often a precursor.

2. Every now and then, they seem to surprise us.

3. Every few years, the physics committee seems to go for something technological, sometimes arguably outside physics, perhaps to remind people how important physics is to modern technology and other areas of science.

4. They seem to spread the awards around between different areas of physics.

5. Theory only gets awards when it has led to well-established experimental observations. Brilliant theoretical discoveries that motivate large research enterprises (more theory and experimental searches) are not good enough. This is why predictions based on citation numbers may be misleading.

6. Once an award has been made on one topic, it is unlikely that there will be another award for a long time, if ever, on that same topic. In other words, there is a high bar for a second award.

7. I don't think the logic is to pick an important topic and then choose who should get the prize for the topic. This approach works against topics where many researchers independently made contributions that were all important. The awardee needs to be a standout who won't be a debatable choice.

What do you think of these principles?

For some of the above reasons, I discuss below why I am sceptical about some specific predictions.

My top prediction for physics is metamaterials with a negative refractive index, going to John Pendry (theory) and David Smith (experiment). This is a topic I know little about.

Is it just a matter of time before twisted bilayer graphene wins a prize? This might go to Allan MacDonald (theory) and Pablo Jarillo-Herrero (experiment). They recently received a Wolf Prize. One thing that convinced me of the importance of this discovery was a preprint on moiré WSe2 with beautiful phase diagrams such as this one.


The level of control is truly amazing. Helpful background is the recent Physics Today article by Bernevig and Efetov.

This is big enough to overcome 6. and the earlier prize for graphene.

Unfortunately, my past prediction/wish of Kondo and heavy fermions won't happen as Jun Kondo died in 2022. This suggestion also always went against Principle 6, with the award to Ken Wilson citing his solution of the Kondo problem.

The prediction of Berry and Aharonov for topological phases in quantum mechanics is reasonable, except for questions about historical precursors.

The prediction of topological insulators is going against 6. and the award to Haldane in 2016.

Clarivate's prediction of DiVincenzo and Loss (for qubits based on electron spins in quantum dots) goes against 5. and 7. Theirs is just one of many competing proposals for a scalable quantum computer, and a large-scale device is still elusive.

Predictions of a prize for quantum algorithms (Shor, Deutsch, Brassard, Bennett) go against 5. 

Chemistry 

I don't know enough chemistry to make meaningful predictions. On the other hand, in 2019 I did correctly predict John Goodenough for lithium batteries. I do like the prediction from Clarivate of biomolecular condensates (Brangwynne, Hyman, and Rosen). I discussed them briefly in my review article on emergence.

What do you think about my 7 "principles"?

What are your predictions?

Tuesday, September 30, 2025

Elastic frustration in molecular crystals

Crystals of large molecules exhibit diverse structures. In other words, the geometric arrangements of the molecules relative to one another are complex. Given a specific molecule, theoretically predicting its crystal structure is a challenge and is the basis of a competition.

One of the reasons the structures are rich and the theoretical problem is so challenging is that there are typically many different interactions between molecules, including electrostatic interactions, hydrogen bonding, pi-pi interactions, ...

Another challenge is to understand the elastic and plastic properties of the crystals.

Some of my UQ colleagues recently published a paper that highlights some of the complexity.

Origins of elasticity in molecular materials

Amy J. Thompson, Bowie S. K. Chong, Elise P. Kenny, Jack D. Evans, Joshua A. Powell, Mark A. Spackman, John C. McMurtrie, Benjamin J. Powell, and Jack K. Clegg

They used calculations based on Density Functional Theory (DFT) to separate the contributions to the elasticity from the different interactions between the molecules. The figure below shows the three dominant interactions in the family of crystals that they consider.

The figure below shows the energy of interaction between a pair of molecules for the different interactions.
Note the purple vertical bar, which is the value of the coordinate in the equilibrium geometry of the whole crystal. The width of the bar represents variations in both lengths that occur in typical elastic experiments.
What is striking to me is the large difference between the positions of the potential minima for the individual interactions and the minima for the combined interactions.

This is an example of frustration: it is not possible to simultaneously minimise the energy of all the individual pairwise interactions. They are competing with one another.

A toy model illustrates the essential physics. I came up with this model partly motivated by similar physics that occurs in "spin-crossover" materials.


The upper (lower) spring has equilibrium length a (b) and spring constant k (k'). Taking the molecules to be balls of radius R, with x the distance between their centres (so that the upper spring connects their surfaces), the total elastic energy in the harmonic approximation is

E(x) = (1/2) k (x − a − 2R)^2 + (1/2) k' (x − b)^2

Minimising E(x) gives the equilibrium separation of the two molecules,

x_0 = [k (a + 2R) + k' b] / (k + k')

which is intermediate between a + 2R and b. This illustrates the elastic frustration. Neither of the springs (bonds) is at its optimum length.

The system is stable provided that k + k' is positive. Thus, it is not necessary that both k and k' be positive. The possibility that one of them is negative is relevant to real materials: Thompson et al. showed that the individual molecular interaction energies are described by Morse potentials, and far enough from the minimum of a Morse potential the local curvature is negative.
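
To see this concretely, take the standard Morse form (notation mine: well depth D, width parameter alpha, minimum at r_0):

```latex
V(r) = D\left(1 - e^{-\alpha(r - r_0)}\right)^{2},
\qquad
V''(r) = 2D\alpha^{2}\,e^{-\alpha(r - r_0)}\left(2e^{-\alpha(r - r_0)} - 1\right)
```

The curvature is negative for r > r_0 + (ln 2)/alpha, i.e., everywhere beyond the inflection point on the outer side of the well.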

Monday, September 22, 2025

Turbulent flows in active matter

The power of toy models and effective theories in describing and understanding emergent phenomena is illustrated by a 2012 study of the turbulence in the fluid flow of swimming bacteria.

Meso-scale turbulence in living fluids

Henricus H. Wensink, Jörn Dunkel, Sebastian Heidenreich, Knut Drescher, Raymond E. Goldstein, Hartmut Löwen, and Julia M. Yeomans

They found that a qualitative and quantitative description of observations of flow patterns, energy spectra, and velocity structure functions was given by a toy model of self-propelled rods (similar to that proposed for flocking of birds) and a minimal continuum model for incompressible flow. For the toy model, they presented a phase diagram (shown below) as a function of the volume fraction of the fluid occupied by rods and the aspect ratio of the rods. There were six distinct phases: dilute state (D), jamming (J), swarming (S), bionematic (B), turbulent (T), and laned (L). The turbulent state occurs for high filling fractions and intermediate aspect ratios, covering typical values for bacteria.


The horizontal axis is the volume fraction, going from 0 to 1.

The figure below compares the experimental data (top right) for the vorticity and the toy model (lower left) and the continuum model (lower right).

Regarding this work, Tom McLeish highlighted the importance of identifying the relevant mesoscopic scale, and the power of toy models and effective theories, in the following beautiful commentary taken from his book, The Poetry and Music of Science:

“Individual ‘bacteria’ are represented in this simulation by simple rod-like structures that possess just the two properties of mutual repulsion, and the exertion of a constant swimming force along their own length. The rest is simply calculation of the consequences. No more detailed account than this is taken of the complexities within a bacterium. It is somewhat astonishing that a model of the intermediate elemental structures, on such parsimonious lines, is able to reproduce the complex features of the emergent flow structure. 

Impossible to deduce inductively the salient features of the underlying physics from the fluid flow alone—creative imagination and a theoretical scalpel are required: the first to create a sufficient model of reality at the underlying and unseen scale; the second to whittle away at its rough and over-ornate edges until what is left is the streamlined and necessary model. To ‘understand’ the turbulent fluid is to have identified the scale and structure of its origins. To look too closely is to be confused with unnecessary small detail, too coarsely and there is simply an echo of unexplained patterns.”
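
In that spirit, here is a minimal sketch (in Python) of an even more parsimonious cousin of such models: self-propelled point particles (not rods) with soft repulsion and rotational noise. This is not the Wensink et al. model, and all parameter values are illustrative; it only shows how few ingredients such simulations need.

```python
import numpy as np

rng = np.random.default_rng(1)

N, L = 400, 20.0           # number of particles, periodic box size
v0, dt = 0.5, 0.05         # self-propulsion speed, time step
R, k = 0.5, 10.0           # repulsion range and strength
D_rot = 0.2                # rotational diffusion constant

pos = rng.uniform(0.0, L, size=(N, 2))
theta = rng.uniform(0.0, 2.0 * np.pi, size=N)

def step(pos, theta):
    # Pairwise soft repulsion: F = k (R - r) r_hat for r < R, zero otherwise.
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)                    # minimum-image convention
    r = np.linalg.norm(d, axis=-1)
    np.fill_diagonal(r, np.inf)                 # no self-interaction
    overlap = np.clip(R - r, 0.0, None)
    F = (k * overlap[:, :, None] * d / r[:, :, None]).sum(axis=1)
    # Overdamped dynamics: swim along the heading, plus repulsive forces.
    n_hat = np.stack([np.cos(theta), np.sin(theta)], axis=1)
    pos = (pos + dt * (v0 * n_hat + F)) % L
    # Rotational noise gradually decorrelates the headings.
    theta = theta + np.sqrt(2.0 * D_rot * dt) * rng.normal(size=N)
    return pos, theta

for _ in range(500):
    pos, theta = step(pos, theta)
# pos and theta can now be examined for collective flow patterns,
# e.g., by plotting a coarse-grained velocity or vorticity field.
```

Replacing the points by mutually repelling rods, as in the paper, is what introduces the aspect-ratio axis of the phase diagram.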

Thursday, September 18, 2025

Confusing bottom-up and top-down approaches to emergence


Due to emergence, reality is stratified. This is reflected in the existence of semi-autonomous scientific disciplines and subdisciplines. A major goal is to understand the relationship between different strata. For example, how is chemistry related to physics? How is genetics related to cell biology?

Before describing two alternative approaches, top-down and bottom-up, I need to point out that in different fields these terms are used in opposite senses. That can be confusing!

In the latest version of my review article on emergence, I employ the same terminology traditionally used in condensed matter physics, chemistry, and biology. It is also consistent with the use of the term “downward causation” in philosophy. 

Top-down means going from long-distance scales to short-distance scales, i.e., going down in the diagrams shown in the figure above. In contrast, in the quantum field theory of elementary particles and fields (high-energy physics), “top-down” means the opposite, i.e., going from short to long distance length scales. This is because practitioners in that field tend to draw diagrams with high energies at the top and low energies at the bottom.

Bottom-up approaches aim to answer the question: how do properties observed at the macroscale emerge from the microscopic properties of the system? 
History suggests that this question may often be best addressed by identifying the relevant mesoscale at which modularity is observed, connecting the micro- to the meso-, and then the meso- to the macro-. For example, high-energy degrees of freedom can be "integrated out" to give an effective theory for the low-energy degrees of freedom.
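
Schematically, if the degrees of freedom are split into low-energy modes varphi_< and high-energy modes varphi_>, the effective action is defined by the standard path-integral statement:

```latex
e^{-S_{\mathrm{eff}}[\varphi_<]} = \int \mathcal{D}\varphi_>\; e^{-S[\varphi_<,\,\varphi_>]}
```

Low-energy observables can then be computed from S_eff alone.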

Top-down approaches try to surmise something about the microscopic from the macroscopic. This has a long and fruitful history, albeit probably with many false starts that we may not hear about, unless we live through them or read history books. Kepler's snowflakes are an early example. Before people were completely convinced of the existence of atoms, the study of crystal facets and of Brownian motion provided hints of the atomic structure of matter. Planck deduced the existence of the quantum from the thermodynamics of black-body radiation, i.e. from macroscopic properties. Arguably, the first definitive determination of Avogadro's number was from Perrin's experiments on Brownian motion, which involved mesoscopic measurements. Comparing classical statistical mechanics to bulk thermodynamic properties gave hints of an underlying quantum structure to reality. The Sackur-Tetrode equation for the entropy of an ideal gas hinted at the quantisation of phase space. The Gibbs paradox hinted that fundamental particles are indistinguishable. The third law of thermodynamics hints at quantum degeneracy. Pauling’s proposal for the structure of ice was based on macroscopic measurements of its residual entropy. Pasteur deduced the chirality of molecules from observations of the facets in crystals of tartaric acid. Sometimes a “top-down” approach means one that focuses on the meso-scale and ignores microscopic details.

The top-down and bottom-up approaches should not be seen as exclusive or competitive, but rather complementary. Their relative priority or feasibility depends on the system of interest and the amount of information and techniques available to an investigator. Coleman has discussed the interplay of emergence and reductionism in condensed matter. In biology, Mayr advocated a “dual level of analysis” for organisms. In social science, Schelling discussed the interplay of the behaviour of individuals and the properties of social aggregates. In a classic study of complex organisations in business, understanding this interplay was termed differentiation and integration.

I thank Jeremy Schmit for requesting clarification of this terminology.

Friday, September 12, 2025

The role of superconductivity in the development of the Standard Model

In 1986, Steven Weinberg published an article, Superconductivity for Particular Theorists, in which he stated

"No one did more than Nambu to bring the idea of spontaneously broken symmetries to the attention of elementary particle physicists. And, as he acknowledged in his ground-breaking 1960 article  "Axial Current Conservation in Weak Interactions'', Nambu was guided in this work by an analogy with the theory of superconductivity,..."

In the 1960 PRL referenced by Weinberg, Nambu states that in the BCS theory, as refined by Bogoliubov [and Anderson],

"gauge invariance, the energy gap, and the collective excitations are logically related to each other as was shown by the author. [Y. Nambu, Phys. Rev. 117, 648 (1960)] In the present case we have only to replace them by (chiral) (gamma_5) invariance, baryon mass, and the mesons." 

This connection is worked out explicitly in two papers in 1961. The first is "Dynamical Model of Elementary Particles Based on an Analogy with Superconductivity. I" by
Y. Nambu and G. Jona-Lasinio

They acknowledge, 

"that the model treated here is not realistic enough to be compared with the actual nucleon problem. Our purpose was to show that a new possibility exists for field theory to be richer and more complex than has been hitherto envisaged,"

Hence, I consider this to be a toy model for an emergent phenomenon.


The model consists of a massless fermion field with a quartic interaction that has chiral invariance, i.e., it is unchanged by global gauge transformations associated with the gamma_5 matrix. In modern notation (conventions vary), the Lagrangian is $\mathcal{L} = i\bar{\psi}\gamma^{\mu}\partial_{\mu}\psi + g\left[(\bar{\psi}\psi)^{2} - (\bar{\psi}\gamma_{5}\psi)^{2}\right]$. At the mean-field level, this symmetry is broken. Excitations include massless bosons (associated with the symmetry breaking and similar to those found earlier by Goldstone) and bound fermion pairs. It was conjectured that these could be analogues of mesons and baryons, respectively. The model was proposed before quarks and QCD. Now, the fermion degrees of freedom would be identified with quarks, and the model illustrates the dynamical generation of quark masses. When generalised to include SU(2) or SU(3) symmetry, the model is considered to be an effective field theory for QCD, such as chiral effective field theory.

Monday, September 8, 2025

Multi-step spin-state transitions in organometallics and frustrated antiferromagnetic Ising models

In previous posts, I discussed how "spin-crossover" material is a misnomer because many of these materials do not undergo crossovers but phase transitions due to collective effects. Furthermore, they exhibit rich behaviours, including hysteresis, incomplete transitions, and multiple-step transitions. Ising models can capture some of these effects.

Here, I discuss how an antiferromagnetic Ising model with frustrated interactions can give multi-step transitions. This has been studied previously by Paez-Espejo, Sy, and Boukheddaden, and by my UQ colleagues Jace Cruddas and Ben Powell. They start with a lattice "balls and springs" model and derive Ising models with an infinite-range ferromagnetic interaction and short-range antiferromagnetic interactions. They show that as the range of these interactions (and thus the frustration) is increased, more and more steps are observed.

Here, I do something simpler to illustrate some key physics and some subtleties and cautions.

fcc lattice

Consider the antiferromagnetic Ising model on the face-centred-cubic lattice in a magnetic field. 

[Historical trivia: the model was studied by William Shockley back in 1938, in the context of understanding alloys of gold and copper.]

The picture below shows a tetrahedron of four nearest neighbours in the fcc lattice.

Even with just nearest-neighbour interactions, the lattice is frustrated. On a tetrahedron, you cannot satisfy all six AFM interactions. Four bonds are satisfied, and two are unsatisfied.
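
A one-line check (a standard identity): on a single tetrahedron,

```latex
\sum_{i<j} s_i s_j = \tfrac{1}{2}\Big[\Big(\sum_{i=1}^{4} s_i\Big)^{2} - 4\Big]
```

which is minimised, with value −2, by any configuration with two up and two down spins, i.e., four satisfied bonds and two unsatisfied ones.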

The phase diagram of the model was studied using Monte Carlo by Kammerer et al. in 1996. It is shown above as a function of temperature and field. All the transition lines are (weakly) first-order.

The AB phase has AFM order within the [100] planes. It has an equal number of up and down spins.

The A3B phase has alternating FM and AFM order between neighbouring planes. Thus, 3/4 of the spins have the same direction as the magnetic field.

The stability of these ordered states is subtle. At zero temperature, both the AB and A3B states are massively degenerate. For a system of 4 x L^3 spins, there are 3 x 2^(2L) AB states and 6 x 2^L A3B states. At finite temperature, the system exhibits “order by disorder”.

On the phase diagram, I have shown three straight lines (blue, red, and dashed black) representing a temperature sweep for three different spin-crossover systems. The "field" is given by h = (Delta H − T Delta S)/2. In the lower panel, I have shown the temperature dependence of the High Spin (HS) population for the three different systems. For clarity, I have not shown the effects of the hysteresis associated with the first-order transitions.

If Delta H is smaller than the values shown in the figure, then at low temperatures, the spin-crossover system will never reach the complete low-spin state.

Main points.

Multiple steps are possible even in a simple model. This is because frustration stabilises new phases in a magnetic field. Similar phenomena occur in other frustrated models, such as the triangular-lattice antiferromagnet and the J1-J2 model on a chain or a square lattice.

The number of steps may change depending on Delta S. This is because a temperature sweep traverses the field-temperature phase diagram asymmetrically.
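
For concreteness, here is a minimal Metropolis sketch (in Python) of the fcc antiferromagnet in the temperature-dependent field, using the mapping above. The lattice size, sweep counts, and the values of Delta H and Delta S are illustrative, and the sign convention for which pseudo-spin state the field favours is my assumption; a serious calculation would need the caveats below.

```python
import numpy as np

rng = np.random.default_rng(0)

L = 8          # linear size of the underlying cubic lattice; must be even
J = 1.0        # antiferromagnetic nearest-neighbour coupling (J > 0)

# fcc lattice = even-parity sites of a simple cubic lattice;
# each site then has the 12 fcc nearest neighbours.
NBRS = [(a, b, c) for a in (-1, 0, 1) for b in (-1, 0, 1)
        for c in (-1, 0, 1) if abs(a) + abs(b) + abs(c) == 2]
SITES = [(i, j, k) for i in range(L) for j in range(L) for k in range(L)
         if (i + j + k) % 2 == 0]

spins = rng.choice([-1, 1], size=(L, L, L))   # odd-parity entries are unused

def metropolis_sweep(T, h):
    """One sweep of single-spin flips for E = J sum_<ij> s_i s_j - h sum_i s_i."""
    for _ in range(len(SITES)):
        i, j, k = SITES[rng.integers(len(SITES))]
        s = spins[i, j, k]
        nn = sum(spins[(i + a) % L, (j + b) % L, (k + c) % L] for a, b, c in NBRS)
        dE = -2.0 * s * (J * nn - h)           # energy change if s -> -s
        if dE <= 0.0 or rng.random() < np.exp(-dE / T):
            spins[i, j, k] = -s

# Temperature sweep of a toy spin-crossover system, with h = (Delta H - T Delta S)/2.
# With E as above, positive h favours s = +1; since h > 0 at low T, s = +1 plays
# the role of low-spin here, and the high-spin fraction is n_HS = (1 - m)/2.
# The Delta H and Delta S values are purely illustrative.
dH, dS = 4.0, 2.0
for T in np.linspace(3.0, 0.3, 28):
    h = 0.5 * (dH - T * dS)
    for _ in range(40):
        metropolis_sweep(T, h)
    m = np.mean([spins[s] for s in SITES])
    print(f"T = {T:.2f}  h = {h:+.2f}  n_HS = {(1.0 - m) / 2.0:.2f}")
```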

Caution.

Fluctuations matter.
The mean-field theory phase diagram was studied by Beath and Ryan. Their phase diagram is below. Clearly, there are significant qualitative differences, particularly in the stability of the A3B phase.
The mean-field transition temperature at zero field is 3.5J, compared to the value of 1.4J from Monte Carlo.


Monte Carlo simulations may be fraught.
Because of the many competing ordered states associated with frustration, Kammerer et al. note that “in a Monte Carlo simulation one needs unusually large systems in order [to] observe the correct asymptotic behaviour, and that the effect gets worse with decreasing temperature because of the proximity of the phase transition to the less ordered phase at T=0”.

Open questions.

The example above hints at what the essential physics may be and how frustrated Ising models may capture it. However, to definitively establish the connection with real materials, several issues need to be resolved.

1. Show definitively how elastic interactions can produce the necessary Ising interactions. In particular, derive a formula for the interactions in terms of elastic properties of the high-spin and low-spin states. How do their structural differences, and the associated bond stretches or compressions, affect the elastic energy? What is the magnitude, range, and direction of the interactions?

[n.b. Different authors have different expressions for the Ising interactions for a range of toy models, using a range of approximations. It also needs to be done for a general atomic "force field".]

2. For specific materials, calculate the Ising interactions from a DFT-based method. Then show that the relevant Ising model does produce the steps and hysteresis observed experimentally.

