Wednesday, October 29, 2025

Rodney Baxter (1940-2025): Mathematical Physicist

I recently learnt that Rodney Baxter died earlier this year. He was adept at finding exact solutions to two-dimensional lattice models in statistical mechanics. He had a remarkably low public profile. But, during my lifetime, he was one of the Australian-based researchers who made the most significant and unique contributions to physics, broadly defined. Evidence of this is the list of international awards he received.

On Baxter's scientific achievements, see the obituary from the ANU, and earlier testimonials from Barry McCoy in 2000, and from Vladimir Bazhanov on the award of the Henri Poincaré Prize to Baxter in 2021.

Exact solutions of "toy models" are important in understanding emergent phenomena. Before Onsager found an exact solution to the two-dimensional Ising model in 1944, there was debate about whether statistical mechanics could describe phase transitions and the associated discontinuities and singularities in thermodynamic quantities. 
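To make concrete what "singularities" means here (standard results, stated for the record): for the square-lattice ferromagnet with coupling J, Onsager's solution gives a critical temperature

$$ \sinh\!\left(\frac{2J}{k_B T_c}\right) = 1, \qquad k_B T_c = \frac{2J}{\ln(1+\sqrt{2})} \approx 2.269\,J, $$

and a specific heat that diverges logarithmically as T approaches T_c: an explicit demonstration that a genuine thermodynamic singularity emerges in the thermodynamic limit.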

Exact solutions provide benchmarks for approximation schemes and computational methods. They have also guided and elucidated key developments such as scaling, universality, the renormalisation group and conformal field theory.

Exact solutions also guided Haldane's development of Luttinger liquid theory and our understanding of the Kondo problem.

I mention the specific significance of a few of Baxter's solutions. His exact solution of the eight-vertex model in 1972 gave continuously varying critical exponents that depended on the interaction strength in the model. This surprised many because it seemed to contradict the hypothesis of the universality of critical exponents. This was later reconciled in terms of connections to the Berezinskii-Kosterlitz-Thouless (BKT) phase transition, which was discovered at about the same time. I am not sure who explicitly resolved this.

It might be argued that Baxter independently discovered the BKT transition. For example, consider the abstract of a 1973 paper, Spontaneous staggered polarization of the F-model:

"The “order parameter” of the two-dimensional F-model, namely the spontaneous staggered polarization P0, is derived exactly. At the critical temperature P0 has an essential singularity, both P0 and all its derivatives with respect to temperature vanishing."

Following earlier work by Lieb, Baxter explored the connection between two-dimensional classical lattice models and one-dimensional quantum lattice models. For example, the solution of the XYZ quantum spin chain is related to that of the eight-vertex model. Central to this is the Yang-Baxter equation. Alexander B. Zamolodchikov connected this to integrable quantum field theories in 1+1 dimensions. [Aside: the Yang is C.N. Yang, of Yang-Mills and Yang-Lee fame, who died last week.]
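For readers who have not seen it: in one common convention, with R_{ij}(u) acting on the i-th and j-th factors of a triple tensor product and u a spectral parameter, the Yang-Baxter equation reads

$$ R_{12}(u-v)\, R_{13}(u)\, R_{23}(v) = R_{23}(v)\, R_{13}(u)\, R_{12}(u-v). $$

Solutions of this equation guarantee that the transfer matrices of the corresponding lattice model commute for different values of the spectral parameter, which is what makes exact solution possible.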

Baxter's work had completely unanticipated consequences beyond physics. Mathematicians discovered profound connections between his exact solutions and the theory of knots, number theory, and elliptic functions. It also stimulated the development of quantum groups.

I give two personal anecdotes on my own interactions with Baxter. I was an undergraduate at the ANU from 1979 to 1982. This meant I was completely separated from the half of the university known as the Institute of Advanced Studies (IAS), where Baxter worked. Faculty in the IAS did no teaching, did not have to apply for external grants, and had considerable academic freedom. Most Ph.D. students were in the IAS. By today's standards, the IAS was a cushy deal, particularly if faculty did not get involved in internal politics. As an undergraduate, I really enjoyed my courses on thermodynamics, statistical mechanics, and pure mathematics. My honours supervisor, Hans Buchdahl, suggested that I talk to Baxter about possibly doing a Ph.D. with him. I found him quiet, unassuming, and unambitious. He had only supervised a few students. He wisely cautioned me that Ph.D. students might not be involved in finding exact solutions but might just be comparing exact results to series expansions.

In 1987, when I was a graduate student at Princeton, Baxter visited, hosted by Elliott Lieb, and gave a Mathematical Physics Seminar. This visit was just after he received the Dannie Heineman Prize for Mathematical Physics from the American Physical Society. These seminars generally had a small audience, mostly people in the Mathematical Physics group. However, for Baxter, many string theorists (Witten, Callan, Gross, Harvey, ...) attended. They had a lot of questions for Baxter. But, from my vague recollection, he struggled to answer them, partly because he wasn't familiar with the language of quantum field theory.

I was told that he got nice job offers from the USA. He could have earned more money and achieved a higher status. For personal reasons, he turned down the offer of a Royal Society Research Professorship at Cambridge.  But he seemed content puttering away in Australia. He just loved solving models and enjoyed family life down under.

Baxter wrote a short autobiography, An Accidental Academic. He began his career and made his big discoveries in a different era in Australian universities. The ANU had generous and guaranteed funding. Staff had the freedom to pursue curiosity-driven research on difficult problems that might take years to solve. There was little concern with the obsessions of today: money, metrics, management, and marketing. It is wonderful that Baxter was able to do what he did. It is striking that he says he retired early so he would not have to start making grant applications!

Saturday, October 25, 2025

Can AI solve quantum many-body problems?

I find it difficult to wade through all the hype about AI, along with the anecdotes about its failure to reliably answer basic questions.

Gerard Milburn kindly brought to my attention a nice paper that systematically addresses whether AI is useful as an aid (research assistant) for solving basic (but difficult) problems that condensed matter theorists care about.

CMT-Benchmark: A Benchmark for Condensed Matter Theory Built by Expert Researchers

The abstract is below.

My only comment is one of perspective. Is the cup half full or half empty? Do we emphasise the failures or the successes?

The optimists among us will claim that the success in solving some of these difficult problems shows the power and potential of AI. It is just a matter of time before LLMs can solve most of these problems, and we will see dramatic increases in research productivity (measured, say, by the time taken to complete a project).

The pessimists and the sceptically inclined will claim that the failures highlight the limitations of AI, particularly when training datasets are small. We are still a long way from replacing graduate students with AI bots (or at least from using AI to train students in the first year of their PhD).

What do you think? Should this study lead to optimism, pessimism, or just wait and see?

----------

Large language models (LLMs) have shown remarkable progress in coding and math problem-solving, but evaluation on advanced research-level problems in hard sciences remains scarce. To fill this gap, we present CMT-Benchmark, a dataset of 50 problems covering condensed matter theory (CMT) at the level of an expert researcher. Topics span analytical and computational approaches in quantum many-body, and classical statistical mechanics. The dataset was designed and verified by a panel of expert researchers from around the world. We built the dataset through a collaborative environment that challenges the panel to write and refine problems they would want a research assistant to solve, including Hartree-Fock, exact diagonalization, quantum/variational Monte Carlo, density matrix renormalization group (DMRG), quantum/classical statistical mechanics, and model building. We evaluate LLMs by programmatically checking solutions against expert-supplied ground truth. We developed machine-grading, including symbolic handling of non-commuting operators via normal ordering. They generalize across tasks too. Our evaluations show that frontier models struggle with all of the problems in the dataset, highlighting a gap in the physical reasoning skills of current LLMs. Notably, experts identified strategies for creating increasingly difficult problems by interacting with the LLMs and exploiting common failure modes. The best model, GPT5, solves 30% of the problems; average across 17 models (GPT, Gemini, Claude, DeepSeek, Llama) is 11.4±2.1%. Moreover, 18 problems are solved by none of the 17 models, and 26 by at most one. These unsolved problems span Quantum Monte Carlo, Variational Monte Carlo, and DMRG. Answers sometimes violate fundamental symmetries or have unphysical scaling dimensions. We believe this benchmark will guide development toward capable AI research assistants and tutors.

Monday, October 20, 2025

Undergraduates need to learn about the Ising model

A typical undergraduate course on statistical mechanics is arguably misleading because, unintentionally, it fails to tell students several important and interrelated things.

Statistical mechanics is not just about how to calculate thermodynamic properties of a collection of non-interacting particles.

A hundred years ago, many physicists did not believe that statistical mechanics could describe phase transitions. Arguably, this lingering doubt only ended fifty years ago with Wilson's development of renormalisation group theory.

It is about emergence: how microscopic properties are related to macroscopic properties.

Leo Kadanoff commented, "Starting around 1925, a change occurred: With the work of Ising, statistical mechanics began to be used to describe the behaviour of many particles at once."

When I came to UQ 25 years ago, I taught PHYS3020 Statistical Mechanics a couple of times. To my shame, I never discussed the Ising model. There is a nice section on it in the course textbook, Thermal Physics: An Introduction, by Daniel Schroeder. I guess I did not think there was time to "fit it in" and back then, I did not appreciate how important the Ising model is. This was a mistake.

Things have changed for the better due to my colleagues Peter Jacobson and Karen Kheruntsyan. They now include one lecture on the model, and students complete a computational assignment in which they write a Monte Carlo code to simulate the model.
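To give a flavour of what such an assignment involves, here is a minimal sketch (my own illustration, not the actual assignment code) of the Metropolis algorithm for the two-dimensional Ising model, with energy H = -J Σ s_i s_j summed over nearest-neighbour pairs:

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_sweep(spins, beta, J=1.0):
    """One Monte Carlo sweep: L*L attempted single-spin flips (Metropolis)."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        # Sum of the four nearest neighbours, with periodic boundary conditions
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * J * spins[i, j] * nn  # energy cost of flipping spin (i, j)
        if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] = -spins[i, j]

L = 32
beta = 0.5  # beta = 1/(k_B T); the exact critical value is ~0.4407 for J = 1
spins = rng.choice(np.array([-1, 1], dtype=np.int8), size=(L, L))
for _ in range(2000):  # equilibration + measurement sweeps
    metropolis_sweep(spins, beta)
print("magnetisation per spin:", spins.mean())
```

At beta = 0.5, above the exact critical value beta_c = ln(1+√2)/2 ≈ 0.4407, the magnetisation per spin should settle near ±1; below it, near zero.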

This year, I am giving the lecture on the model. Here are my slides and what I will write on the whiteboard or document viewer in the lecture.

Friday, October 17, 2025

One hundred years of Ising

In 1925, Ising published his paper on the solution of the model in one dimension. An English translation is available here: https://www.hs-augsburg.de/~harsch/anglica/Chronology/20thC/Ising/isi_fm00.html

Coincidentally, next week I am giving a lecture on the Ising model to an undergraduate class in statistical mechanics. To flesh out the significance and relevance of the model, here are some of the interesting articles I have been looking at:

The Ising model celebrates a century of interdisciplinary contributions, Michael W. Macy, Boleslaw K. Szymanski and Janusz A. Hołyst

This mostly discusses the relevance of the model to understanding basic problems in sociology, including its relation to the classic Schelling model for social segregation.

The Ising model: highlights and perspectives, Christof Külske

This mostly discusses how the model is central to some work in mathematical physics and probability theory.

The Fate of Ernst Ising and the Fate of his Model, Thomas Ising, Reinhard Folk, Ralph Kenna, Bertrand Berche, Yurij Holovatch.

This includes some nice memories of Ising from his son, Thomas.

Aside: I wanted a plot of the specific heat for the one-dimensional model. According to Google AI, "In a 1D Ising model with no external magnetic field, the specific heat is zero at all temperatures."
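This is wrong. For the chain with H = -J Σ s_i s_{i+1}, the exact free energy per spin is f = -k_B T ln[2 cosh(J/k_B T)], which gives a specific heat per spin

$$ c(T) = k_B \left(\frac{J}{k_B T}\right)^2 \operatorname{sech}^2\!\left(\frac{J}{k_B T}\right), $$

which is non-zero at all finite temperatures, with a broad maximum near k_B T ≈ 0.83 J. There is no phase transition, but the specific heat is certainly not zero.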

Wednesday, October 8, 2025

2025 Nobel Prize in Physics: Macroscopic quantum effects

John Clarke, Michel H. Devoret, and John M. Martinis received the prize  “for the discovery of macroscopic quantum mechanical tunnelling and energy quantisation in an electric circuit.”

The work was published in three papers in PRL in 1984 and 1985. The New York Times has a nice discussion of the award, including comments from Clarke, Martinis, Tony Leggett, and Steve Girvin.

There is some rich, subtle, and beautiful physics here. As a theorist, I comment on the conceptual and theoretical side, but I don't want to minimise the fact that doing the experiments was a technical breakthrough.

The experiments were directly stimulated by Tony Leggett, who, beginning in the late 70s, championed the idea that Josephson junctions and SQUIDs could be used to test whether quantum mechanics was valid at the macroscopic level. Many in the quantum foundations community were sceptical. Leggett and Amir Caldeira performed some beautiful, concrete, realistic calculations of the effect of decoherence and dissipation on quantum tunnelling in SQUIDs. The results suggested that macroscopic tunnelling should be observable.

Aside: Leggett rightly received a Nobel in 2003 for his work on the theory of superfluid 3He. Nevertheless, I believe his work on quantum foundations is even more significant.

Subtle point 1. What do we mean by a macroscopic quantum state?

It is commonly said that superconductors and superfluids are in a macroscopic quantum state. Signatures are the quantisation of magnetic flux in a superconducting cylinder and how the current through a Josephson junction oscillates as a function of the magnetic flux through the junction. I discuss this in the chapter on Quantum Matter in my Very Short Introduction.

Leggett argued that these experiments are explained by the Josephson equations, which treat the phase of the superconducting order parameter as a classical variable. For example, in a SQUID, it satisfies a classical dynamical equation. 

If the state is truly quantum, then the phase variable should be quantised.
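Concretely, in the standard effective description of a current-biased junction (textbook material, rather than anything specific to the prize papers), the phase behaves like the coordinate of a quantum particle in a "tilted washboard" potential:

$$ H = \frac{Q^2}{2C} - E_J\cos\varphi - \frac{\hbar I_b}{2e}\,\varphi, \qquad [\varphi, Q] = 2ie, $$

where C is the junction capacitance, E_J the Josephson energy, I_b the bias current, and Q the charge conjugate to the phase (up to sign conventions). Quantised energy levels in a local minimum of the washboard, and tunnelling of the phase out of that minimum, are the two effects observed in 1984-5.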

Aside: a nice microscopic derivation, starting from BCS theory and using path integrals, of the effective action to describe the quantum dynamics was given in 1982 by Vinay Ambegaokar, Ulrich Eckern, and Gerd Schön.

Subtle point 2. There are different signatures of quantum theory: energy level quantisation, tunnelling, coherence (interference), and entanglement.

In 1984-5, Clarke, Devoret, and Martinis observed the first two. Macroscopic quantum coherence is harder to detect and was only observed in 2000.

In a nice autobiographical article, Leggett commented in 2020:
Because of the strong prejudice in the quantum foundations community that it would never be possible to demonstrate characteristically quantum-mechanical effects at the macroscopic level, this assertion made us [Leggett and Garg, 1985] the target of repeated critical comments over the next few years. Fortunately, our experimental colleagues were more open-minded, and several groups started working toward a meaningful experiment along the lines we had suggested, resulting in the first demonstrations (29, 30) of MQC [Macroscopic Quantum Coherence] in rf SQUIDs (by then rechristened flux qubits) at the turn of the century. However, it would not be until 2016 that an experiment along the lines we had suggested (actually using a rather simpler protocol than our original one) was carried out (31) and, to my mind, definitively refuted macrorealism at that level.  
I find it rather amusing that nowadays the younger generation of experimentalists in the superconducting qubit area blithely writes papers with words like “artificial atom” in their titles, apparently unconscious of how controversial that claim once was.

Two final comments on the sociology side.

Superconductivity and superfluidity have now been the basis for Nobel Prizes in six and four different years, respectively.

The most widely cited of the three PRLs that were the basis of the Prize is the one on quantum tunnelling, with about 500 citations on Google Scholar. (In contrast, Devoret has more than 20 other papers that are more widely cited.) From 1986 to 1992, it was cited about a dozen times per year. Between 1993 and 2001, it was cited a total of only 30 times. Since 2001, it has been cited about 20 times per year.

This is just one more example of how citation rates are a poor measure of the significance of work and a poor predictor of future success.

Monday, October 6, 2025

Nobel Prize predictions for 2025

This week Nobel Prizes will be announced. I have not done predictions since 2020. This is a fun exercise. It is also good to reflect on what has been achieved, including outside our own areas, and on big advances from the past that we may now take for granted.

Before writing this I looked at suggestions from readers of Doug Natelson's blog, nanoscale views, an article in Physics World, predictions from Clarivate based on citations, and recent recipients of the Wolf Prize.

Please enter your own predictions below.

Although we know little about how the process actually works or the explicit criteria used, I have a few speculative suggestions and observations.

1. The Wolf Prize is often a precursor.

2. Every now and then, they seem to surprise us.

3. Every few years, the physics committee seems to go for something technological, sometimes arguably outside physics, perhaps to remind people how important physics is to modern technology and other areas of science.

4. They seem to spread the awards around between different areas of physics.

5. Theory only gets awards when it has led to well-established experimental observations. Brilliant theoretical discoveries that motivate large research enterprises (more theory and experimental searches) are not good enough. This is why predictions based on citation numbers may be misleading.

6. Once an award has been made on one topic, it is unlikely that there will be another award for a long time, if ever, on that same topic. In other words, there is a high bar for a second award.

7. I don't think the logic is to pick an important topic and then choose who should get the prize for the topic. This approach works against topics where many researchers independently made contributions that were all important. The awardee needs to be a standout who won't be a debatable choice.

What do you think of these principles?

For some of the above reasons, I discuss below why I am sceptical about some specific predictions.

My top prediction for physics is Metamaterials with negative refractive index, going to John Pendry (theory) and David Smith (experiment). This is a topic I know little about.

Is it just a matter of time before twisted bilayer graphene wins a prize? This might go to Allan MacDonald (theory) and Pablo Jarillo-Herrero (experiment). They recently received a Wolf Prize. One thing that convinced me of the importance of this discovery was a preprint on moiré WSe2 with beautiful phase diagrams such as this one.


The level of control is truly amazing. Helpful background is the recent Physics Today article by Bernevig and Efetov.

This is big enough to overcome 6. and the earlier prize for graphene.

Unfortunately, my past prediction/wish of Kondo and heavy fermions won't happen as Jun Kondo died in 2022. This suggestion also always went against Principle 6, with the award to Ken Wilson citing his solution of the Kondo problem.

The prediction of Berry and Aharonov for topological phases in quantum mechanics is reasonable, except for questions about historical precursors.

The prediction of topological insulators goes against 6. and the award to Haldane in 2016.

Clarivate's prediction of DiVincenzo and Loss (for qubits based on electron spin in quantum dots) goes against 5. and 7. It is just one of many competing proposals for a scalable quantum computer, and a large-scale device is still elusive.

Predictions of a prize for quantum algorithms (Shor, Deutsch, Brassard, Bennett) go against 5. 

Chemistry 

I don't know enough chemistry to make meaningful predictions. On the other hand, in 2019 I did correctly predict John Goodenough for lithium batteries. I do like the prediction from Clarivate for biomolecular condensates (Brangwynne, Hyman, and Rosen). I discussed them briefly in my review article on emergence.

What do you think about my 7 "principles"?

What are your predictions?

Tuesday, September 30, 2025

Elastic frustration in molecular crystals

Crystals of large molecules exhibit diverse structures. In other words, the geometric arrangements of the molecules relative to one another are complex. Given a specific molecule, theoretically predicting its crystal structure is a challenge and is the basis of a competition.

One of the reasons the structures are rich and the theoretical problem is so challenging is that there are typically many different interactions between different molecules, including electrostatics, hydrogen bonding, pi-pi stacking, ...

Another challenge is to understand the elastic and plastic properties of the crystals.

Some of my UQ colleagues recently published a paper that highlights some of the complexity.

Origins of elasticity in molecular materials

Amy J. Thompson, Bowie S. K. Chong, Elise P. Kenny, Jack D. Evans, Joshua A. Powell, Mark A. Spackman, John C. McMurtrie, Benjamin J. Powell, and Jack K. Clegg

They used calculations based on Density Functional Theory (DFT) to separate the contributions to the elasticity from the different interactions between the molecules. The figure below shows the three dominant interactions in the family of crystals that they consider.

The figure below shows the energy of interaction between a pair of molecules for the different interactions. Note the purple vertical bar, which marks the value of the coordinate in the equilibrium geometry of the whole crystal; the width of the bar represents the variations in length that occur in typical elastic experiments. What is striking to me is the large difference between the positions of the potential minima for the individual interactions and the minimum for the combined interactions.

This is an example of frustration: it is not possible to simultaneously minimise the energy of all the individual pairwise interactions. They are competing with one another.

A toy model illustrates the essential physics. I came up with this model partly motivated by similar physics that occurs in "spin-crossover" materials.


The upper (lower) spring has equilibrium length a (b) and spring constant k (k'). If r denotes the separation of the centres of the two molecules (each of radius R), the upper spring is unstrained when r = a + 2R and the lower spring when r = b. In the harmonic approximation, the total elastic energy is

$$ E(r) = \frac{k}{2}\,(r - a - 2R)^2 + \frac{k'}{2}\,(r - b)^2. $$

The equilibrium separation of the two molecules is given by

$$ r_0 = \frac{k\,(a + 2R) + k'\,b}{k + k'}, $$

which is intermediate between a + 2R and b. This illustrates the elastic frustration. Neither of the springs (bonds) is at its optimum length.

The system is stable provided that k + k' is positive. Thus, it is not necessary that both k and k' be positive. The possibility that one of the k's is negative is physically relevant. Thompson et al. showed that the individual molecular interaction energies are described by Morse potentials. If one is far enough from the minimum of the potential, the local curvature can be negative.
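A few lines of Python (with made-up parameter values, purely illustrative) make both points concrete: the minimum of the combined potential lies between the individual minima, and at that minimum the local curvature of one Morse potential can be negative while the total curvature remains positive:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def morse(r, D, alpha, r0):
    """Morse potential: well depth D, stiffness alpha, minimum at r0."""
    return D * (1.0 - np.exp(-alpha * (r - r0))) ** 2

# Two competing pairwise interactions with different preferred separations
V1 = lambda r: morse(r, D=1.0, alpha=1.5, r0=3.0)
V2 = lambda r: morse(r, D=0.4, alpha=1.0, r0=4.0)
V_total = lambda r: V1(r) + V2(r)

# The combined minimum is frustrated: it lies between r0 = 3.0 and r0 = 4.0
r_eq = minimize_scalar(V_total, bounds=(2.0, 5.0), method="bounded").x
print(f"equilibrium separation r = {r_eq:.3f}")

# Local curvatures (numerical second derivatives) at the combined minimum:
# one can be negative as long as the sum stays positive (cf. k + k' > 0)
h = 1e-4
curvature = lambda f, r: (f(r + h) - 2.0 * f(r) + f(r - h)) / h**2
print(f"V1'' = {curvature(V1, r_eq):+.3f}, V2'' = {curvature(V2, r_eq):+.3f}")
```

With these (hypothetical) parameters, the stiffer potential is stretched far enough past its minimum that its local curvature is negative, yet the crystal analogue remains stable because the total curvature is positive.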
