Tuesday, November 25, 2025

Elastic interactions and complex patterns in binary systems

One of the many beauties of condensed matter physics is that it can reveal and illuminate how two systems or phenomena that at first appear to be quite different actually involve similar physics. This is an example of universality: for emergent phenomena, many details don't really matter. One example is the similarities between superconductivity and superfluidity. A consequence of universality is that the same concepts, techniques, toy models, and effective theories can be used to describe a wide range of systems.

The complex organometallic materials known by the misnomer "spin crossover" compounds exhibit a rich range of phase transitions and types of spatial order. Key aspects of the physics are the following.

  • Each transition metal ion can be in one of two possible states: low-spin or high-spin. 
  • The size of each molecular complex depends on the spin state.
  • Consequently, the molecules interact with their neighbours via elastic interactions.

A toy model that can describe this physics consists of expanding balls connected by springs. Various versions of this type of model are reviewed here. The simplest version is the chain model below.
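As a concrete illustration (my own shorthand, not necessarily the notation used in the review), a minimal one-dimensional version assigns each molecule a position x_i and an Ising pseudospin sigma_i = +1 (high spin) or -1 (low spin) that sets its radius, with neighbouring balls joined by harmonic springs:

E = \sum_i \frac{k}{2}\left[\,x_{i+1}-x_i - R(\sigma_{i+1}) - R(\sigma_i)\,\right]^2 + \frac{\Delta}{2}\sum_i \sigma_i ,
\qquad R(\sigma) = R_0 + \frac{\delta R}{2}\,\sigma .

Here Delta is the single-molecule energy difference between the two spin states and delta R is the change in radius; both symbols are mine.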

It turns out there are other classes of systems described by similar models. As far as I am aware, this was first pointed out in Consequences of Lattice Mismatch for Phase Equilibrium in Heterostructured Solids, by Layne B. Frechette, Christoph Dellago, and Phillip L. Geissler.

That paper is motivated by experiments on the growth of semiconductor quantum dots by ion exchange, such as when CdSe is bathed in an Ag-rich solution and Ag2Se is produced with heterostructures (i.e., patterns of Cd and Ag ions) that are different from the bulk crystal.

They consider the balls and springs model above on a triangular lattice.

They also point out how similar physics is relevant to binary metal alloys, e.g., Ag-Cu, citing 

Ising model for phase separation in alloys with anisotropic elastic interaction—I. Theory, P. Fratzl and O. Penrose

Those authors consider a square lattice with elastic interactions associated with bond stretching along the edges and diagonals of the squares and bending of the square angles.

Frechette et al. also mention experiments on thin films of DNA-modified metallic nanoparticles. Compared to atomic systems, these can tolerate a larger lattice mismatch before lattice strain leads to the formation of defects.

Other systems (not mentioned) described by similar Ising models are metal-hydrogen systems, where the Ising pseudospin signifies whether a hydrogen atom is present at a particular site in the metallic crystal.

Frechette et al. start with the ball and springs model and "integrate out" the springs to obtain an effective Hamiltonian, which is an Ising model.
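Schematically (my notation, not that of the paper), if the elastic energy is harmonic in the displacements u of the lattice sites and the spin state enters through the preferred bond lengths, then

E(u,\sigma) = \tfrac{1}{2}\, u^{\mathsf T} K\, u \;-\; u^{\mathsf T} f(\sigma) + E_0(\sigma),

and performing the Gaussian integral over u (or simply minimising over u, since the integral only adds a sigma-independent constant) gives

H_{\rm eff}(\sigma) = E_0(\sigma) - \tfrac{1}{2}\, f(\sigma)^{\mathsf T} K^{-1} f(\sigma).

If f is linear in the pseudospins, the second term generates pairwise Ising couplings J_ij; they are long-ranged because the lattice Green's function K^{-1} is long-ranged.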


The spatial range of the interaction between Ising spins is shown in the colour-shaded plot below.
The interaction has two components.
One is an infinite-range "ferromagnetic" part, seen as the light blue below.
The second is a short-range interaction which is mostly "antiferromagnetic" (i.e., red) but extends over several lattice sites. (Note that this interaction is frustrated on the triangular lattice.)



Using this toy model, Frechette et al. can obtain complex patterns (heterostructures) similar to those seen in quantum dots grown by ion exchange.

There is some subtle (and confusing) physics associated with deriving the Ising model from the ball and springs model. 

Due to the long-range nature of elastic interactions, the boundary conditions matter. 

The infinite-range part of the Ising interaction arises because the equilibrium lattice constant of the crystal depends on the net "magnetisation" of the "spins". But that is a story for another day.

Monday, November 10, 2025

Why is the state of universities such an emotional issue for me?

It is all about values!

Universities have changed dramatically over the course of my lifetime. Australian universities are receiving increasing media attention due to failures in management and governance. But there is a lot more to the story, particularly at the grassroots level, of the everyday experience of students and faculty. It is all about the four M's: management, marketing, metrics, and money. Learning, understanding, and discovering things for their own sake is alien and marginalised. I have stopped writing posts about this. So why come back to it?

I am often struck by how emotional this issue is for me and how hard it is sometimes to talk about it, particularly with those who hold a different view from mine. Writing blog posts (e.g. this one) about it has been a somewhat constructive outlet, rather than exploding in anger at an overpaid and unqualified "manager" or one of their many multiplying minions.

A few weeks ago, I listened to three public lectures by the Australian historian Peter Harrison. [He is my former UQ colleague. We are now both Emeritus. I benefited from excellent seminars he ran at UQ, some of which I blogged about].

The lectures helped me understand what has happened to universities and also why it is a sensitive subject for me. Briefly, it is all about values and virtues.

The lectures are nicely summarised by Peter in the short article, 

How our universities became disenchanted: Secularisation, bureaucracy and the erosion of value

Reading the article rather than this blog post is recommended. I won't try and summarise it, but rather highlight a few points and then make some peripheral commentary.

I agree with Peter's descriptions of the problems we see on the surface (bureaucracy, metrics, and management features significantly). His lectures are a much deeper analysis of underlying cultural changes and shifting worldviews that have occurred over centuries, leading universities to evolve into their current mangled form.

A few things to clarify to avoid potential misunderstanding of Peter's arguments.

Secularisation is defined broadly. It does not just refer to the decline in the public influence of Christianity in the Western world. It is also about Greek philosophy, particularly Aristotle, and the associated emphasis on virtues and transcendence. Peter states:

"The intrinsic motivations of teachers, researchers and scholars can be understood in terms of virtues or duties. According to virtue ethics, the “good” of an activity is related to the way it leads to a cultivation and expression of particular virtues. These, in turn, are related to a particular conception of natural human ends or goals. (Aristotle’s understanding of human nature, which informs virtue ethics, proposes that human beings are naturally oriented towards knowledge, and that they are fulfilled as persons to the extent that they pursue those goals and develop the requisite intellectual virtues.)"

The virtue ethics of Aristotle [and Alasdair MacIntyre] conflicts with competing ethical visions, including duty-oriented (deontological) ethics, consequentialist ethics, and particularly utilitarianism. This led to a shift away from intrinsic goods to what things are "good for", i.e., what practical outcomes they produce. For example, is scientific research "good" and does it have "value" because it cultivates curiosity, awe, and wonder, or because it will lead to technology that will stimulate economic growth?

Peter draws significantly on Max Weber's ideas about secularisation, institutions, and authority. Weber argued that a natural consequence of secularisation was disenchantment (the loss of magic in the world). This is not simply "people believe in science rather than magic". Disenchantment is a loss of a sense of awe, wonder, and mystery.

Now, a few peripheral responses to the lectures.

Is secularisation the dominant force that has created these problems for universities? In question time, Peter was asked whether capitalism was more important, i.e., that universities are treated as businesses and students as customers. He agreed that capitalism is a factor but also pointed out how Weber emphasised that capitalism was connected to the secularising effects of the Protestant Reformation.

 I think that two other factors to consider are egalitarianism and opportunism. These flow from universities being "victims" of their own success. Similar issues may also be relevant to private schools, hospitals, and charities. They have often been founded by people of "charisma" [in the sense used by Weber] motivated by virtue ethics. Founders were not concerned with power, status, or money. What they were doing had intrinsic value to them and was "virtuous". In the early stages, these institutions attracted people with similar ideals. The associated energy, creativity, and common vision led to "success." Students learnt things, patients got healed, and poverty was alleviated. But, this success attracted attention and  the institution then had power, money, status, and influence.

The opportunists then move in. They are attracted to the potential to share in the power, money, status, and influence. The institution then takes on a life of its own, and the ideals and virtue ethics of the founders are squeezed out. In some sense, opportunism might be argued to be a consequence of secularisation. 

[Aside: two old posts considered a similar evolution, motivated by a classic article about the development of businesses.]

One indicator of the "success" of universities is how their graduates join the elite and hold significant influence in society. [Aside: this ignores the problem of distinguishing correlation and causality. Do universities actually train students well or just select those who will succeed anyway?] Before (around) 1960, (mostly) only the children of the elite got to attend university. Demands arose that more people should have access to this privilege. This led to "massification" and an explosion in the number of students, courses, and institutions. This continues today, globally. Associated with this was more bureaucracy. Furthermore, the "iron triangle" of cost, access, and quality presents a challenge for this egalitarianism. If access increases, cost rises and quality falls, unless you spend even more. It is wonderful that universities have become more diverse and accessible. On the other hand, I fear that for every underprivileged student admitted whose mind is expanded and life enriched, many more rich, lazy, and entitled students suck the life out of the system.

Metrics are pseudo-rational

Peter rightly discussed how the proliferating use of metrics to measure value is problematic and reflects the "rationalisation" associated with bureaucracy (described by Weber). Even if one embraces the idea that "rational" and "objective" assessment is desirable, my observation is that in practice, metrics are invariably used in an irrational way. For example, managers look at the impact factor of journals but are blissfully oblivious to the fact that the citation distribution for any journal is so broad and long-tailed that the mean is meaningless. The underlying problem is that too many of the people doing assessments suffer from some mixture of busyness, intellectual laziness, and arrogance. Too many managers are power hungry and want to make the decisions themselves, and don't trust faculty who actually may understand the intellectual merits and weaknesses of the work being assessed.

The problems are just as great for the sciences as the humanities

On the surface, the humanities are doing worse than the sciences, for example, if you look at declining student numbers, threats of job cuts, political criticism, and status within the university. This is because science is associated with technology, which is associated with jobs and economic growth. However, if you look at pure science that is driven by curiosity, awe, and wonder, then one should be concerned. There is an aversion to attacking difficult and risky problems, particularly those that require long-term investment or have been around for a while. The emphasis is on low-hanging fruit and the latest fashion. Almost all physics and chemistry research is framed in terms of potential applications, not fundamental understanding. Sometimes I feel some of my colleagues are doing engineering, not physics. In a similar vein, biochemists frame research in terms of biomedical applications, not the beauty and wonders of how biological systems work.

Are universities destined for bureaucratic self-destruction?

Provocatively, Peter considered the potential implications of the arguments of historian and anthropologist Joseph Tainter concerning the collapse of complex societies. On the technical side, this reminded me of a famous result in ecology by Robert May, that as the complexity of a system (the number of components and interactions) increases, it can become unstable.

I don't think universities as institutions will collapse. They are too integrated into the fabric of modern capitalism. What may collapse is the production of well-educated (in the Renaissance sense) graduates and research that is beautiful, original, and awe-inspiring. This leads naturally into the following question.

Is the age of great discoveries over?

Peter briefly raised this issue. On the one hand, we are victims of our own success. It is amazing how much we now know and understand. Hence, it is harder to discover truly new and amazing things. On the other hand, because of emergence we should expect surprises.

There is hope on the margins

Peter did not just lament the current situation but made some concrete suggestions for addressing the problems, even though we are trapped in Weber's "iron cage" of bureaucracy.

  • Re-balancing the structures of authority
  • Finding a place for values discourse in the universities
  • Developing ways of resolving differences with a sense of the rationality of Alasdair MacIntyre in mind
On the first, I note the encouraging work of the ANU Governance Project.

Peter also encouraged people to work on the margins. I also think that this is where the most significant scholarship and stimulus for reform will happen. A nice example is the story that Malcolm Gladwell tells in a podcast episode, The Obscure Virus Club.




Monday, November 3, 2025

Overdoped cuprates are not Fermi liquids

They are anisotropic marginal Fermi liquids.

A commenter on my recent AI blog post mentioned the following preprint, with a very different point of view.

Superconductivity in overdoped cuprates can be understood from a BCS perspective!

B.J. Ramshaw, Steven A. Kivelson

The authors claim:

" a theoretical understanding of the "essential physics" is achievable in terms of a conventional Fermi-liquid treatment of the normal state...

...observed features of the overdoped materials that are inconsistent with this perspective can be attributed to the expected effects of the intrinsic disorder associated with most of the materials being solid state solutions"

On the latter point, they mention two papers that found that the resistivity versus temperature can have a linear component. But there is much more.

The authors appear unaware of the experimental data and detailed theoretical analysis showing that the overdoped cuprates are anisotropic marginal Fermi liquids. 

Angle-dependent magnetoresistance measurements by Nigel Hussey's group, reported in 2006, were consistent with an anisotropy in the scattering rate around the Fermi surface.

Papers in 2011 and 2012 pushed the analysis further.

Consistent Description of the Metallic Phase of Overdoped Cuprate Superconductors as an Anisotropic Marginal Fermi Liquid, J. Kokalj and Ross H. McKenzie

Transport properties of the metallic state of overdoped cuprate superconductors from an anisotropic marginal Fermi liquid model, J. Kokalj, N. E. Hussey, and Ross H. McKenzie 

The self-energy is the sum of two terms with characteristic dependencies on temperature, frequency, location on the Fermi surface, and doping. The first term is isotropic over the Fermi surface, independent of doping, and has the frequency and temperature dependence characteristic of a Fermi liquid. 

The second term is anisotropic over the Fermi surface (vanishing at the same points as the superconducting energy gap), strongly varies with doping (scaling roughly with 𝑇𝑐, the superconducting transition temperature), and has the frequency and temperature dependence characteristic of a marginal Fermi liquid. 
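Schematically (my shorthand for the structure just described, not the precise parametrisation of the papers), the self-energy has the form

\Sigma(\mathbf{k},\omega) \simeq \Sigma_{\rm FL}(\omega) + \lambda(\phi)\,\Sigma_{\rm MFL}(\omega),
\qquad -\,\mathrm{Im}\,\Sigma_{\rm FL}(\omega) \propto \omega^{2} + (\pi k_B T)^{2},
\qquad -\,\mathrm{Im}\,\Sigma_{\rm MFL}(\omega) \sim \frac{\pi}{2}\,\max(|\omega|, k_B T),

where phi is the angle around the Fermi surface and the weight lambda(phi) vanishes at the d-wave nodes (e.g., lambda(phi) proportional to cos^2(2 phi)) and scales roughly with Tc.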

The first paper showed that this self-energy can describe a range of experimental data including angle-dependent magnetoresistance and quasiparticle renormalizations determined from specific heat, quantum oscillations, and angle-resolved photoemission spectroscopy. 

The second paper showed, without introducing new parameters and neglecting vertex corrections, that this model self-energy can give a quantitative description of the temperature and doping dependence of a range of reported transport properties of Tl2Ba2CuO6+𝛿 samples. These include the intralayer resistivity, the frequency-dependent optical conductivity, the intralayer magnetoresistance, and the Hall coefficient. The temperature dependence of the latter two is particularly sensitive to the anisotropy of the scattering rate and to the shape of the Fermi surface.

For a summary of all of this, see slides from a talk I gave at Stanford back in 2013.

I am curious whether the authors can explain the anisotropic part of the self-energy in terms of disorder in samples.

Wednesday, October 29, 2025

Rodney Baxter (1940-2025): Mathematical Physicist

I recently learnt that Rodney Baxter died earlier this year. He was adept at finding exact solutions to two-dimensional lattice models in statistical mechanics. He had a remarkably low public profile. But, during my lifetime, he was one of the Australian-based researchers who made the most significant and unique contributions to physics, broadly defined. Evidence of this is the list of international awards he received.

On Baxter's scientific achievements, see the obituary from the ANU, and earlier testimonials from Barry McCoy in 2000 and from Vladimir Bazhanov on the award of the Henri Poincaré Prize to Baxter in 2021.

Exact solutions of "toy models" are important in understanding emergent phenomena. Before Onsager found an exact solution to the two-dimensional Ising model in 1944, there was debate about whether statistical mechanics could describe phase transitions and the associated discontinuities and singularities in thermodynamic quantities. 

Exact solutions provide benchmarks for approximation schemes and computational methods. They have also guided and elucidated key developments such as scaling, universality, the renormalisation group and conformal field theory.

Exact solutions guided Haldane's development of the Luttinger liquid and our understanding of the Kondo problem.

I mention the specific significance of a few of Baxter's solutions. His Exact solution of the eight-vertex model in 1972 gave continuously varying critical exponents that depended on the interaction strength in the model. This surprised many because it seemed to go against the hypothesis of the universality of critical exponents. This was later reconciled in terms of connections to the Berezinskii-Kosterlitz-Thouless (BKT) phase transition, which was discovered around the same time. I am not sure who explicitly resolved this.

It might be argued that Baxter independently discovered the BKT transition. For example, consider the abstract of a 1973 paper, Spontaneous staggered polarization of the F-model

"The “order parameter” of the two-dimensional F-model, namely the spontaneous staggered polarization P0, is derived exactly. At the critical temperature P0 has an essential singularity, both P0 and all its derivatives with respect to temperature vanishing."

Following earlier work by Lieb, Baxter explored the connection of two-dimensional classical models with one-dimensional quantum lattice models. For example, the solution of the XYZ quantum spin chain is related to the Eight-vertex model. Central to this is the Yang-Baxter equation. Alexander B. Zamolodchikov connected this to integrable quantum field theories in 1+1 dimensions. [Aside: the Yang is C.N. Yang, of Yang-Mills and Yang-Lee fame, who died last week.]

Baxter's work had completely unanticipated consequences beyond physics. Mathematicians discovered profound connections between his exact solutions and the theory of knots, number theory, and elliptic functions. It also stimulated the development of quantum groups.

I give two personal anecdotes on my own interactions with Baxter. I was an undergraduate at the ANU from 1979 to 1982. This meant I was completely separated from the half of the university known as the Institute for Advanced Studies (IAS), where Baxter worked. Faculty in the IAS there did no teaching, did not have to apply for external grants, and had considerable academic freedom. Most Ph.D. students were in the IAS. By today's standards, the IAS was a cushy deal, particularly if faculty did not get involved in internal politics. As an undergraduate, I really enjoyed my courses on thermodynamics, statistical mechanics, and pure mathematics. My honours supervisor, Hans Buchdahl, suggested that I talk to Baxter about possibly doing a Ph.D. with him. I found him quiet, unassuming, and unambitious. He had only supervised a few students. He wisely cautioned me that Ph.D. students might not be involved in finding exact solutions but might just be comparing exact results to series expansions.

In 1987, when I was a graduate student at Princeton, Baxter visited, hosted by Elliott Lieb, and gave a Mathematical Physics Seminar. This visit was just after he received the Dannie Heineman Prize for Mathematical Physics from the American Physical Society. These seminars generally had a small audience, mostly people in the Mathematical Physics group. However, for Baxter, many string theorists (Witten, Callan, Gross, Harvey, ...) attended. They had a lot of questions for Baxter. But, from my vague recollection, he struggled to answer them, partly because he wasn't familiar with the language of quantum field theory. 

I was told that he got nice job offers from the USA. He could have earned more money and achieved a higher status. For personal reasons, he turned down the offer of a Royal Society Research Professorship at Cambridge.  But he seemed content puttering away in Australia. He just loved solving models and enjoyed family life down under.

Baxter wrote a short autobiography, An Accidental Academic. He began his career and made his big discoveries in a different era in Australian universities. The ANU had generous and guaranteed funding. Staff had the freedom to pursue curiosity-driven research on difficult problems that might take years to solve. There was little concern with the obsessions of today: money, metrics, management, and marketing. It is wonderful that Baxter was able to do what he did. It is striking that he says he retired early so he would not have to start making grant applications!

Saturday, October 25, 2025

Can AI solve quantum-many body problems?

I find it difficult to wade through all the hype about AI, along with the anecdotes about its failures to reliably answer basic questions.

Gerard Milburn kindly brought to my attention a nice paper that systematically addresses whether AI is useful as an aid (research assistant) for solving basic (but difficult) problems that condensed matter theorists care about.

CMT-Benchmark: A Benchmark for Condensed Matter Theory Built by Expert Researchers

The abstract is below.

My only comment is one of perspective. Is the cup half full or half empty? Do we emphasise the failures or the successes?

The optimists among us will claim that the success in solving a small number of these difficult problems shows the power and potential of AI. It is just a matter of time before LLMs can solve most of these problems, and we will see dramatic increases in research productivity (e.g., reductions in the time taken to complete a project).

The pessimists and the sceptically inclined will claim that the failures highlight the limitations of AI, particularly when training data sets are small. We are still a long way from replacing graduate students with AI bots (or at least from using AI to train students in the first year of their PhD).

What do you think? Should this study lead to optimism, pessimism, or just wait and see?

----------

Large language models (LLMs) have shown remarkable progress in coding and math problem-solving, but evaluation on advanced research-level problems in hard sciences remains scarce. To fill this gap, we present CMT-Benchmark, a dataset of 50 problems covering condensed matter theory (CMT) at the level of an expert researcher. Topics span analytical and computational approaches in quantum many-body, and classical statistical mechanics. The dataset was designed and verified by a panel of expert researchers from around the world. We built the dataset through a collaborative environment that challenges the panel to write and refine problems they would want a research assistant to solve, including Hartree-Fock, exact diagonalization, quantum/variational Monte Carlo, density matrix renormalization group (DMRG), quantum/classical statistical mechanics, and model building. We evaluate LLMs by programmatically checking solutions against expert-supplied ground truth. We developed machine-grading, including symbolic handling of non-commuting operators via normal ordering. They generalize across tasks too. Our evaluations show that frontier models struggle with all of the problems in the dataset, highlighting a gap in the physical reasoning skills of current LLMs. Notably, experts identified strategies for creating increasingly difficult problems by interacting with the LLMs and exploiting common failure modes. The best model, GPT5, solves 30% of the problems; average across 17 models (GPT, Gemini, Claude, DeepSeek, Llama) is 11.4±2.1%. Moreover, 18 problems are solved by none of the 17 models, and 26 by at most one. These unsolved problems span Quantum Monte Carlo, Variational Monte Carlo, and DMRG. Answers sometimes violate fundamental symmetries or have unphysical scaling dimensions. We believe this benchmark will guide development toward capable AI research assistants and tutors.

Monday, October 20, 2025

Undergraduates need to learn about the Ising model

A typical undergraduate course on statistical mechanics is arguably misleading because it (unintentionally) fails to tell students several important, interrelated things.

Statistical mechanics is not just about how to calculate thermodynamic properties of a collection of non-interacting particles.

A hundred years ago, many physicists did not believe that statistical mechanics could describe phase transitions. Arguably, this lingering doubt only ended fifty years ago with Wilson's development of renormalisation group theory.

It is about emergence: how microscopic properties are related to macroscopic properties.

Leo Kadanoff commented, "Starting around 1925, a change occurred: With the work of Ising, statistical mechanics began to be used to describe the behaviour of many particles at once."

When I came to UQ 25 years ago, I taught PHYS3020 Statistical Mechanics a couple of times. To my shame, I never discussed the Ising model. There is a nice section on it in the course textbook, Thermal Physics: An Introduction, by Daniel Schroeder. I guess I did not think there was time to "fit it in" and back then, I did not appreciate how important the Ising model is. This was a mistake.

Things have changed for the better due to my colleagues Peter Jacobson and Karen Kheruntsyan. They now include one lecture on the model, and students complete a computational assignment in which they write a Monte Carlo code to simulate the model.
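For readers who want to experiment, here is a minimal sketch of such a simulation (my own illustrative code, not the assignment code): a Metropolis Monte Carlo simulation of the two-dimensional Ising model on a square lattice.

# Minimal Metropolis Monte Carlo for the 2D Ising model (illustrative sketch only).
# Energy: E = -J * sum over nearest-neighbour pairs of s_i * s_j, periodic boundaries.
import numpy as np

rng = np.random.default_rng(0)

def metropolis_sweep(spins, beta, J=1.0):
    """One sweep = L*L attempted single-spin flips."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        # sum of the four nearest neighbours (periodic boundary conditions)
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * J * spins[i, j] * nn  # energy change if this spin is flipped
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1

L = 32
for T in [1.5, 2.27, 3.5]:  # below, near, and above the exact Tc = 2/ln(1+sqrt(2)) ~ 2.269
    spins = rng.choice(np.array([-1, 1]), size=(L, L))
    for sweep in range(2000):  # crude: equilibration and measurement lumped together
        metropolis_sweep(spins, beta=1.0 / T)
    print(f"T = {T}: |magnetisation| per spin ~ {abs(spins.sum()) / L**2:.2f}")

At T = 1.5 the magnetisation per spin should come out close to 1, near zero at T = 3.5, and strongly fluctuating near Tc, which is the point of the exercise.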

This year, I am giving the lecture on the model. Here are my slides  and what I will write on the whiteboard or document viewer in the lecture.

Friday, October 17, 2025

One hundred years of Ising

In 1925, Ising published his paper on the solution of the model in one dimension. An English translation is available at https://www.hs-augsburg.de/~harsch/anglica/Chronology/20thC/Ising/isi_fm00.html

Coincidentally, next week I am giving a lecture on the Ising model to an undergraduate class in statistical mechanics. To flesh out the significance and relevance of the model, here are some of the interesting articles I have been looking at:

The Ising model celebrates a century of interdisciplinary contributions, Michael W. Macy, Boleslaw K. Szymanski and Janusz A. Hołyst

This mostly discusses the relevance of the model to understanding basic problems in sociology, including its relation to the classic Schelling model for social segregation.

The Ising model: highlights and perspectives, Christof Külske

This mostly discusses how the model is central to some work in mathematical physics and probability theory.

The Fate of Ernst Ising and the Fate of his Model, Thomas Ising, Reinhard Folk, Ralph Kenna, Bertrand Berche, and Yurij Holovatch.

This includes some nice memories of Ising from his son, Thomas.

Aside: I wanted a plot of the specific heat for the one-dimensional model. According to Google AI "In a 1D Ising model with no external magnetic field, the specific heat is zero at all temperatures."
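For the record, that claim is incorrect. The transfer-matrix solution gives the free energy per spin f = -k_B T ln[2 cosh(J/k_B T)], from which the specific heat is

\frac{C(T)}{N k_B} = \left(\frac{J}{k_B T}\right)^{2} \operatorname{sech}^{2}\!\left(\frac{J}{k_B T}\right),

which is non-zero at all finite temperatures and has a broad maximum (a Schottky-like anomaly) around k_B T ≈ 0.8 J. What is true is that there is no singularity, because there is no phase transition at non-zero temperature in one dimension.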

Wednesday, October 8, 2025

2025 Nobel Prize in Physics: Macroscopic quantum effects

John Clarke, Michel H. Devoret, and John M. Martinis received the prize  “for the discovery of macroscopic quantum mechanical tunnelling and energy quantisation in an electric circuit.”

The work was published in three papers in PRL in 1984 and 1985. The New York Times has a nice discussion of the award, including comments from Clarke, Martinis, Tony Leggett, and Steve Girvin.

There is some rich, subtle, and beautiful physics here. As a theorist, I comment on the conceptual and theoretical side, but don't want to minimise that doing the experiments was a technical breakthrough.

The experiments were directly stimulated by Tony Leggett, who, beginning in the late 70s, championed the idea that Josephson junctions and SQUIDs could be used to test whether quantum mechanics was valid at the macroscopic level. Many in the quantum foundations community were sceptical. Leggett and Amir Caldeira performed some beautiful, concrete, realistic calculations of the effect of decoherence and dissipation on quantum tunneling in SQUIDs. The results suggested that macroscopic tunneling should be observable.

Aside: Leggett rightly received a Nobel in 2003 for his work on the theory of superfluid 3He. Nevertheless, I believe his work on quantum foundations is even more significant.

Subtle point 1. What do we mean by a macroscopic quantum state?

It is commonly said that superconductors and superfluids are in a macroscopic quantum state. Signatures are the quantisation of magnetic flux in a superconducting cylinder and how the current through a Josephson junction oscillates as a function of the magnetic flux through the junction. I discuss this in the chapter on Quantum Matter in my Very Short Introduction.

Leggett argued that these experiments are explained by the Josephson equations, which treat the phase of the superconducting order parameter as a classical variable. For example, in a SQUID, it satisfies a classical dynamical equation. 

If the state is truly quantum, then the phase variable should be quantised.

Aside: a nice microscopic derivation, starting from BCS theory and using path integrals, of the effective action that describes the quantum dynamics was given in 1982 by Vinay Ambegaokar, Ulrich Eckern, and Gerd Schön.

Subtle point 2. There are different signatures of quantum theory: energy level quantisation, tunnelling, coherence (interference), and entanglement.

In 1984-5, Clarke, Devoret, and Martinis observed the first two. Macroscopic quantum coherence is harder to detect and was only observed in 2000. 

In a nice autobiographical article, Leggett commented in 2020:
Because of the strong prejudice in the quantum foundations community that it would never be possible to demonstrate characteristically quantum-mechanical effects at the macroscopic level, this assertion made us [Leggett and Garg, 1985] the target of repeated critical comments over the next few years. Fortunately, our experimental colleagues were more open-minded, and several groups started working toward a meaningful experiment along the lines we had suggested, resulting in the first demonstrations (29, 30) of MQC [Macroscopic Quantum Coherence] in rf SQUIDs (by then rechristened flux qubits) at the turn of the century. However, it would not be until 2016 that an experiment along the lines we had suggested (actually using a rather simpler protocol than our original one) was carried out (31) and, to my mind, definitively refuted macrorealism at that level.  
I find it rather amusing that nowadays the younger generation of experimentalists in the superconducting qubit area blithely writes papers with words like “artificial atom” in their titles, apparently unconscious of how controversial that claim once was.

Two final comments on the sociology side.

Superconductivity and superfluidity have now been the basis for Nobel Prizes in six and four different years, respectively.

The most widely cited of the three PRLs that were the basis of the Prize is the one on quantum tunnelling, with about 500 citations on Google Scholar. (In contrast, Devoret has more than 20 other papers that are more widely cited). From 1986 to 1992 it was cited about a dozen times per year. Between 1993 and 2001 it was cited only a total of 30 times. Since 2001, it has been cited about 20 times per year.

This is just one more example of how citation rates are a poor measure of the significance of work and a predictor of future success.

Monday, October 6, 2025

Nobel Prize predictions for 2025

 This week Nobel Prizes will be announced. I have not done predictions since 2020. This is a fun exercise. It is also good to reflect on what has been achieved, including outside our own areas, and big advances from the past we may now take for granted.

Before writing this I looked at suggestions from readers of Doug Natelson's blog, nanoscale views, an article in Physics World, predictions from Clarivate based on citations, and recent recipients of the Wolf Prize.

Please enter your own predictions below.

Although we know little about how the process actually works or the explicit criteria used, I have a few speculative suggestions and observations.

1. The Wolf Prize is often a precursor.

2. Every now and then, they seem to surprise us.

3. Every few years, the physics committee seems to go for something technological, sometimes arguably outside physics, perhaps to remind people how important physics is to modern technology and other areas of science.

4. They seem to spread the awards around between different areas of physics.

5. Theory only gets awards when it has led to well-established experimental observations. Brilliant theoretical discoveries that motivate large research enterprises (more theory and experimental searches) are not enough. This is why predictions based on citation numbers may be misleading.

6. Once an award has been made on one topic, it is unlikely that there will be another award for a long time, if ever, on that same topic. In other words, there is a high bar for a second award.

7. I don't think the logic is to pick an important topic and then choose who should get the prize for the topic. This approach works against topics where many researchers independently made contributions that were all important. The awardee needs to be a standout who won't be a debatable choice.

What do you think of these principles?

For some of the above reasons, I discuss below why I am sceptical about some specific predictions.

My top prediction for physics is Metamaterials with negative refractive index, going to John Pendry (theory) and David Smith (experiment). This is a topic I know little about.

Is it just a matter of time before twisted bilayer graphene wins a prize? This might go to Allan MacDonald (theory) and Pablo Jarillo-Herrero (experiment). They recently received a Wolf Prize. One thing that convinced me of the importance of this discovery was a preprint on moiré WSe2 with beautiful phase diagrams such as this one.


The level of control is truly amazing. Helpful background is the recent Physics Today article by Bernevig and Efetov.

This is big enough to overcome 6. and the earlier prize for graphene.

Unfortunately, my past prediction/wish of Kondo and heavy fermions won't happen as Jun Kondo died in 2022. This suggestion also always went against Principle 6, with the award to Ken Wilson citing his solution of the Kondo problem.

The prediction of Berry and Aharonov for topological phases in quantum mechanics is reasonable, except for questions about historical precursors.

The prediction of topological insulators is going against 6. and the award to Haldane in 2016.

Clarivate's prediction of DiVincenzo and Loss (for qubits based on electron spin in quantum dots) goes against 5. and 7. It is just one of many competing proposals for a scalable quantum computer, and a large-scale device is still elusive.

Predictions of a prize for quantum algorithms (Shor, Deutsch, Brassard, Bennett) go against 5. 

Chemistry 

I don't know enough chemistry to make meaningful predictions. On the other hand, in 2019 I did correctly predict John Goodenough for lithium batteries. I do like the prediction from Clarivate for biomolecular condensates (Brangwynne, Hyman, and Rosen). I discussed them briefly in my review article on emergence.

What do you think about my 7 "principles"?

What are your predictions?

Tuesday, September 30, 2025

Elastic frustration in molecular crystals

Crystals of large molecules exhibit diverse structures. In other words, the geometric arrangements of the molecules relative to one another are complex. Given a specific molecule, theoretically predicting its crystal structure is a challenge and is the basis of a competition.

One of the reasons the structures are rich and the theoretical problem is so challenging is that there are typically many different interactions between different molecules, including electrostatic, hydrogen bonding, pi-pi,...

Another challenge is to understand the elastic and plastic properties of the crystals.

Some of my UQ colleagues recently published a paper that highlights some of the complexity.

Origins of elasticity in molecular materials

Amy J. Thompson, Bowie S. K. Chong, Elise P. Kenny, Jack D. Evans, Joshua A. Powell, Mark A. Spackman, John C. McMurtrie, Benjamin J. Powell, and Jack K. Clegg

They used calculations based on Density Functional Theory (DFT) to separate the contributions to the elasticity from the different interactions between the molecules. The figure below shows the three dominant interactions in the family of crystals that they consider.

The figure below shows the energy of interaction between a pair of molecules for the different interactions.
Note the purple vertical bar, which is the value of the coordinate in the equilibrium geometry of the whole crystal. The width of the bar represents variations in both lengths that occur in typical elastic experiments.
What is striking to me is the large difference between the positions of the potential minima for the individual interactions and the minima for the combined interactions.

This is an example of frustration: it is not possible to simultaneously minimise the energy of all the individual pairwise interactions. They are competing with one another.

A toy model illustrates the essential physics. I came up with this model partly motivated by similar physics that occurs in "spin-crossover" materials.


The upper (lower) spring has equilibrium length a (b) and spring constant k (k'); the upper spring connects the surfaces of the two balls (each of radius R), and the lower spring connects their centres. In the harmonic approximation, the total elastic energy as a function of the centre-to-centre separation r is

E(r) = (1/2) k (r - 2R - a)^2 + (1/2) k' (r - b)^2

Minimising this gives the equilibrium separation of the two molecules,

r* = [ k (a + 2R) + k' b ] / (k + k'),

which is intermediate between a + 2R and b. This illustrates the elastic frustration. Neither of the springs (bonds) is at its optimum length.

The system is stable provided that k + k' is positive. Thus, it is not necessary that both k and k' be positive. The possibility that one of the k's is negative is relevant to real materials. Thompson et al. showed that the individual molecular interaction energies are described by Morse potentials. If one is far enough from the minimum of the potential, the local curvature can be negative. 
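A quick numerical check of this toy model (a sketch, with made-up parameter values) confirms that the equilibrium separation sits between the two preferred lengths, so neither spring is relaxed.

# Two-spring toy model for elastic frustration (illustrative parameter values only).
import numpy as np
from scipy.optimize import minimize_scalar

k, kp = 2.0, 1.0         # spring constants (arbitrary units)
a, b, R = 3.0, 6.5, 1.0  # natural lengths and ball radius: the springs prefer a+2R = 5.0 and b = 6.5

def energy(r):
    """Total elastic energy as a function of the centre-to-centre separation r."""
    return 0.5 * k * (r - 2 * R - a) ** 2 + 0.5 * kp * (r - b) ** 2

r_star = minimize_scalar(energy, bounds=(0.0, 20.0), method="bounded").x
r_analytic = (k * (a + 2 * R) + kp * b) / (k + kp)  # weighted average of the two preferred lengths

print(f"numerical minimum r* = {r_star:.3f}, analytic result = {r_analytic:.3f}")
print(f"upper spring prefers {a + 2 * R:.1f}, lower spring prefers {b:.1f}; both are frustrated")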

Monday, September 22, 2025

Turbulent flows in active matter

The power of toy models and effective theories in describing and understanding emergent phenomena is illustrated by a 2012 study of the turbulence in the fluid flow of swimming bacteria.

Meso-scale turbulence in living fluids

Henricus H. Wensink, Jörn Dunkel, Sebastian Heidenreich, Knut Drescher, Raymond E. Goldstein, Hartmut Löwen, and Julia M. Yeomans

They found that a qualitative and quantitative description of observations of flow patterns, energy spectra, and velocity structure functions was given by a toy model of self-propelled rods (similar to that proposed for flocking of birds) and a minimal continuum model for incompressible flow. For the toy model, they presented a phase diagram (shown below) as a function of the volume fraction of the fluid occupied by rods and the aspect ratio of the rods. There were six distinct phases: dilute state (D), jamming (J), swarming (S), bionematic (B), turbulent (T), and laned (L). The turbulent state occurs for high filling fractions and intermediate aspect ratios, covering typical values for bacteria.


The horizontal axis is the volume fraction, going from 0 to 1.

The figure below compares the experimental data (top right) for the vorticity and the toy model (lower left) and the continuum model (lower right).

Regarding this work, Tom McLeish highlighted the importance of the identification of the relevant mesoscopic scale and the power of toy models and effective theories in the following beautiful commentary taken from his book, The Poetry and Music of Science

“Individual ‘bacteria’ are represented in this simulation by simple rod-like structures that possess just the two properties of mutual repulsion, and the exertion of a constant swimming force along their own length. The rest is simply calculation of the consequences. No more detailed account than this is taken of the complexities within a bacterium. It is somewhat astonishing that a model of the intermediate elemental structures, on such parsimonious lines, is able to reproduce the complex features of the emergent flow structure. 

Impossible to deduce inductively the salient features of the underlying physics from the fluid flow alone—creative imagination and a theoretical scalpel are required: the first to create a sufficient model of reality at the underlying and unseen scale; the second to whittle away at its rough and over-ornate edges until what is left is the streamlined and necessary model. To ‘understand’ the turbulent fluid is to have identified the scale and structure of its origins. To look too closely is to be confused with unnecessary small detail, too coarsely and there is simply an echo of unexplained patterns.”

Thursday, September 18, 2025

Confusing bottom-up and top-down approaches to emergence


Due to emergence, reality is stratified. This is reflected in the existence of semi-autonomous scientific disciplines and subdisciplines. A major goal is to understand the relationship between different strata. For example, how is chemistry related to physics? How is genetics related to cell biology?

Before describing two alternative approaches —top-down and bottom-up —I need to point out that in different fields, these terms are used in opposite senses. That can be confusing!

In the latest version of my review article on emergence, I employ the same terminology traditionally used in condensed matter physics, chemistry, and biology. It is also consistent with the use of the term “downward causation” in philosophy. 

Top-down means going from long-distance scales to short-distance scales, i.e., going down in the diagrams shown in the figure above. In contrast, in the quantum field theory of elementary particles and fields (high-energy physics), “top-down” means the opposite, i.e., going from short to long distance length scales. This is because practitioners in that field tend to draw diagrams with high energies at the top and low energies at the bottom.

Bottom-up approaches aim to answer the question: how do properties observed at the macroscale emerge from the microscopic properties of the system? 
History suggests that this question may often be best addressed by identifying the relevant mesoscale at which modularity is observed, connecting the micro to the meso, and then the meso to the macro. For example, high-energy degrees of freedom can be "integrated out" to give an effective theory for the low-energy degrees of freedom.

Top-down approaches try to surmise something about the microscopic from the macroscopic. This has a long and fruitful history, albeit probably with many false starts that we may not hear about, unless we live through them or read history books. Kepler's snowflakes are an early example. Before people were completely convinced of the existence of atoms, the study of crystal facets and of Brownian motion provided hints of the atomic structure of matter. Planck deduced the existence of the quantum from the thermodynamics of black-body radiation, i.e. from macroscopic properties. Arguably, the first definitive determination of Avogadro's number was from Perrin's experiments on Brownian motion, which involved mesoscopic measurements. Comparing classical statistical mechanics to bulk thermodynamic properties gave hints of an underlying quantum structure to reality. The Sackur-Tetrode equation for the entropy of an ideal gas hinted at the quantisation of phase space. The Gibbs paradox hinted that fundamental particles are indistinguishable. The third law of thermodynamics hints at quantum degeneracy. Pauling’s proposal for the structure of ice was based on macroscopic measurements of its residual entropy. Pasteur deduced the chirality of molecules from observations of the facets in crystals of tartaric acid. Sometimes a “top-down” approach means one that focuses on the meso-scale and ignores microscopic details.

The top-down and bottom-up approaches should not be seen as exclusive or competitive, but rather complementary. Their relative priority or feasibility depends on the system of interest and the amount of information and techniques available to an investigator. Coleman has discussed the interplay of emergence and reductionism in condensed matter. In biology, Mayr advocated a “dual level of analysis” for organisms. In social science, Schelling discussed the interplay of the behaviour of individuals and the properties of social aggregates. In a classic study of complex organisations in business, understanding this interplay was termed differentiation and integration.

I thank Jeremy Schmit for requesting clarification of this terminology.

Friday, September 12, 2025

The role of superconductivity in development of the Standard Model

In 1986, Steven Weinberg published an article, Superconductivity for Particular Theorists, in which he stated

"No one did more than Nambu to bring the idea of spontaneously broken symmetries to the attention of elementary particle physicists. And, as he acknowledged in his ground-breaking 1960 article  "Axial Current Conservation in Weak Interactions'', Nambu was guided in this work by an analogy with the theory of superconductivity,..."

In the 1960 PRL, referenced by Weinberg, Nambu states that in the BCS theory, as refined by Bogoliubov, [and Anderson]

"gauge invariance, the energy gap, and the collective excitations are logically related to each other as was shown by the author. [Y. Nambu, Phys. Rev. 117, 648 (1960)] In the present case we have only to replace them by (chiral) (gamma_5) invariance, baryon mass, and the mesons." 

This connection is worked out explicitly in two papers in 1961. The first is Dynamical Model of Elementary Particles Based on an Analogy with Superconductivity. I, by Y. Nambu and G. Jona-Lasinio.

They acknowledge, 

"that the model treated here is not realistic enough to be compared with the actual nucleon problem. Our purpose was to show that a new possibility exists for field theory to be richer and more complex than has been hitherto envisaged,"

Hence, I consider this to be a toy model for an emergent phenomenon.


The model consists of a massless fermion field with a quartic interaction that has chiral invariance, i.e., it is unchanged by global gauge transformations associated with the gamma_5 matrix. (The Lagrangian is given above.) At the mean-field level, this symmetry is broken. Excitations include massless bosons (associated with the symmetry breaking and similar to those found by Goldstone) and bound fermion pairs. It was conjectured that these could be analogues of mesons and baryons, respectively. The model was proposed before quarks and QCD. Now, the fermion degrees of freedom would be identified with quarks, and the model illustrates the dynamical generation of quark masses. When generalised to include SU(2) or SU(3) symmetry, the model is considered to be an effective field theory for QCD, such as chiral effective theory.
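For reference, in one common modern convention the single-flavour Nambu-Jona-Lasinio Lagrangian is written

\mathcal{L} = \bar\psi\, i\gamma^{\mu}\partial_{\mu}\psi + G\left[(\bar\psi\psi)^{2} + (\bar\psi\, i\gamma_{5}\psi)^{2}\right],

which has no bare mass term and is invariant under the chiral rotation \psi \to e^{i\alpha\gamma_{5}}\psi. The mean-field (gap equation) treatment generates a fermion mass dynamically, in direct analogy with the BCS gap.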

Monday, September 8, 2025

Multi-step spin-state transitions in organometallics and frustrated antiferromagnetic Ising models

In previous posts, I discussed how "spin-crossover" material is a misnomer because many of these materials do not undergo crossovers but phase transitions due to collective effects. Furthermore, they exhibit rich behaviours, including hysteresis, incomplete transitions, and multiple-step transitions. Ising models can capture some of these effects.

Here, I discuss how an antiferromagnetic Ising model with frustrated interactions can give multi-step transitions. This has been studied previously by Paez-Espejo, Sy and Boukheddaden, and my UQ colleagues Jace Cruddas and Ben Powell. In their case, they start with a lattice "balls and spring" model and derive Ising models with an infinite-range ferromagnetic interaction and short-range antiferromagnetic interactions. They show that when the range of these interactions (and thus the frustration) is increased, more and more steps are observed.

Here, I do something simpler to illustrate some key physics and some subtleties and cautions.

fcc lattice

Consider the antiferromagnetic Ising model on the face-centred-cubic lattice in a magnetic field. 

[Historical trivia: the model was studied by William Shockley back in 1938, in the context of understanding alloys of gold and copper.]

The picture below shows a tetrahedron of four nearest neighbours in the fcc lattice.

Even with just nearest-neighbour interactions, the lattice is frustrated. On a tetrahedron, you cannot satisfy all six AFM interactions. Four bonds are satisfied, and two are unsatisfied.

The phase diagram of the model was studied using Monte Carlo by Kammerer et al. in 1996. It is shown above as a function of temperature and field. All the transition lines are (weakly) first-order.

The AB phase has AFM order within the [100] planes. It has an equal number of up and down spins.

The A3B phase has alternating FM and AFM order between neighbouring planes. Thus, 3/4 of the spins have the same direction as the magnetic field.

The stability of these ordered states is subtle. At zero temperature, both the AB and A3B states are massively degenerate. For a system of 4 x L^3 spins, there are 3 x 2^(2L) AB states and 6 x 2^L A3B states. At finite temperature, the system exhibits “order by disorder”.

On the phase diagram, I have shown three straight lines (blue, red, and dashed-black) representing a temperature sweep for three different spin-crossover systems. The "field" is given by h=1/2(Delta H - T Delta S). In the lower panel, I have shown the temperature dependence of the High Spin (HS) population for the three different systems. For clarity, I have not shown the effects of the hysteresis associated with the first-order transitions.

If Delta H is smaller than the values shown in the figure, then at low temperatures, the spin-crossover system will never reach the complete low-spin state.

Main points.

Multiple steps are possible even in a simple model. This is because frustration stabilises new phases in a magnetic field. Similar phenomena occur in other frustrated models, such as the antiferromagnetic Ising model on the triangular lattice and the J1-J2 model on a chain or a square lattice.

The number of steps may change depending on Delta S. This is because a temperature sweep traverses the field-temperature phase diagram asymmetrically.
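To make this second point concrete, here is a small sketch (illustrative numbers only, in units where J = k_B = 1) of where a temperature sweep sits in the field-temperature plane.

# Trajectory of a temperature sweep across the field-temperature phase diagram.
# The effective field is h(T) = (Delta_H - T * Delta_S) / 2, a straight line with
# slope -Delta_S/2 that crosses h = 0 at T_1/2 = Delta_H / Delta_S.
# Units J = k_B = 1; parameter values are illustrative, not fitted to any material.

def h_of_T(T, delta_H, delta_S):
    return 0.5 * (delta_H - T * delta_S)

for delta_H, delta_S in [(4.0, 4.0), (4.0, 2.0), (8.0, 4.0)]:
    T_half = delta_H / delta_S
    print(f"Delta_H = {delta_H}, Delta_S = {delta_S}: "
          f"h(T=0) = {h_of_T(0.0, delta_H, delta_S):.1f}, slope = {-delta_S / 2:.1f}, T_1/2 = {T_half:.1f}")
    # Different intercepts and slopes mean the line can cross a different number of
    # phase boundaries, which changes the number of steps seen in a temperature sweep.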

Caution.

Fluctuations matter.
The mean-field theory phase diagram was studied by Beath and Ryan. Their phase diagram is below. Clearly, there are significant qualitative differences, particularly in the stability of the A3B phase.
The transition temperature at zero field is 3.5 J, compared to the value of 1.4J from Monte Carlo.


Monte Carlo simulations may be fraught.
Because of the many competing ordered states associated with frustration, Kammerer et al. note that “in a Monte Carlo simulation one needs unusually large systems in order to observe the correct asymptotic behaviour, and that the effect gets worse with decreasing temperature because of the proximity of the phase transition to the less ordered phase at T=0”. 

Open questions.

The example above hints at what the essential physics may be and how frustrated Ising models may capture it. However, to definitively establish the connection with real materials, several issues need to be resolved.

1. Show definitively how elastic interactions can produce the necessary Ising interactions. In particular, derive a formula for the interactions in terms of elastic properties of the high-spin and low-spin states. How do their structural differences, and the associated bond stretches or compressions, affect the elastic energy? What is the magnitude, range, and direction of the interactions?

[n.b. Different authors have different expressions for the Ising interactions for a range of toy models, using a range of approximations. It also needs to be done for a general atomic "force field".]

2. For specific materials, calculate the Ising interactions from a DFT-based method. Then show that the relevant Ising model does produce the steps and hysteresis observed experimentally.


Tuesday, September 2, 2025

"Ferromagnetic" Ising models for spin-state transitions in organometallics

In recent posts, I discussed how "spin crossover" is a misnomer for the plethora of organometallic compounds that undergo spin-state phase transitions (abrupt, first-order, hysteretic, multi-step,...)

In theory development, it is best to start with the simplest possible model and then gradually add new features to the model until (hopefully) arriving at a minimal model that can describe (almost) everything. Hence, I described how the two-state model can describe spin crossover. An Ising "spin" has values of +1 or -1, corresponding to high spin (HS) and low spin (LS) states. The "magnetic" field is half of the difference in Gibbs free energy between the two states. 

The model predicts equal numbers of HS and LS at the temperature T_1/2 = Delta H / Delta S, at which the "field" changes sign.

The two-state model is modified by adding Ising-type interactions between the “spins” (molecules). The Hamiltonian is then of the form

H = h(T) Sum_i sigma_i - Sum_<ij> J_ij sigma_i sigma_j,    with    h(T) = (Delta H - T Delta S)/2,

where sigma_i = +1 (-1) corresponds to molecule i being in the HS (LS) state.

 The temperature dependence in the field arises because this is an effective Hamiltonian.

The Ising-type interactions are due to elastic effects. The spin-state transition in the iron atom leads to changes in the Fe-N bond lengths (an increase of about 10 per cent in going from LS to HS), changing the size of the metal-ligand (ML6 ) complex. This affects the interactions (ionic, pi-pi, H-bond, van der Waals) between the complexes. The volume of the ML6 complex changes by about 30 per cent, but typically the volume of the crystal unit cell changes by only a few per cent. The associated relaxation energies are related to the J’s. Calculating them is non-trivial and will be discussed elsewhere. There are many competing and contradictory models for the elastic origin of the J’s.

In this post, I only consider nearest-neighbour ferromagnetic interactions. Later, I will consider antiferromagnetic interactions and further-neighbour interactions that lead to frustration. 

Slichter-Drickamer model

This model, introduced in 1972, is beloved by experimentalists, especially chemists, because it provides a simple analytic formula that can be fit to experimental data.

The system is assumed to be a thermodynamic mixture of HS and LS, with x = n_HS(T) the fraction of HS. Taking the LS state as the zero of free energy, the Gibbs free energy per mole is

G(x) = x Delta H + Gamma x (1 - x) - T [ x Delta S - R ( x ln x + (1 - x) ln(1 - x) ) ]

This is minimised as a function of x to give the temperature dependence of the HS population.
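As a check on how this works in practice, here is a minimal numerical sketch (illustrative parameter values only, roughly in the ranges quoted below) that minimises G(x) at each temperature to obtain the HS fraction. Setting Gamma = 0 recovers a smooth crossover, while Gamma larger than 2 R T_1/2 gives an abrupt, first-order transition.

# Slichter-Drickamer model: minimise G(x) over the HS fraction x at each temperature.
# G(x) = x*dH + Gamma*x*(1-x) - T*( x*dS - R*( x ln x + (1-x) ln(1-x) ) )
# Illustrative parameters only; taking the global minimum ignores hysteresis/metastability.
import numpy as np

R = 8.314      # gas constant, J/(mol K)
dH = 20e3      # enthalpy difference HS - LS, J/mol
dS = 100.0     # entropy difference HS - LS, J/(mol K), so T_1/2 = dH/dS = 200 K
Gamma = 4e3    # cooperativity, J/mol; first order requires Gamma > 2*R*T_1/2 ~ 3.3e3

x = np.linspace(1e-6, 1 - 1e-6, 2001)

def G(x, T):
    mixing = x * np.log(x) + (1 - x) * np.log(1 - x)
    return x * dH + Gamma * x * (1 - x) - T * (x * dS - R * mixing)

for T in [150, 180, 195, 200, 205, 220, 250]:
    n_HS = x[np.argmin(G(x, T))]
    print(f"T = {T:3d} K:  n_HS = {n_HS:.2f}")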

The model is a natural extension of the two-state model, by adding a single parameter, Gamma, which is sometimes referred to as the cooperativity parameter.

The model is equivalent to the mean-field treatment of a ferromagnetic Ising model, with Gamma=2zJ, where z is the number of nearest neighbours. Some chemists do not seem to be aware of this connection to Ising. The model is also identical to the theory of binary mixtures, such as discussed in Thermal Physics by Schroeder, Section 5.4.

Successes of the model.

  • Good quantitative agreement with experiments on many materials.
  • A first-order transition with hysteresis for T_1/2 < Tc = zJ.
  • A steep and continuous (abrupt) transition for T_1/2 slightly larger than Tc.
  • Values of Gamma are in the range 1-10 kJ/mol. Corresponding values of J are in the range 10-200 K, depending on what value of z is assumed.

Weaknesses of the model.

  • It cannot explain multi-step transitions.
  • Mean-field theory is quantitatively, and sometimes qualitatively, wrong, especially in one and two dimensions.
  • The description of hysteresis is an artefact of the mean-field theory, as discussed below.

Figure. Phase diagram of a ferromagnetic Ising model in a magnetic field. (Fig. 8.7.1, Chaikin and Lubensky). Vertical axis is the magnetic field, and the horizontal axis is temperature. Tc denotes the critical temperature, and the double-line denotes a first-order phase transition between paramagnetic phases where the magnetisation is parallel to the direction of the applied field.

Curves show the free energy as a function of the order parameter (magnetisation) in mean-field theory. The dashed lines are the lines of metastability deduced from these free-energy curves. Inside these lines, the free energy has two minima: the equilibrium one and a metastable one. The lines are sometimes referred to as spinodal curves.

The consequences of the metastability for a field sweep at constant temperature are shown in the Figure below, taken from Banerjee and Bar.

How does this relate to thermally induced spin-state transitions?

Consider the phase diagram shown above of a ferromagnetic Ising model in a magnetic field. The red and blue lines correspond to temperature scans for two SCO materials that have different values of the parameters Delta H and DeltaS.

The occurrence of qualitatively different behaviour is determined by where the lines intercept the temperature and field axes, i.e. the values of T_1/2 /J and Delta H/J. If the former is larger than Tc/J, as it is for the blue line, then no phase transition is observed. 

The parameter Delta H/J determines whether at low temperatures, the complete HS state is formed.

The figure below is a sketch of the temperature dependence of the population of HS for the red and blue cases.


Note that because of the non-zero slope of the red line, the temperature  T_1/2 is not the average of the temperatures at which the transition occurs on the up and down temperature sweeps.

Deconstructing hysteresis.

The physical picture above of metastability is an artefact (oversimplification) of mean-field theory. It predicts that an infinite system would take an infinite time to reach the equilibrium state from the metastable state.

(Aside: In the context of the corresponding discrete-choice models in economics, this has important and amusing consequences, as discussed by Bouchaud.)

In reality, the transition to the equilibrium state can occur via nucleation of finite domains or in some regimes via a perturbation with a non-zero wavevector. This is discussed in detail by Chaikin and Lubensky, chapter 4.

The consequence of this “metastability” for a first-order transition in an SCO system is that the width of the hysteresis region (in temperature) may depend on the rate at which the temperature is swept and on whether the system is allowed to relax before the magnetisation (fraction of HS) is measured at any temperature. Empirically, this is observed and has been highlighted by Brooker, albeit without reference to the theoretical subtleties discussed here. She points out that up to 2014, chemists seemed to have been oblivious to these issues and reported results without testing whether their observations depended on the sweep rate or whether they waited for relaxation.

(Aside. The dynamics are different for conserved and non-conserved order parameters. In a binary liquid mixture, the order parameter is conserved, i.e., the number of A and B atoms is fixed. In an SCO material, the number of HS and LS is not conserved.)

In the next post, I will discuss how an antiferromagnetic Ising model can give a two-step transition and models with frustrated interactions can give multi-step transitions.
