Saturday, October 25, 2025

Can AI solve quantum-many body problems?

I find it difficult to wade through all the hype about AI, along with the anecdotes about its failure to reliably answer basic questions.

Gerard Milburn kindly brought to my attention a nice paper that systematically addresses whether AI is useful as an aid (research assistant) for solving basic (but difficult) problems that condensed matter theorists care about.

CMT-Benchmark: A Benchmark for Condensed Matter Theory Built by Expert Researchers

The abstract is below.

My only comment is one of perspective. Is the cup half full or half empty? Do we emphasise the failures or the successes?

The optimists among us will claim that the success in solving a small number of these difficult problems shows the power and potential of AI. It is just a matter of time before LLMs can solve most of these problems, and we will see dramatic increases in research productivity (for example, a reduction in the time taken to complete a project).

The pessimists and skeptically oriented will claim that the failures highlight the limitations of AI, particularly when training data sets are small. We are still a long way from replacing graduate students with AI bots (or at least using AI to train students in the first year of their PhD).

What do you think? Should this study lead to optimism, pessimism, or just wait and see?

----------

Large language models (LLMs) have shown remarkable progress in coding and math problem-solving, but evaluation on advanced research-level problems in hard sciences remains scarce. To fill this gap, we present CMT-Benchmark, a dataset of 50 problems covering condensed matter theory (CMT) at the level of an expert researcher. Topics span analytical and computational approaches in quantum many-body, and classical statistical mechanics. The dataset was designed and verified by a panel of expert researchers from around the world. We built the dataset through a collaborative environment that challenges the panel to write and refine problems they would want a research assistant to solve, including Hartree-Fock, exact diagonalization, quantum/variational Monte Carlo, density matrix renormalization group (DMRG), quantum/classical statistical mechanics, and model building. We evaluate LLMs by programmatically checking solutions against expert-supplied ground truth. We developed machine-grading, including symbolic handling of non-commuting operators via normal ordering. They generalize across tasks too. Our evaluations show that frontier models struggle with all of the problems in the dataset, highlighting a gap in the physical reasoning skills of current LLMs. Notably, experts identified strategies for creating increasingly difficult problems by interacting with the LLMs and exploiting common failure modes. The best model, GPT5, solves 30% of the problems; average across 17 models (GPT, Gemini, Claude, DeepSeek, Llama) is 11.4±2.1%. Moreover, 18 problems are solved by none of the 17 models, and 26 by at most one. These unsolved problems span Quantum Monte Carlo, Variational Monte Carlo, and DMRG. Answers sometimes violate fundamental symmetries or have unphysical scaling dimensions. We believe this benchmark will guide development toward capable AI research assistants and tutors.

Monday, October 20, 2025

Undergraduates need to learn about the Ising model

A typical undergraduate course on statistical mechanics is arguably misleading because, unintentionally, it does not tell students several important (and related) things.

Statistical mechanics is not just about how to calculate thermodynamic properties of a collection of non-interacting particles.

A hundred years ago, many physicists did not believe that statistical mechanics could describe phase transitions. Arguably, this lingering doubt only ended fifty years ago with Wilson's development of renormalisation group theory.

It is about emergence: how microscopic properties are related to macroscopic properties.

Leo Kadanoff commented, "Starting around 1925, a change occurred: With the work of Ising, statistical mechanics began to be used to describe the behaviour of many particles at once."

When I came to UQ 25 years ago, I taught PHYS3020 Statistical Mechanics a couple of times. To my shame, I never discussed the Ising model. There is a nice section on it in the course textbook, Thermal Physics: An Introduction, by Daniel Schroeder. I guess I did not think there was time to "fit it in" and back then, I did not appreciate how important the Ising model is. This was a mistake.

Things have changed for the better due to my colleagues Peter Jacobson and Karen Kheruntsyan. They now include one lecture on the model, and students complete a computational assignment in which they write a Monte Carlo code to simulate the model.
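For readers wondering what such an assignment involves, below is a minimal sketch of a Metropolis Monte Carlo code for the two-dimensional Ising model. It is only an illustration (not the actual PHYS3020 assignment code), and the lattice size, temperature, and number of sweeps are arbitrary choices.

import numpy as np

def metropolis_ising(L=16, T=2.0, n_sweeps=1000, J=1.0, seed=0):
    """Metropolis Monte Carlo for the 2D Ising model with periodic boundaries.
    Returns the magnetisation per spin after each sweep (units with k_B = 1)."""
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=(L, L))
    beta = 1.0 / T
    mags = []
    for _ in range(n_sweeps):
        for _ in range(L * L):
            i, j = rng.integers(0, L, size=2)
            # sum of the four nearest neighbours (periodic boundary conditions)
            nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2.0 * J * spins[i, j] * nn  # energy cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                spins[i, j] *= -1
        mags.append(spins.mean())
    return np.array(mags)

# Below the exact Tc of about 2.27 J, |m| should settle near a non-zero value.
m = metropolis_ising()
print("average |m| over the last 200 sweeps:", np.abs(m[-200:]).mean())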

This year, I am giving the lecture on the model. Here are my slides and what I will write on the whiteboard or document viewer in the lecture.

Friday, October 17, 2025

One hundred years of Ising

In 1925, Ising published his paper on the solution of the model in one dimension. An English translation is here: https://www.hs-augsburg.de/~harsch/anglica/Chronology/20thC/Ising/isi_fm00.html

Coincidentally, next week I am giving a lecture on the Ising model to an undergraduate class in statistical mechanics. To flesh out the significance and relevance of the model, here are some of the interesting articles I have been looking at:

The Ising model celebrates a century of interdisciplinary contributions, Michael W. Macy, Boleslaw K. Szymanski and Janusz A. Hołyst

This mostly discusses the relevance of the model to understanding basic problems in sociology, including its relation to the classic Schelling model for social segregation.

The Ising model: highlights and perspectives, Christof Külske

This mostly discusses how the model is central to some work in mathematical physics and probability theory.

The Fate of Ernst Ising and the Fate of his Model, Thomas Ising, Reinhard Folk, Ralph Kenna, Bertrand Berche, Yurij Holovatch.

This includes some nice memories of Ising from his son, Thomas.

Aside: I wanted a plot of the specific heat for the one-dimensional model. According to Google AI "In a 1D Ising model with no external magnetic field, the specific heat is zero at all temperatures."
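For the record, this is incorrect. For H = -J sum_i s_i s_{i+1} the exact specific heat per spin is c(T) = k_B (J/k_B T)^2 / cosh^2(J/k_B T), which is non-zero at all finite temperatures and has a broad maximum near k_B T of about 0.8 J. A few lines of Python (numpy and matplotlib are my assumptions, not from the post) produce the plot:

import numpy as np
import matplotlib.pyplot as plt

# Exact specific heat per spin of the 1D Ising model, H = -J sum_i s_i s_{i+1},
# in zero field, with k_B = 1: c(T) = (J/T)^2 / cosh(J/T)^2.
J = 1.0
T = np.linspace(0.05, 5.0, 500)
c = (J / T) ** 2 / np.cosh(J / T) ** 2

plt.plot(T, c)
plt.xlabel("T / J")
plt.ylabel("c / k_B")
plt.title("1D Ising model: the specific heat is non-zero, with a broad maximum")
plt.show()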

Wednesday, October 8, 2025

2025 Nobel Prize in Physics: Macroscopic quantum effects

John Clarke, Michel H. Devoret, and John M. Martinis received the prize  “for the discovery of macroscopic quantum mechanical tunnelling and energy quantisation in an electric circuit.”

The work was published in three papers in PRL in 1984 and 1985. The New York Times has a nice discussion of the award, including comments from Clarke, Martinis, Tony Leggett, and Steve Girvin.

There is some rich, subtle, and beautiful physics here. As a theorist, I comment on the conceptual and theoretical side, but don't want to minimise that doing the experiments was a technical breakthrough.

The experiments were directly stimulated by Tony Leggett, who, beginning in the late 70s, championed the idea that Josephson junctions and SQUIDs could be used to test whether quantum mechanics was valid at the macroscopic level. Many in the quantum foundations community were sceptical. Leggett and Amir Caldeira performed some beautiful, concrete, realistic calculations of the effect of decoherence and dissipation on quantum tunneling in SQUIDs. The results suggested that macroscopic tunneling should be observable.

Aside: Leggett rightly received a Nobel in 2003 for his work on the theory of superfluid 3He. Nevertheless, I believe his work on quantum foundations is even more significant.

Subtle point 1. What do we mean by a macroscopic quantum state?

It is commonly said that superconductors and superfluids are in a macroscopic quantum state. Signatures are the quantisation of magnetic flux in a superconducting cylinder and how the current through a Josephson junction oscillates as a function of the magnetic flux through the junction. I discuss this in the chapter on Quantum Matter in my Very Short Introduction.

Leggett argued that these experiments are explained by the Josephson equations, which treat the phase of the superconducting order parameter as a classical variable. For example, in a SQUID, it satisfies a classical dynamical equation. 

If the state is truly quantum, then the phase variable should be quantised.

Aside: a nice microscopic derivation, starting from BCS theory and using path integrals, of the effective action to describe the quantum dynamics was given in 1982 by Vinay Ambegaokar, Ulrich Eckern, and Gerd Schön.

Subtle point 2. There are different signatures of quantum theory: energy level quantisation, tunnelling, coherence (interference), and entanglement.

In 1984-5, Clarke, Devoret, and Martinis observed the first two. Macroscopic quantum coherence is harder to detect and was only observed in 2000.

In a nice autobiographical article, Leggett commented in 2020:
Because of the strong prejudice in the quantum foundations community that it would never be possible to demonstrate characteristically quantum-mechanical effects at the macroscopic level, this assertion made us [Leggett and Garg, 1985] the target of repeated critical comments over the next few years. Fortunately, our experimental colleagues were more open-minded, and several groups started working toward a meaningful experiment along the lines we had suggested, resulting in the first demonstrations (29, 30) of MQC [Macroscopic Quantum Coherence] in rf SQUIDs (by then rechristened flux qubits) at the turn of the century. However, it would not be until 2016 that an experiment along the lines we had suggested (actually using a rather simpler protocol than our original one) was carried out (31) and, to my mind, definitively refuted macrorealism at that level.  
I find it rather amusing that nowadays the younger generation of experimentalists in the superconducting qubit area blithely writes papers with words like “artificial atom” in their titles, apparently unconscious of how controversial that claim once was.

Two final comments on the sociology side.

Superconductivity and superfluidity have now been the basis for Nobel Prizes in six and four different years, respectively.

The most widely cited of the three PRLs that were the basis of the Prize is the one on quantum tunnelling, with about 500 citations on Google Scholar. (In contrast, Devoret has more than 20 other papers that are more widely cited.) From 1986 to 1992, it was cited about a dozen times per year. Between 1993 and 2001, it was only cited a total of 30 times. Since 2001, it has been cited about 20 times per year.

This is just one more example of how citation rates are a poor measure of the significance of work and a predictor of future success.

Monday, October 6, 2025

Nobel Prize predictions for 2025

This week, Nobel Prizes will be announced. I have not done predictions since 2020. This is a fun exercise. It is also good to reflect on what has been achieved, including outside our own areas, and on big advances from the past that we may now take for granted.

Before writing this I looked at suggestions from readers of Doug Natelson's blog, nanoscale views, an article in Physics World, predictions from Clarivate based on citations, and recent recipients of the Wolf Prize.

Please enter your own predictions below.

Although we know little about how the process actually works or the explicit criteria used, I have a few speculative suggestions and observations.

1. The Wolf Prize is often a precursor.

2. Every now and then, they seem to surprise us.

3. Every few years, the physics committee seems to go for something technological, sometimes arguably outside physics, perhaps to remind people how important physics is to modern technology and other areas of science.

4. They seem to spread the awards around between different areas of physics.

5. Theory only gets awards when it has led to well-established experimental observations. Brilliant theoretical discoveries that motivate large research enterprises (more theory and experimental searches) are not good enough. This is why predictions based on citation numbers may be misleading.

6. Once an award has been made on one topic, it is unlikely that there will be another award for a long time, if ever, on that same topic. In other words, there is a high bar for a second award.

7. I don't think the logic is to pick an important topic and then choose who should get the prize for the topic. This approach works against topics where many researchers independently made contributions that were all important. The awardee needs to be a standout who won't be a debatable choice.

What do you think of these principles?

For some of the above reasons, I discuss below why I am sceptical about some specific predictions.

My top prediction for physics is Metamaterials with negative refractive index, going to John Pendry (theory) and David Smith (experiment). This is a topic I know little about.

Is it just a matter of time before twisted bilayer graphene wins a prize? This might go to Allan MacDonald (theory) and Pablo Jarillo-Herrero (experiment). They recently received a Wolf Prize. One thing that convinced me of the importance of this discovery was a preprint on moiré WSe2 with beautiful phase diagrams such as this one.


The level of control is truly amazing. Helpful background is the recent Physics Today article by Bernevig and Efetov.

This is big enough to overcome 6. and the earlier prize for graphene.

Unfortunately, my past prediction/wish of Kondo and heavy fermions won't happen as Jun Kondo died in 2022. This suggestion also always went against Principle 6, with the award to Ken Wilson citing his solution of the Kondo problem.

The prediction of Berry and Aharonov for topological phases in quantum mechanics is reasonable, except for questions about historical precursors.

The prediction of topological insulators is going against 6. and the award to Haldane in 2016.

Clarivate's prediction of DiVincenzo and Loss (for qubits based on electron spin in quantum dots) goes against 5. and 7. It is just one of many competing proposals for a scalable quantum computer, and a large-scale device is still elusive.

Predictions of a prize for quantum algorithms (Shor, Deutsch, Brassard, Bennett) go against 5. 

Chemistry 

I don't know enough chemistry to make meaningful predictions. On the other hand, in 2019 I did correctly predict John Goodenough for lithium batteries. I do like the prediction from Clarivate for biomolecular condensates (Brangwynne, Hyman, and Rosen). I discussed them briefly in my review article on emergence.

What do you think about my 7 "principles"?

What are your predictions?

Tuesday, September 30, 2025

Elastic frustration in molecular crystals

Crystals of large molecules exhibit diverse structures. In other words, the geometric arrangements of the molecules relative to one another are complex. Given a specific molecule, theoretically predicting its crystal structure is a challenge and is the basis of a competition.

One of the reasons the structures are rich and the theoretical problem is so challenging is that there are typically many different interactions between different molecules, including electrostatic, hydrogen bonding, pi-pi,...

Another challenge is to understand the elastic and plastic properties of the crystals.

Some of my UQ colleagues recently published a paper that highlights some of the complexity.

Origins of elasticity in molecular materials

Amy J. Thompson, Bowie S. K. Chong, Elise P. Kenny, Jack D. Evans, Joshua A. Powell, Mark A. Spackman, John C. McMurtrie, Benjamin J. Powell, and Jack K. Clegg

They used calculations based on Density Functional Theory (DFT) to separate the contributions to the elasticity from the different interactions between the molecules. The figure below shows the three dominant interactions in the family of crystals that they consider.

The figure below shows the energy of interaction between a pair of molecules for the different interactions.
Note the purple vertical bar, which is the value of the coordinate in the equilibrium geometry of the whole crystal. The width of the bar represents variations in both lengths that occur in typical elastic experiments.
What is striking to me is the large difference between the positions of the potential minima for the individual interactions and the minima for the combined interactions.

This is an example of frustration: it is not possible to simultaneously minimise the energy of all the individual pairwise interactions. They are competing with one another.

A toy model illustrates the essential physics. I came up with this model partly motivated by similar physics that occurs in "spin-crossover" materials.


The upper (lower) spring has equilibrium length a (b) and spring constant k (k'). In the harmonic approximation, the total elastic energy as a function of the centre-to-centre separation x of the two molecules (each of radius R) is

E(x) = (1/2) k (x - a - 2R)^2 + (1/2) k' (x - b)^2.

The equilibrium separation of the two molecules is found by minimising E(x),

x_0 = [k (a + 2R) + k' b] / (k + k'),

which is intermediate between a + 2R and b. This illustrates the elastic frustration. Neither of the springs (bonds) is at its optimum length.

The system is stable provided that k + k' is positive. Thus, it is not necessary that both k and k' be positive. The possibility that one of the k's is negative is relevant to reality. Thompson et al. showed that the individual molecular interaction energies are described by Morse potentials. If one is far enough from the minimum of the potential, the local curvature can be negative. 
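A quick numerical check of the toy model (the parameter values are made up purely for illustration) confirms that the equilibrium separation lies between a + 2R and b, so that neither spring is at its natural length:

import numpy as np

# Toy model of elastic frustration: two springs with different natural lengths.
# Parameter values are illustrative only.
a, b, R = 1.0, 2.0, 0.3   # natural lengths and molecular radius
k, kp = 1.0, 0.5          # spring constants (kp stands for k')

def energy(x):
    """Harmonic elastic energy as a function of the centre-to-centre separation x."""
    return 0.5 * k * (x - a - 2 * R) ** 2 + 0.5 * kp * (x - b) ** 2

# Analytic minimum: a weighted average of the two preferred separations.
x0 = (k * (a + 2 * R) + kp * b) / (k + kp)

# Numerical minimum on a fine grid, as a check.
x = np.linspace(0.0, 4.0, 400001)
x_num = x[np.argmin(energy(x))]

print("analytic x0 =", x0, "  numerical x0 =", x_num)
print("strain of upper spring:", x0 - (a + 2 * R), "  strain of lower spring:", x0 - b)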

Monday, September 22, 2025

Turbulent flows in active matter

The power of toy models and effective theories in describing and understanding emergent phenomena is illustrated by a 2012 study of the turbulence in the fluid flow of swimming bacteria.

Meso-scale turbulence in living fluids

Henricus H. Wensink, Jörn Dunkel, Sebastian Heidenreich, Knut Drescher, Raymond E. Goldstein, Hartmut Löwen, and Julia M. Yeomans

They found that a qualitative and quantitative description of observations of flow patterns, energy spectra, and velocity structure functions was given by a toy model of self-propelled rods (similar to that proposed for flocking of birds) and a minimal continuum model for incompressible flow. For the toy model, they presented a phase diagram (shown below) as a function of the volume fraction of the fluid occupied by rods and the aspect ratio of the rods. There were six distinct phases: dilute state (D), jamming (J), swarming (S), bionematic (B), turbulent (T), and laned (L). The turbulent state occurs for high filling fractions and intermediate aspect ratios, covering typical values for bacteria.


The horizontal axis is the volume fraction, going from 0 to 1.

The figure below compares the experimental data (top right) for the vorticity and the toy model (lower left) and the continuum model (lower right).

Regarding this work, Tom McLeish highlighted the importance of the identification of the relevant mesoscopic scale and the power of toy models and effective theories in the following beautiful commentary taken from his book, The Poetry and Music of Science

“Individual ‘bacteria’ are represented in this simulation by simple rod-like structures that possess just the two properties of mutual repulsion, and the exertion of a constant swimming force along their own length. The rest is simply calculation of the consequences. No more detailed account than this is taken of the complexities within a bacterium. It is somewhat astonishing that a model of the intermediate elemental structures, on such parsimonious lines, is able to reproduce the complex features of the emergent flow structure. 

Impossible to deduce inductively the salient features of the underlying physics from the fluid flow alone—creative imagination and a theoretical scalpel are required: the first to create a sufficient model of reality at the underlying and unseen scale; the second to whittle away at its rough and over-ornate edges until what is left is the streamlined and necessary model. To ‘understand’ the turbulent fluid is to have identified the scale and structure of its origins. To look too closely is to be confused with unnecessary small detail, too coarsely and there is simply an echo of unexplained patterns.”

Thursday, September 18, 2025

Confusing bottom-up and top-down approaches to emergence


Due to emergence, reality is stratified. This is reflected in the existence of semi-autonomous scientific disciplines and subdisciplines. A major goal is to understand the relationship between different strata. For example, how is chemistry related to physics? How is genetics related to cell biology?

Before describing two alternative approaches, top-down and bottom-up, I need to point out that in different fields these terms are used in opposite senses. That can be confusing!

In the latest version of my review article on emergence, I employ the same terminology traditionally used in condensed matter physics, chemistry, and biology. It is also consistent with the use of the term “downward causation” in philosophy. 

Top-down means going from long-distance scales to short-distance scales, i.e., going down in the diagrams shown in the figure above. In contrast, in the quantum field theory of elementary particles and fields (high-energy physics), “top-down” means the opposite, i.e., going from short to long length scales. This is because practitioners in that field tend to draw diagrams with high energies at the top and low energies at the bottom.

Bottom-up approaches aim to answer the question: how do properties observed at the macroscale emerge from the microscopic properties of the system? 
History suggests that this question may often be best addressed by identifying the relevant mesoscale at which modularity is observed and connecting the micro- to the meso- and connecting the meso- to the macro. For example, high-energy degrees of freedom can be "integrated out" to give an effective theory for the low-energy degrees of freedom.

Top-down approaches try to surmise something about the microscopic from the macroscopic. This has a long and fruitful history, albeit probably with many false starts that we may not hear about, unless we live through them or read history books. Kepler's snowflakes are an early example. Before people were completely convinced of the existence of atoms, the study of crystal facets and of Brownian motion provided hints of the atomic structure of matter. Planck deduced the existence of the quantum from the thermodynamics of black-body radiation, i.e. from macroscopic properties. Arguably, the first definitive determination of Avogadro's number was from Perrin's experiments on Brownian motion, which involved mesoscopic measurements. Comparing classical statistical mechanics to bulk thermodynamic properties gave hints of an underlying quantum structure to reality. The Sackur-Tetrode equation for the entropy of an ideal gas hinted at the quantisation of phase space. The Gibbs paradox hinted that fundamental particles are indistinguishable. The third law of thermodynamics hints at quantum degeneracy. Pauling’s proposal for the structure of ice was based on macroscopic measurements of its residual entropy. Pasteur deduced the chirality of molecules from observations of the facets in crystals of tartaric acid. Sometimes a “top-down” approach means one that focuses on the meso-scale and ignores microscopic details.

The top-down and bottom-up approaches should not be seen as exclusive or competitive, but rather complementary. Their relative priority or feasibility depends on the system of interest and the amount of information and techniques available to an investigator. Coleman has discussed the interplay of emergence and reductionism in condensed matter. In biology, Mayr advocated a “dual level of analysis” for organisms. In social science, Schelling discussed the interplay of the behaviour of individuals and the properties of social aggregates. In a classic study of complex organisations in business, understanding this interplay was termed differentiation and integration.

I thank Jeremy Schmit for requesting clarification of this terminology.

Friday, September 12, 2025

The role of superconductivity in development of the Standard Model

In 1986, Steven Weinberg published an article, Superconductivity for Particular Theorists, in which he stated

"No one did more than Nambu to bring the idea of spontaneously broken symmetries to the attention of elementary particle physicists. And, as he acknowledged in his ground-breaking 1960 article  "Axial Current Conservation in Weak Interactions'', Nambu was guided in this work by an analogy with the theory of superconductivity,..."

In the 1960 PRL, referenced by Weinberg, Nambu states that in the BCS theory, as refined by Bogoliubov, [and Anderson]

"gauge invariance, the energy gap, and the collective excitations are logically related to each other as was shown by the author. [Y. Nambu, Phys. Rev. 117, 648 (1960)] In the present case we have only to replace them by (chiral) (gamma_5) invariance, baryon mass, and the mesons." 

This connection is worked out explicitly in two papers in 1961. The first is by Y. Nambu and G. Jona-Lasinio.

They acknowledge, 

"that the model treated here is not realistic enough to be compared with the actual nucleon problem. Our purpose was to show that a new possibility exists for field theory to be richer and more complex than has been hitherto envisaged,"

Hence, I consider this to be a toy model for an emergent phenomenon.


The model consists of a massless fermion field with a quartic interaction that has chiral invariance, i.e., it is unchanged by global gauge transformations associated with the gamma_5 matrix. (The Lagrangian is given below.) At the mean-field level, this symmetry is broken. Excitations include massless bosons (associated with the symmetry breaking and similar to those found earlier by Goldstone) and bound fermion pairs. It was conjectured that these could be analogues of mesons and baryons, respectively. The model was proposed before quarks and QCD. Now, the fermion degrees of freedom would be identified with quarks, and the model illustrates the dynamical generation of quark masses. When generalised to include SU(2) or SU(3) symmetry, the model is considered to be an effective field theory for QCD, such as chiral effective theory.
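For reference, a standard modern way of writing the Nambu-Jona-Lasinio Lagrangian (conventions differ between papers, and the original 1961 paper uses a slightly different notation) is

L = psibar (i gamma^mu partial_mu) psi + g [ (psibar psi)^2 + (psibar i gamma_5 psi)^2 ]

The quartic interaction is invariant under the chiral transformation psi -> exp(i alpha gamma_5) psi, but the mean-field (BCS-like) solution generates a non-zero fermion mass that breaks this symmetry.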

Monday, September 8, 2025

Multi-step spin-state transitions in organometallics and frustrated antiferromagnetic Ising models

In previous posts, I discussed how "spin-crossover material" is a misnomer because many of these materials do not undergo crossovers but rather phase transitions due to collective effects. Furthermore, they exhibit rich behaviours, including hysteresis, incomplete transitions, and multiple-step transitions. Ising models can capture some of these effects.

Here, I discuss how an antiferromagnetic Ising model with frustrated interactions can give multi-step transitions. This has been studied previously by Paez-Espejo, Sy and Boukheddaden, and by my UQ colleagues Jace Cruddas and Ben Powell. They start with a lattice "ball and spring" model and derive Ising models with an infinite-range ferromagnetic interaction and short-range antiferromagnetic interactions. They show that when the range of these interactions (and thus the frustration) is increased, more and more steps are observed.

Here, I do something simpler to illustrate some key physics and some subtleties and cautions.

fcc lattice

Consider the antiferromagnetic Ising model on the face-centred-cubic lattice in a magnetic field. 

[Historical trivia: the model was studied by William Shockley back in 1938, in the context of understanding alloys of gold and copper.]

The picture below shows a tetrahedron of four nearest neighbours in the fcc lattice.

Even with just nearest-neighbour interactions, the lattice is frustrated. On a tetrahedron, you cannot satisfy all six AFM interactions. Four bonds are satisfied, and two are unsatisfied.

The phase diagram of the model was studied using Monte Carlo by Kammerer et al. in 1996. It is shown above as a function of temperature and field. All the transition lines are (weakly) first-order.

The AB phase has AFM order within the [100] planes. It has an equal number of up and down spins.

The A3B phase has alternating FM and AFM order between neighbouring planes. Thus, 3/4 of the spins have the same direction as the magnetic field.

The stability of these ordered states is subtle. At zero temperature, both the AB and A3B states are massively degenerate. For a system of 4 x L^3 spins, there are 3 x 2^(2L) AB states and 6 x 2^L A3B states. At finite temperature, the system exhibits “order by disorder”.

On the phase diagram, I have shown three straight lines (blue, red, and dashed-black) representing a temperature sweep for three different spin-crossover systems. The "field" is given by h=1/2(Delta H - T Delta S). In the lower panel, I have shown the temperature dependence of the High Spin (HS) population for the three different systems. For clarity, I have not shown the effects of the hysteresis associated with the first-order transitions.

If Delta H is smaller than the values shown in the figure, then at low temperatures, the spin-crossover system will never reach the complete low-spin state.

Main points.

Multiple steps are possible even in a simple model. This is because frustration stabilises new phases in a magnetic field. Similar phenomena occur in other frustrated models, such as the triangular lattice, the J1-J2 model on a chain or a square lattice.

The number of steps may change depending on Delta S. This is because a temperature sweep traverses the field-temperature phase diagram asymmetrically.

Caution.

Fluctuations matter.
The mean-field theory phase diagram was studied by Beath and Ryan. Their phase diagram is below. Clearly, there are significant qualitative differences, particularly in the stability of the A3B phase.
The transition temperature at zero field is 3.5 J, compared to the value of 1.4J from Monte Carlo.


Monte Carlo simulations may be fraught.
Because of the many competing ordered states associated with frustration, Kammerer et al. note that “in a Monte Carlo simulation one needs unusually large systems in order observe the correct asymptotic behaviour, and that the effect gets worse with decreasing temperature because of the proximity of the phase transition to the less ordered phase at T=0”. 

Open questions.

The example above hints at what the essential physics may be and how frustrated Ising models may capture it. However, to definitively establish the connection with real materials, several issues need to be resolved.

1. Show definitively how elastic interactions can produce the necessary Ising interactions. In particular, derive a formula for the interactions in terms of elastic properties of the high-spin and low-spin states. How do their structural differences, and the associated bond stretches or compressions, affect the elastic energy? What is the magnitude, range, and direction of the interactions?

[n.b. Different authors have different expressions for the Ising interactions for a range of toy models, using a range of approximations. It also needs to be done for a general atomic "force field".]

2. For specific materials, calculate the Ising interactions from a DFT-based method. Then show that the relevant Ising model does produce the steps and hysteresis observed experimentally.


Tuesday, September 2, 2025

"Ferromagnetic" Ising models for spin-state transitions in organometallics

In recent posts, I discussed how "spin crossover" is a misnomer for the plethora of organometallic compounds that undergo spin-state phase transitions (abrupt, first-order, hysteretic, multi-step,...)

In theory development, it is best to start with the simplest possible model and then gradually add new features to the model until (hopefully) arriving at a minimal model that can describe (almost) everything. Hence, I described how the two-state model can describe spin crossover. An Ising "spin" has values of +1 or -1, corresponding to high spin (HS) and low spin (LS) states. The "magnetic" field is half of the difference in Gibbs free energy between the two states. 

The model predicts equal numbers of HS and LS at a temperature T_1/2 = Delta H / Delta S, where Delta H and Delta S are the differences in enthalpy and entropy between the HS and LS states.

The two-state model is modified by adding Ising-type interactions between the “spins” (molecules). The Hamiltonian is then of the form

H = - sum_{ij} J_ij sigma_i sigma_j + h(T) sum_i sigma_i,

where sigma_i = +1 (HS) or -1 (LS) and h(T) = (Delta H - T Delta S)/2. The temperature dependence of the field arises because this is an effective Hamiltonian.

The Ising-type interactions are due to elastic effects. The spin-state transition in the iron atom leads to changes in the Fe-N bond lengths (an increase of about 10 per cent in going from LS to HS), changing the size of the metal-ligand (ML6 ) complex. This affects the interactions (ionic, pi-pi, H-bond, van der Waals) between the complexes. The volume of the ML6 complex changes by about 30 per cent, but typically the volume of the crystal unit cell changes by only a few per cent. The associated relaxation energies are related to the J’s. Calculating them is non-trivial and will be discussed elsewhere. There are many competing and contradictory models for the elastic origin of the J’s.

In this post, I only consider nearest-neighbour ferromagnetic interactions. Later, I will consider antiferromagnetic interactions and further-neighbour interactions that lead to frustration. 

Slichter-Drickamer model

This model, introduced in 1972, is beloved by experimentalists, especially chemists, because it provides a simple analytic formula that can be fit to experimental data.

The system is assumed to be a thermodynamic mixture of HS and LS. x = n_HS(T) is the fraction of HS. The Gibbs free energy (per mole, up to a constant) is given by

G(x) = x Delta H - T x Delta S + Gamma x(1-x) + RT [x ln x + (1-x) ln(1-x)].

This is minimised as a function of x to give the temperature dependence of the HS population.

The model is a natural extension of the two-state model, by adding a single parameter, Gamma, which is sometimes referred to as the cooperativity parameter.

The model is equivalent to the mean-field treatment of a ferromagnetic Ising model, with Gamma=2zJ, where z is the number of nearest neighbours. Some chemists do not seem to be aware of this connection to Ising. The model is also identical to the theory of binary mixtures, such as discussed in Thermal Physics by Schroeder, Section 5.4.
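As a concrete illustration, here is a minimal sketch that minimises G(x) numerically at each temperature to obtain n_HS(T). The parameter values are made up, chosen so that T_1/2 = Delta H/Delta S = 200 K and Gamma/(2R) is about 240 K > T_1/2, which gives a first-order transition.

import numpy as np

# Slichter-Drickamer free energy per mole (constant terms dropped).
# Parameter values are illustrative only.
R = 8.314          # gas constant, J / (mol K)
Delta_H = 12000.0  # J / mol
Delta_S = 60.0     # J / (mol K)
Gamma = 4000.0     # J / mol

def G(x, T):
    """Gibbs free energy of a mixture with high-spin fraction x at temperature T (K)."""
    mixing = R * T * (x * np.log(x) + (1 - x) * np.log(1 - x))
    return x * Delta_H - T * x * Delta_S + Gamma * x * (1 - x) + mixing

x = np.linspace(1e-6, 1 - 1e-6, 20001)
for T in [150, 180, 199, 201, 220, 250]:
    x_eq = x[np.argmin(G(x, T))]   # global minimum; ignores metastability/hysteresis
    print(f"T = {T:3d} K   n_HS = {x_eq:.3f}")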

Successes of the model.

good quantitative agreement with experiments on many materials.

a first-order transition with hysteresis for T_1/2 < Tc = zJ.

a steep and continuous (abrupt) transition for T_1/2 slightly larger than Tc.

Values of Gamma are in the range 1-10 kJ/mol. Corresponding values of J are in the range 10-200 K, depending on what value of z is assumed.

Weaknesses of the model.

It cannot explain multi-step transitions.

Mean-field theory is quantitatively, and sometimes qualitatively, wrong, especially in one and two dimensions.

The description of hysteresis is an artefact of the mean-field theory, as discussed below.

Figure. Phase diagram of a ferromagnetic Ising model in a magnetic field. (Fig. 8.7.1, Chaikin and Lubensky). The vertical axis is the magnetic field, and the horizontal axis is temperature. Tc denotes the critical temperature, and the double line denotes a first-order phase transition between phases in which the magnetisation is parallel or antiparallel to the direction of the applied field.

Curves show the free energy as a function of the order parameter (magnetisation) in mean-field theory. The dashed lines are the lines of metastability deduced from these free-energy curves. Inside these lines, the free energy has two minima: the equilibrium one and a metastable one. The lines are sometimes referred to as spinodal curves.

The consequences of the metastability for a field sweep at constant temperature are shown in the Figure below, taken from Banerjee and Bar.
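A sketch of the kind of calculation behind such a figure, assuming the standard mean-field self-consistency equation m = tanh[(zJ m + h)/T] (with k_B = 1): sweep the field up and then down at fixed T < Tc, starting each field step from the previous solution, so that the system follows a metastable branch until it disappears at the spinodal. The parameter values are illustrative.

import numpy as np

# Mean-field ferromagnetic Ising model in a field: m = tanh((zJ*m + h)/T), with k_B = 1.
zJ = 1.0
T = 0.7 * zJ   # below the mean-field Tc = zJ

def solve_m(h, m0):
    """Iterate the self-consistency equation from m0; converges to a local minimum."""
    m = m0
    for _ in range(5000):
        m = np.tanh((zJ * m + h) / T)
    return m

fields = np.linspace(-0.2, 0.2, 81)

m_up = []
m = -1.0   # start fully 'down' and sweep the field upwards
for h in fields:
    m = solve_m(h, m)
    m_up.append(m)

m_down = []
m = +1.0   # start fully 'up' and sweep the field downwards
for h in fields[::-1]:
    m = solve_m(h, m)
    m_down.append(m)
m_down = m_down[::-1]

# Near h = 0 the two sweeps give different m: a hysteresis loop bounded by the spinodals.
for h, mu, md in zip(fields, m_up, m_down):
    if abs(h) < 0.05:
        print(f"h = {h:+.3f}   m (up sweep) = {mu:+.3f}   m (down sweep) = {md:+.3f}")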

How does this relate to thermally induced spin-state transitions?

Consider the phase diagram shown above of a ferromagnetic Ising model in a magnetic field. The red and blue lines correspond to temperature scans for two SCO materials that have different values of the parameters Delta H and Delta S.

The occurrence of qualitatively different behaviour is determined by where the lines intercept the temperature and field axes, i.e. the values of T_1/2 /J and Delta H/J. If the former is larger than Tc/J, as it is for the blue line, then no phase transition is observed. 

The parameter Delta H/J determines whether at low temperatures, the complete HS state is formed.

The figure below is a sketch of the temperature dependence of the population of HS for the red and blue cases.


Note that because of the non-zero slope of the red line, the temperature  T_1/2 is not the average of the temperatures at which the transition occurs on the up and down temperature sweeps.

Deconstructing hysteresis.

The physical picture above of metastability is an artefact (oversimplification) of mean-field theory. It predicts that an infinite system would take an infinite time to reach the equilibrium state from the metastable state.

(Aside: In the context of the corresponding discrete-choice models in economics, this has important and amusing consequences, as discussed by Bouchaud.)

In reality, the transition to the equilibrium state can occur via nucleation of finite domains or in some regimes via a perturbation with a non-zero wavevector. This is discussed in detail by Chaikin and Lubensky, chapter 4.

The consequence of this “metastability” for a first-order transition in an SCO system is that the width of the hysteresis region (in temperature) may depend on the rate at which the temperature is swept and whether the system is allowed to relax before the magnetisation (fraction of HS) is measured at any temperature. Empirically, this is observed and has been highlighted by Brooker, albeit without reference to the theoretical subtleties I am highlighting here. She points out that up to 2014, chemists seemed to have been oblivious to these issues and reported results without testing whether their observations depended on the sweep rate or whether they waited for relaxation.

(Aside. The dynamics are different for conserved and non-conserved order parameters. In a binary liquid mixture, the order parameter is conserved, i.e., the number of A and B atoms is fixed. In an SCO material, the number of HS and LS is not conserved.)

In the next post, I will discuss how an antiferromagnetic Ising model can give a two-step transition and models with frustrated interactions can give multi-step transitions.

Friday, August 22, 2025

The two-state model for spin crossover in organometallics

Previously, I discussed how spin-crossover is a misnomer for organometallic compounds and proposed that an effective Hamiltonian to describe the rich states and phase transitions is an Ising model in a "magnetic field".

Here, I introduce the two-state model, i.e., the model without the Ising interactions. To save me time on formatting in HTML, here is a pdf file that describes the model and what comparisons with experimental data (such as that below) tell us.
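For reference, the standard non-interacting two-state result (assuming temperature-independent Delta H and Delta S) is n_HS(T) = 1/(1 + exp[(Delta H - T Delta S)/(R T)]), which equals 1/2 at T_1/2 = Delta H/Delta S. A minimal sketch, with made-up parameter values:

import numpy as np

# Two-state (non-interacting) model for spin crossover. Values are illustrative only.
R = 8.314          # gas constant, J / (mol K)
Delta_H = 12000.0  # enthalpy of HS relative to LS, J / mol
Delta_S = 60.0     # entropy of HS relative to LS, J / (mol K)

def n_HS(T):
    """High-spin fraction of the two-state model at temperature T (in K)."""
    return 1.0 / (1.0 + np.exp((Delta_H - T * Delta_S) / (R * T)))

T_half = Delta_H / Delta_S
print("T_1/2 =", T_half, "K,  n_HS(T_1/2) =", n_HS(T_half))
for T in [100, 150, 200, 250, 300, 400]:
    print(f"T = {T:3d} K   n_HS = {n_HS(T):.3f}")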

Future posts will consider how elastic interactions produce the Ising interaction and how frustrated interactions can produce multi-step transitions.

Wednesday, August 13, 2025

My review article on emergence

I just posted on the arXiv a long review article on emergence

Emergence: from physics to biology, sociology, and computer science

The abstract is below.

I welcome feedback. 

------

Many systems of interest to scientists involve a large number of interacting parts and the whole system can have properties that the individual parts do not. The system is qualitatively different to its parts. More is different. I take this novelty as the defining characteristic of an emergent property. Many other characteristics have been associated with emergence are reviewed, including universality, order, complexity, unpredictability, irreducibility, diversity, self-organisation, discontinuities, and singularities. However, it has not been established whether these characteristics are necessary or sufficient for novelty. A wide range of examples are given to show how emergent phenomena are ubiquitous across most sub-fields of physics and many areas of biology and social sciences. Emergence is central to many of the biggest scientific and societal challenges today. Emergence can be understood in terms of scales (energy, time, length, complexity) and the associated stratification of reality. At each stratum (level) there is a distinct ontology (properties, phenomena, processes, entities, and effective interactions) and epistemology (theories, concepts, models, and methods). This stratification of reality leads to semi-autonomous scientific disciplines and sub-disciplines. A common challenge is understanding the relationship between emergent properties observed at the macroscopic scale (the whole system) and what is known about the microscopic scale: the components and their interactions. A key and profound insight is to identify a relevant emergent mesoscopic scale (i.e., a scale intermediate between the macro- and micro- scales) at which new entities emerge and interact with one another weakly. In different words, modular structures may emerge at the mesoscale. Key theoretical methods are the development and study of effective theories and toy models. Effective theories describe phenomena at a particular scale and sometimes can be derived from more microscopic descriptions. Toy models involve minimal degrees of freedom, interactions, and parameters. Toy models are amenable to analytical and computational analysis and may reveal the minimal requirements for an emergent property to occur. The Ising model is an emblematic toy model that elucidates not just critical phenomena but also key characteristics of emergence. Many examples are given from condensed matter physics to illustrate the characteristics of emergence. A wide range of areas of physics are discussed, including chaotic dynamical systems, fluid dynamics, nuclear physics, and quantum gravity. The ubiquity of emergence in other fields is illustrated by neural networks, protein folding, and social segregation. An emergent perspective matters for scientific strategy, as it shapes questions, choice of research methodologies, priorities, and allocation of resources. Finally, the elusive goal of the design and control of emergent properties is considered.

Spin crossover is a misnomer

There are hundreds of organometallic compounds that are classified as spin-crossover compounds. As the temperature is varied, the average spin per molecule can undergo a transition between low-spin and high-spin states.

The figure below shows several classes of transitions that have been observed. The vertical axis represents the fraction of molecules in the high-spin state, and the horizontal axis represents temperature.


a) A smooth crossover. At the temperature T_{1/2} there are equal numbers of high and low spins.

b) There is a sharp transition, with the curve having a very large slope at T_{1/2}.

c) There is a discontinuous change in the spin fraction at the transition temperature, the value of which depends on whether the temperature is increasing or decreasing, i.e., there is hysteresis. The discontinuity and hysteresis are characteristic of a first-order phase transition.

d) There is a step in the curve when the high-spin fraction is close to 0.5. This is known as a two-step transition.

e) Although a crossover occurs, the system never contains only low- or high-spins.

But, there is more. Over the past decade, multiple-step transitions have been observed. An example of a four-step transition is below.
Hysteresis is present and is larger at lower temperatures.

In a few cases of multiple-step transitions, the first step seen on the up-temperature sweep is missing on the down-temperature sweep.

Given the diverse behaviour described above, including sharp transitions and first-order phase transitions, spin "crossover" is a misnomer.

More importantly, given the chemical and structural complexity of the materials involved, is there a simple effective model Hamiltonian that can capture all this diverse behaviour?

Yes. An Ising model in a field. A preliminary discussion is here. I hope to discuss this in future posts. But first I need to introduce the simple two-state model and show what it can and cannot explain.

Saturday, August 2, 2025

Science job openings in sunny Brisbane, Australia

Bribie Island, just north of Brisbane.

The University of Queensland has just advertised several jobs that may be of interest to readers of this blog, particularly those seeking to flee the USA.

There is a junior faculty position for a theorist working at the interface of condensed matter, quantum chemistry, and quantum computing.

There is also a postdoc to work on the theory of strongly correlated electron systems with my colleagues Ben Powell and Carla Verdi.

There is a postdoc in experimental condensed matter, to work on scanning probe methods, such as STM, with my colleague Peter Jacobson.

Glasshouse Mountains. Just north of Brisbane.

Friday, July 25, 2025

Reviewing emergent computational abilities in Large Language Models

Two years ago, I wrote a post about a paper by Wei et al., Emergent Abilities of Large Language Models

Then last year, I posted about a paper Are Emergent Abilities of Large Language Models a Mirage? that criticised the first paper.

There is more to the story. The first paper has now been cited over 3,600 times. There is a helpful review of the state of the field.

Emergent Abilities in Large Language Models: A Survey

Leonardo Berti, Flavio Giorgi, Gjergji Kasneci

It begins with a discussion of what emergence is, quoting from Phil Anderson's More is Different article [which emphasised how new properties may appear when a system becomes large] and John Hopfield's Neural networks and physical systems with emergent collective computational abilities, which was the basis of his recent Nobel Prize. Hopfield stated

"Computational properties of use to biological organisms or the construction of computers can emerge as collective properties of systems having a large number of simple equivalent components (or neurons)."

Berti et al. observe, "Fast forward to the LLM era, notice how Hopfield's observations encompass all the computational tasks that LLMs can perform."

They discuss emergent abilities such as in-context learning, defined as the "capability to generalise from a few examples to new tasks and concepts on which they have not been directly trained."

Here, I put this review in the broader context of the role of emergence in other areas of science.

Scales. 

Simple scales that describe how large an LLM is include the amount of computation, the number of model parameters, and the size of the training dataset. More complicated measures of scale include the number of layers in a deep neural network and the complexity of the training tasks.

Berti et al. note that the emergence of new computational abilities does not just follow from increases in the simple scales but can be tied to the training process. I note that this subtlety is consistent with experience in biology. Simple scales would be the length of an amino acid chain in a protein or the number of base pairs in a DNA molecule, the number of proteins in a cell, or the number of cells in an organism. More subtle scales include the number of protein interactions in a proteome or gene networks in a cell. Deducing what the relevant scales are is non-trivial. Furthermore, as emphasised by Denis Noble and Robert Bishop, context matters, e.g., a protein may only have a specific function if it is located in a specific cell.

Novelty. 

When they become sufficiently "large", LLMs have computational abilities that they were not explicitly designed for and that "small" versions do not have. 

The emergent abilities range "from advanced reasoning and in-context learning to coding and problem-solving."

The original paper by Wei et al. listed 137 emergent abilities in an Appendix!

Berti et al. give another example.

"Chen et al. [15] introduced a novel framework called AgentVerse, designed to enable and study collaboration among multiple AI agents. Through these interactions, the framework reveals emergent behaviors such as spontaneous cooperation, competition, negotiation, and the development of innovative strategies that were not explicitly programmed."

An alternative to defining novelty in terms of a comparison of the whole to the parts is to compare properties of the whole to those of a random configuration of the system. The performance of some LLMs is near-random (e.g., random guessing) until a critical threshold is reached (e.g., in size) when the emergent ability appears.

Discontinuities.

Are there quantitative objective measures that can be used to identify the emergence of a new computational ability? Researchers are struggling to find agreed-upon metrics that show clear discontinuities. That was the essential point of Are Emergent Abilities of Large Language Models a Mirage? 

In condensed matter physics, the emergence of a new state of matter is (usually) associated with symmetry breaking and an order parameter. Figuring out what the relevant broken symmetry and order parameter are often requires brilliant insight and may even lead to a Nobel Prize (Neel, Josephson, Ginzburg, Leggett, ...). A similar argument can be made with respect to the development of the Standard Model of elementary particles and gauge fields. Furthermore, the discontinuities only exist in the thermodynamic limit (i.e., in the limit of an infinite system), and there are many subtleties associated with how the data from finite-size computer simulations should be plotted to show that the system really does exhibit a phase transition.

Unpredictability.

The observation of new computational abilities in LLMs was unanticipated and surprised many people, including the designers of the specific LLMs involved. This is similar to what happens in condensed matter physics, where new states of matter have mostly been discovered by serendipity.

Some authors seem surprised that it is difficult to predict emergent abilities. "While early scaling laws provided some insight, they often fail to anticipate discontinuous leaps in performance."

Given the largely "black box" nature of LLMs, I don't find the unpredictability surprising. Prediction is hard even for condensed matter systems, which are much better characterised and understood.

Modular structures at the mesoscale.

Modularity is a common characteristic of emergence. In a wide range of systems, from physics to biology to economics, a key step in the development of the theory of a specific emergent phenomenon has been the identification of a mesoscale (intermediate between the micro- and macro-scales) at which modular structures emerge. These modules interact weakly with one another, and the whole system can be understood in these terms. Identification of these structures and the effective theories describing them has usually required brilliant insight. An example is the concepts of quasiparticles in quantum many-body physics, pioneered by Landau.

Berti et al. do not mention the importance of this issue. However, they do mention that "functional modules emerge naturally during training" [Ref. 7,43,81,84] and that "specialised circuits activate at certain scaling thresholds [24]".

Modularity may be related to an earlier post, Why do deep learning algorithms work so well? In the training process, a neural network rids noisy input data of extraneous details... There is a connection between the deep learning algorithm known as the "deep belief net" of Geoffrey Hinton and renormalisation group methods (which can be key to identifying modularity and effective interactions).

Is emergence good or bad?

Undesirable and dangerous capabilities can emerge. Those observed include deception, manipulation, exploitation, and sycophancy.

These concerns parallel discussions in economics. Libertarians, the Austrian school, and Friedrich Hayek tend to see emergence as only producing socially desirable outcomes, such as the efficiency of free markets [the invisible hand of Adam Smith]. However, emergence also produces bubbles, crashes, and recessions.

Resistance to control

A holy grail is the design, manipulation, and control of emergent properties. This ambitious goal is promoted in materials science, medicine, engineering, economics, public policy, business management, and social activism. However, it largely remains elusive, arguably due to the complexity and unpredictability of the systems of interest. Emergent properties of LLMs may turn out to offer similar hopes, frustrations, and disappointments. We should try, but have realistic expectations.

Toy models.

This is not discussed in the review. As I have argued before, a key to understanding a specific emergent phenomenon is the development of toy models that illustrate the phenomenon and the possible essential ingredients for it to occur. The following paper may be a step in that direction.

An exactly solvable model for emergence and scaling laws in the multitask sparse parity problem

Yoonsoo Nam, Nayara Fonseca, Seok Hyeong Lee, Chris Mingard, Ard A. Louis

In a similar vein, another possibly relevant paper is the review

Statistical Mechanics of Deep Learning

Yasaman Bahri, Jonathan Kadmon, Jeffrey Pennington, Sam S. Schoenholz, Jascha Sohl-Dickstein and Surya Ganguli

They consider a toy model for the error landscape of a neural network and show that the error function for a deep neural net of depth D corresponds to the energy function of a D-spin spherical spin glass [Section 3.2 in their paper].

Friday, July 18, 2025

Emergence in Chemistry

It is important to be clear about what the system is. Most of chemistry is not really about isolated molecules. A significant amount of chemistry occurs in an environment, often within a solvent. Then the system is the chemicals of interest and the solvent. For example, when it is stated that HCl is an acid, this is not a reference to isolated HCl molecules but to a solution of HCl in water, in which the HCl dissociates into H+ and Cl- ions. Chemical properties such as reactivity can change significantly depending on whether a compound is in the solid, liquid, or gas state, or on the properties of the solvent in which it is dissolved.

Scales

The time scales for processes, which range from molecular vibrations to chemical reactions, can vary from femtoseconds to days. Relevant energy scales, corresponding to different effective interactions, can vary from tens of eV (strong covalent bonds) to microwave energies of 0.1 meV (quantum tunnelling in an ammonia maser).

Other scales are the total number of atoms in a compound, which can range from two to millions, the total number of electrons, and the number of different chemical elements in the compound. As the number of atoms and electrons increases, so does the dimensionality of the Hilbert space of the corresponding quantum system.

Novelty

All chemical compounds are composed of a discrete number of atoms, usually of different types. For example, acetic acid, denoted CH3COOH, is composed of carbon, oxygen, and hydrogen atoms. The compound usually has chemical and physical properties that the individual atoms do not have.

Chemistry is all about transformation. Reactants combine to produce products, e.g. A + B -> C. C may have chemical or physical properties that A and B did not have.

Chemistry involves concepts that do not appear in physics. Roald Hoffmann argued that concepts such as acidity and basicity, aromaticity, functional groups, and substituent effects have great utility and are lost in a reductionist perspective that tries to define them precisely and mathematicise them.

Diversity

Chemistry is a wonderland of diversity, as it arranges chemical elements in a multitude of different ways that produce a plethora of phenomena. Much of organic chemistry just involves three different atoms: carbon, oxygen, and hydrogen.

Molecular structure

Simple molecules (such as water, ammonia, carbon dioxide, methane, benzene) have a unique structure defined by fixed bond lengths and angles. In other words, there is a well-defined geometric structure that gives the locations of the centres of atomic nuclei. This is a classical entity. This emerges from the interactions between the electrons and nuclei of the constituent atoms.

In philosophical discussions of emergence in chemistry, molecular structure has received significant attention. Some claim it provides evidence of strong emergence. The arguments centre around the fact that the molecular structure is a classical entity and concept that is imposed, whereas a logically self-consistent approach would treat both electrons and nuclei quantum mechanically.

The molecular structure of ammonia (NH3) illustrates the issue. It has an umbrella structure which can be inverted. Classically, there are two possible degenerate structures. For an isolated molecule, quantum tunnelling back and forth between the two structures can occur. The ground state is a quantum superposition of two molecular structures. This tunnelling does occur in a dilute gas of ammonia at low temperature, and the associated quantum transition is the basis of the maser, the forerunner of the laser. This example of ammonia was discussed by Anderson at the beginning of his seminal More is Different article to illustrate how symmetry breaking leads to well-defined molecular structures in large molecules. 
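
The standard way to make this quantitative is the textbook two-state model of the inversion doublet; the notation below (localised structures |L> and |R>, tunnelling amplitude Δ) is generic, not taken from Anderson's article.

% Two-state model of ammonia inversion: |L> and |R> are the two degenerate umbrella structures
% and Delta is the tunnelling matrix element between them.
\[
  H = -\Delta \bigl( |L\rangle\langle R| + |R\rangle\langle L| \bigr), \qquad
  |\pm\rangle = \tfrac{1}{\sqrt{2}} \bigl( |L\rangle \pm |R\rangle \bigr), \qquad
  E_{\pm} = \mp \Delta ,
\]
% so the ground state is the symmetric superposition and the doublet splitting is
\[
  E_{-} - E_{+} = 2\Delta \approx h \times 24~\mathrm{GHz} \approx 0.1~\mathrm{meV}
  \quad \text{for NH}_3 .
\]

For heavier pyramidal molecules the tunnelling amplitude is exponentially smaller, which is Anderson's point: the superposition becomes irrelevant and a single classical structure emerges.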

Born-Oppenheimer approximation 

Without this concept, much of theoretical chemistry and condensed matter physics would be incredibly difficult. It is based on the separation of the time and energy scales associated with electronic and nuclear motion. It is used to describe and understand the dynamics of nuclei and electronic transitions in molecules and solids. The potential energy surfaces for the different electronic states define an effective theory for the nuclei.

Singularity. The Born-Oppenheimer approximation is justified by a singular asymptotic expansion in powers of (m/M)^(1/4), where m is the mass of an electron and M is the mass of an atomic nucleus in the molecule. This has been discussed by Primas and by Bishop.
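
For concreteness, here is a minimal sketch of the factorisation the approximation asserts, in standard textbook notation (r for electronic and R for nuclear coordinates); it is not specific to the treatments of Primas or Bishop.

% Born-Oppenheimer factorisation: solve the electronic problem at fixed nuclear positions R,
% then use the resulting potential energy surface E_el(R) in an effective nuclear Hamiltonian.
\[
  \Psi(r, R) \approx \phi_R(r)\, \chi(R), \qquad
  \hat{H}_{\mathrm{el}}(R)\, \phi_R(r) = E_{\mathrm{el}}(R)\, \phi_R(r) ,
\]
\[
  \Bigl[ -\sum_A \frac{\hbar^2}{2 M_A} \nabla_A^2 + E_{\mathrm{el}}(R) \Bigr] \chi(R) = E\, \chi(R) ,
\]
% with corrections organised (asymptotically, not convergently) in powers of kappa = (m/M)^(1/4).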

The rotational and vibrational degrees of freedom of molecules also involve a separation of time and energy scales. Consequently, one can derive separate effective Hamiltonians for the vibrational and rotational degrees of freedom.
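
For a diatomic molecule, this second separation leads to the familiar effective energy levels below (harmonic oscillator plus rigid rotor); the numerical ranges quoted are typical orders of magnitude, not values for any specific molecule.

% Rotation-vibration levels of a diatomic: v = vibrational quantum number, J = rotational
% quantum number, B = rotational constant.
\[
  E_{v,J} \approx \hbar\omega \bigl( v + \tfrac{1}{2} \bigr) + B\, J(J+1) ,
\]
% with vibrational quanta (hbar omega) typically tens to hundreds of meV and rotational constants B
% typically two to four orders of magnitude smaller, reflecting the separation of scales above.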

Qualitative difference with increase in molecular size

Consider the following series with varying chemical properties: formic acid (CH2O2), acetic acid (C2H4O2), propionic acid (C3H6O2), butyric acid (C4H8O2), and valerianic acid (C5H10O2), whose members involve the successive addition of a CH2 radical. The Marxist Friedrich Engels used these examples as evidence for Hegel’s law: “The law of transformation of quantity into quality and vice versa”.

In 1961, Platt discussed properties of large molecules that “might not have been anticipated” from properties of their chemical subgroups. Table 1 in Platt’s paper lists “Properties of molecules in the 5- to 50-range that have no counterpart in diatomics and many triatomics.” Table 2 lists “Properties of molecules in the 50- to 500-atom range and up that go beyond the properties of their chemical sub-groups.” The properties listed included internal conversion (i.e., non-radiative decay of excited electronic states), formation of micelles for hydrocarbon chains with more than ten carbons, the helix-coil transition in polymers, chromatographic or molecular sorting properties of polyelectrolytes such as those in ion-exchange resins, and the contractility of long chains.

Platt also discussed the problem of molecular self-replication. Until 1951, it was assumed that a machine could not reproduce itself, and that this was the fundamental difference between machines and living systems. However, von Neumann showed that a machine with a sufficient number of parts and a sufficiently long list of instructions can reproduce itself. Platt pointed out that this suggested there is a threshold for autocatalysis: "this threshold marks an essentially discontinuous change in properties, and that fully-complex molecules larger than this size differ from all smaller ones in a property of central importance for biology." Thus, self-replication is an emergent property. A modification of this idea has been pursued by Stuart Kauffman with regard to the origin of life: when a network of chemical reactions is sufficiently large, it becomes self-replicating.

Thursday, July 10, 2025

What Americans might want to know about getting a job in an Australian university

Universities and scientific research in the USA are facing a dire future. Understandably, some scientists are considering leaving the USA. I have had a few enquiries about Australia. This makes sense, as Australia is a stable English-speaking country with similarities in education, culture, democracy, and economics, at least compared to most other possible destinations. Nevertheless, there are important differences between Australia and the USA to be aware of, particularly when it comes to how universities function (and dysfunction!) and how they hire people.

A few people have asked me for advice. Below are some comparisons. Why should you believe me? I spent eleven years in the USA (1983-1994) and visited at least once a year until 2018. On the other hand, there are some reasons to take what I say with a grain of salt: I have never been a faculty member at a US university, I retired from a faculty position in Australia four years ago, and I haven't sat on a committee for almost ten years :). Hopefully, this post will prompt other readers to weigh in with other perspectives.

There are discussions in Australia about trying to attract senior people from the USA to come here. Whether that will come to anything substantial remains to be seen.

The best place to look for advertised positions is on Seek. 

Postdocs

This is where the news is best. Young people in the USA can apply for regular postdoc positions. Most are attached to specific grants and so involve working on a specific project. 

Ph.D. students

Most of the positions go to Australian citizens, who get their own scholarship (fellowship) from the government. These are not tied to a grant or a supervisor (advisor). There are a few positions for international students, but not many. Usually they go to applicants with a Masters degree and publications.

Ph.D.s are funded for 3 to 3.5 years. There is no required coursework. Australian students have done a 4-year undergraduate degree and no Masters. This means that tackling highly technical theory projects is not realistic, except for exceptional students.

Faculty hiring is ad hoc

There is no hiring cycle. Positions tend to be advertised at random times, depending on local politics, whims, and bureaucracy. Universities and Schools (departments) claim they have strategic plans, but given fluctuations in funding, management, and government policy, positions appear and disappear at random. Typically, the Dean (and their lackeys), not the department, controls the selection process, particularly for senior appointments. The emphasis is on metrics. Letters of reference are sometimes not even called for before shortlisting. Some hiring is done purely on the basis of online interviews and seminars.

Bias towards insiders 

People already in the Australian system know how to navigate it best. They may also already have a grant from the Australian Research Council and have done some teaching, with (positive) student evaluations. They are known quantities to the managers, and so a safer bet than outsiders. If you want a junior faculty position here (a lectureship), your chances may be better if you first come as a postdoc. However, there are exceptions...

Current funding crunches

Unfortunately, I fear the faculty job market may be quite cool for the next few years. Many universities are actually trying to sack (fire) people due to funding shortfalls. These budget crises are due to post-COVID financial pressures, mismanagement, and the government trying to reduce international student numbers (because of the politics of a housing and cost-of-living crisis).

Australian Research Council

This is pretty much the sole source of funding for physics and chemistry. This is quite different from the USA, where there were (pre-Trump) numerous funding agencies (NSF, DOE, DOD, ...). The ARC is currently reviewing and redesigning all of its programs, so we will have to wait and see how this may impact the prospects of scientific refugees from the USA. (It used to have quite good Fellowship schemes for all career stages, which were an excellent avenue for foreigners to come here.) Some of my colleagues recommend following ARC Tracker on social media to keep up with the latest at the ARC.

Thirty years ago, I came back to Australia from the USA. I had a wonderful stint doing science, largely because of generous ARC funding. Unfortunately, the system has declined. But I am sure it is better than being in the USA right now.

There are many more things I could write about; some have featured in previous rants about metrics and managerialism. Things to be aware of before accepting a job include faculty having little voice or power, student absenteeism, corrupt governance, and the lack of real tenure or sabbaticals.
