Saturday, April 4, 2026

My mental health journey

I have struggled with my mental health for most of my adult life. Here I tell my own story to put a personal face on the issue, and because when I have told it in the past, many people have found it helpful to know they are not alone in their mental health struggles. 

Any discussion of mental illness and healing involves assumptions about what we believe a human being is. The complexities illustrate the multifaceted character of reality. In the next post, I will examine the perspective of different scientific disciplines, including psychiatry, neuroscience, psychology, and sociology. Comparing these perspectives suggests the limitations of reductionism and that we cannot escape philosophical questions. Given the scientific uncertainty, any decisions about the treatment of mental illness involve traditions, authority, trust, and risk. Unfortunately, the personal stakes can be high. The issues are not just abstract philosophical ones.

Disclaimer. I am not a medical professional. If you are struggling with your mental health, I encourage you to consult a professional. Please don't draw any conclusions about your own situation from my experience. Everyone is different. That is part of my point in what follows. Specifically, you should not decide to stop taking medication without professional consultation.

When I was 23, I started to have significant mental health problems. I knew little about mental health, and forty years ago there was limited public awareness of the issues. The first 22 years of my life were spent living in the same house in Australia with a stable family life, a predictable routine, and little stress. I then moved to the USA and encountered a completely different routine and environment as I began a Ph.D. There were many new opportunities and challenges: social, educational, and spiritual. I lived in a small single room in a college (dormitory) for graduate students, most of whom were international students like me. At every breakfast and dinner, I had to interact with strangers, mostly from other cultures. In hindsight, I can see that I was trying to be an extrovert. After only three months, I burnt myself out. I was so exhausted that I started sleeping twelve hours a day and took a one-hour nap in the afternoon.

I could not continue with my Ph.D. even though the workload was relatively light and flexible at that point. I took one semester off. For the next four years, I was fragile, having to carefully limit my social interactions and work hours. Every few months, I would have a black period of one to two weeks, where my brain would not quite function, and so I could not do any physics reading or research. I would just go for long bike rides. Somehow, I survived by carefully monitoring my energy levels and ruthlessly limiting my activities.

Although this was an emotionally difficult and confusing time, I did not exhibit symptoms of depression such as sadness, despair, loss of hope, suicidal ideation, or extreme anxiety. However, after four years of struggle, I read a newspaper article about an episode of The Oprah Winfrey Show featuring depressed people. I became aware that I might be experiencing clinical depression. I read a book on the subject by a Christian psychiatrist and went to see a psychiatrist at the university medical centre for students. She recommended that I go on an antidepressant drug, Imipramine. After a month or so, the change was amazing! I was like normal again. I had energy and clarity of thought that I had missed for four years. The black periods did not come back. I became convinced that I simply had a chemical imbalance in my brain and that the drugs restored the balance to the appropriate level. Back then, scientists were quite confident they knew how the drugs worked. Given that it was “just” a biochemical issue, I did not feel a need to address any psychological, spiritual, emotional, or lifestyle issues that might play a role in the depression.

To my relief, I was able to finish my Ph.D. Life continued positively for several years. One time, things did not seem to be going as well, and my psychiatrist asked me if by any chance I had switched to taking the generic brand medication. I had and so I went back to the original brand and everything went back to normal. Sometime after being married for a few years, I tried going off the medication and things went well. I put this success down to the benefits of married life and not living in group houses anymore.

At the end of the 1990s, I went through a very stressful time due to uncertainty in my academic employment. I got the flu and it took me weeks to recover. I decided to go back on the antidepressants. It did not have the desired positive effect. My anxiety went through the roof, so I discontinued the drugs. Somehow, I clawed my way back to normality and had a few good years.

In 2003, I went through a very stressful time trying to decide whether to accept an exciting job offer in England and dealing with a local conflict among church leaders. I had trouble sleeping and could not control my anxious thoughts. I went on the antidepressant Zoloft. Unlike previous episodes, I began to see a psychologist. She helped me to deconstruct some of my anxious thoughts and to question their rationality and connection to reality. She also introduced me to some mindfulness exercises promoted by Dr. Jon Kabat-Zinn. I found these incredibly helpful. I did them once or twice a day for several years. They helped me slow down my racing mind and be more aware of my body and how it signalled stress. Over time, I returned to a relatively stable equilibrium, and I gradually tapered off the drugs, sessions with the psychologist, and the mindfulness exercises.

In the second half of 2016 my mental health struggles returned. I was doing too much international travel, including extended visits in South Asia. For a sensitive introverted Westerner who is easily overstimulated and enjoys peace and quiet, predictability, and smooth routines, South Asia can be overwhelming. Back in Australia in 2017, things did not improve, and so I went on the antidepressant Sertraline and went back to my psychologist. Returning to the mindfulness exercises, I did not find them helpful anymore. The psychologist said that was fine. Generally, things improved, probably partly because I decided to retire from the university and avoid international travel. Nevertheless, there were times during the pandemic, which started in 2020, that were difficult, as for many people.

I came to accept that I might be on antidepressants for the rest of my life. However, by 2024, my mental health was quite good, and my doctor made me aware that many medical professionals considered that being on antidepressants for long periods should be avoided because of long-term side effects. I read several articles about this in The Economist that were helpful. My doctor and I agreed that I would slowly reduce the dosage over a period of several months and carefully monitor the situation. Everything went smoothly until around the time I reached zero dosage. I would have periods of uncontrollable sobbing. I might read a moving newspaper article, or a friend would share something personal, and I would start sobbing. I learnt that this is one of many possible side effects of the drug withdrawal. Fortunately, I did not have any of the other symptoms, some of which can be tragic. We decided to persevere, and after a few months the sobbing went away and my mental health remained stable.

Currently, my mental health is the best it has been for a decade. It is hard to know what the main contributing factors are. Some may include being retired, minimising stress where possible, pacing myself, saying no often, little international travel, enjoyable family relationships, having a pet dog, and cultivating healthy routines of exercise, diet, sleep, screen time, connections with nature, social interaction, and spiritual disciplines.

My story illustrates the general problem of interpreting our experiences. My recollections and the narrative I have given here reflect what I now consider significant. However, at different times, I might have told the story differently or interpreted it differently. I have also chosen not to include anecdotes about how I felt pressure from well-meaning people (professionals, family, friends, or acquaintances) to pursue or not pursue specific treatment options.

My experience illustrates the complexity of mental health. Deciding ways forward involved the puzzle of how to integrate the four dimensions: experience, reason, tradition, and transcendence. For one individual at a specific time in life, it is very hard to know with certainty what causes mental illness and what the best course of treatment is. Evidence of this uncertainty is seen in a survey of the perspectives of different scientific disciplines in the next blog post.

Sunday, March 15, 2026

Tony Leggett (1938-2026): condensed matter theorist

Tony Leggett died last week. The New York Times has a nice obituary. One measure of his influence on me is that more than 20 posts on this blog feature his work. He received the Nobel Prize in 2003 for developing the theory of superfluid 3He.

In 1972, a graduate student at Cornell, Doug Osheroff, discovered a phase transition around a temperature of 2 mK in liquid 3He. In the 1960s liquid 3He was established to be a Fermi liquid that was beautifully described by Landau's theory. Osheroff and his advisors, David Lee and Robert Richardson, incorrectly identified the phase transition as arising from antiferromagnetic order in the solid phase of 3He.

However, Leggett argued that it was actually due to superfluidity and that there were two distinct superfluid phases, A and B, with different order parameters.

Lee, Osheroff, and Richardson shared the Nobel Prize in 1996 for their discovery.

Leggett was primed to make rapid progress, as in 1965 and 1966 he had written three papers about superfluidity in liquid 3He, albeit assuming s-wave pairing. Indeed, by 1975 he wrote a comprehensive review article on the two superfluid phases.

For many reasons superfluid 3He was significant for the broader field of condensed matter. BCS showed that in elemental metals, superconductivity resulted from Cooper pairing of electrons due to an attractive electron-phonon interaction.  The order parameter (Cooper pair wave function) had s-wave spin singlet symmetry.

In contrast, superfluid 3He showed that Cooper pairing could also occur in a neutral Fermi liquid, and have non-trivial symmetry, i.e., p-wave symmetry and spin triplet. The order parameter has 18 components, compared to only 2 for elemental superconductors. There is spontaneous symmetry breaking of the local gauge symmetry, and spin or orbital rotational symmetries. 

The Cooper pairing in superfluid 3He is not due to a fermion-phonon interaction but due to spin fluctuations.

The fact that Cooper pairing was possible for different symmetries and mechanisms than for elemental superconductors was significant in that it meant it was reasonable to consider this possibility for superfluidity in neutron stars, and superconductivity in cuprates, strontium ruthenate, heavy fermions, and organic charge transfer salts.

There is rich physics associated with the symmetry breaking: 18 collective modes of the order parameter, textures such as boojums, and exotic vortex cores. For vortices, there is also some (controversial) connection to cosmic strings, including experiments that test the Kibble-Zurek mechanism and the electro-weak phase transition in the early universe.

Aside: My Ph.D. thesis was on the theory of the non-linear interaction of zero sound with the order parameter collective modes in the B-phase.

Leggett's development of the theory of superfluid 3He was amazing and certainly worthy of a Nobel. However, I think he made an even greater contribution to physics through his work on the theory of macroscopic quantum effects in Josephson junctions. This work was the basis for the experimental work that was honoured with the Nobel Prize last year.

With his student Amir Caldeira, Leggett performed concrete calculations of the effects of decoherence on quantum tunnelling in Josephson junctions.

[The NY Times obituary mistakenly says this work began after Leggett moved to Urbana. It was done while he was still at Sussex].

The formalism they developed involving the spectral density is the basis for most theoretical treatments of decoherence in superconducting qubits. A relevant toy model is the spin-boson model, and in 1987 Leggett published a seminal (but rather dense) review on the subject.
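For readers who have not met it, the spin-boson model describes a two-level system coupled linearly to a bath of harmonic oscillators. In one common convention (signs and factors of two differ between papers) the Hamiltonian is

$$ H = \frac{\epsilon}{2}\sigma_z - \frac{\hbar\Delta}{2}\sigma_x + \frac{\sigma_z}{2}\sum_\alpha c_\alpha x_\alpha + \sum_\alpha \left( \frac{p_\alpha^2}{2m_\alpha} + \frac{1}{2} m_\alpha \omega_\alpha^2 x_\alpha^2 \right), $$

and the entire effect of the bath on the two-level system is encoded in the spectral density

$$ J(\omega) = \frac{\pi}{2} \sum_\alpha \frac{c_\alpha^2}{m_\alpha \omega_\alpha}\, \delta(\omega - \omega_\alpha), $$

with the "ohmic" case, J(ω) proportional to ω up to a high-frequency cutoff, playing the central role in the 1987 review.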

Leggett aided our understanding of cuprate superconductors. He contributed to the theoretical ideas that were the basis of the phase-sensitive measurements that established the d-wave nature of the order parameter. He also showed that experiments were inconsistent with Anderson's interlayer tunneling theory.

I recommend reading Leggett's own scientific autobiography, Matchmaking Between Condensed Matter and Quantum Foundations, and Other Stories: My Six Decades in Physics, and his book, The Problems of Physics.

Thursday, March 5, 2026

A forgotten physicist: Amelia Frank (1906-1937)

In honour of International Women's Day, I bring to your attention a fascinating recent piece in The Conversation, Who was Amelia Frank? The life of a forgotten physicist, by Peter Jacobson and Beck Wise.

Amelia Frank was a PhD student of John Van Vleck. Her work was cited by him in his 1977 Nobel Lecture. In the early days of quantum theory, she explained deviations of the magnetic moments of the rare earth ions Sm3+ and Eu3+ from Hund's rule predictions. Tragically, she died from cancer when she was only 31.

Tuesday, February 24, 2026

Information theoretic measures for emergence and causality

The relationship between emergence and causation is contentious, with a long history. Most discussions are qualitative. Presented with a new system, how does one identify the microscopic and macroscopic scales that may be most useful for understanding and describing the system? Can Judea Pearl’s seminal ideas about causality be implemented practically for understanding emergence?

Broadly speaking, a weakness of discussions of emergence and causality is that it is hard to define these concepts in a rigorous and quantitative manner that makes them amenable to empirical testing, with respect to theoretical models and to experimental data. 

Fortunately, in the past decade, there have been some specific proposals to address this issue, mostly using information theory. A helpful recent review is by Yuan et al. 

“Two primary challenges take precedence in understanding emergence from a causal perspective. The first is establishing a quantitative definition of emergence, whereas the second involves identifying emergent behaviors or phenomena through data analysis.

To address the first challenge, two prominent quantitative theories of emergence have emerged in the past decade. The first is Erik Hoel et al.’s theory of causal emergence [19] whereas the second is Fernando E. Rosas et al.’s theory of emergence based on partial information decomposition [24].

Hoel et al.’s theory of causal emergence specifically addresses complex systems that are modeled using Markov chains. It employs the concept of effective information (EI) to quantify the extent of causal influence within Markov chains and enables comparisons of EI values across different scales [19,25]. Causal emergence is defined by the difference in the EI values between the macro-level and micro-level."

One perspective on causal emergence is that it occurs when the dynamics of a system are described more efficiently by macro-variables than by micro-variables.
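To make effective information concrete, here is a minimal Python sketch (my own illustration using the unnormalised definition; the toy transition matrix and the grouping of states are in the spirit of Hoel et al.'s examples rather than taken from their papers). EI is the mutual information between the input and output of a Markov chain when the input is intervened on with the maximum-entropy (uniform) distribution, and causal emergence is the gain in EI after coarse-graining:

```python
import numpy as np

def effective_information(tpm):
    """Effective information (bits) of a Markov transition matrix: the mutual
    information between input and output when the input state is set to the
    maximum-entropy (uniform) distribution."""
    n = tpm.shape[0]
    p_out = tpm.mean(axis=0)  # output distribution under the uniform intervention
    ei = 0.0
    for row in tpm:
        mask = row > 0
        ei += np.sum(row[mask] * np.log2(row[mask] / p_out[mask])) / n
    return ei

def coarse_grain(tpm, groups):
    """Macro transition matrix for a partition of the micro states:
    rows are averaged within a group, columns are summed over a group."""
    macro = np.zeros((len(groups), len(groups)))
    for a, ga in enumerate(groups):
        for b, gb in enumerate(groups):
            macro[a, b] = tpm[np.ix_(ga, gb)].mean(axis=0).sum()
    return macro

# Toy example: three "noisy" micro states that map among themselves at random,
# plus one deterministic state. Grouping the noisy states raises EI.
micro = np.array([[1/3, 1/3, 1/3, 0],
                  [1/3, 1/3, 1/3, 0],
                  [1/3, 1/3, 1/3, 0],
                  [0,   0,   0,   1]])
macro = coarse_grain(micro, [[0, 1, 2], [3]])

print(effective_information(micro))   # about 0.81 bits
print(effective_information(macro))   # 1.0 bit
# The positive difference is the causal emergence of the macro description.
```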

Klein et al. used Hoel’s information-theoretic measures of causal emergence to analyse protein interaction networks (interactomes) in over 1800 species, containing more than eight million protein–protein interactions, across different scales. They showed the emergence of ‘macroscales’ that are associated with lower noise and uncertainty. The nodes in the macroscale description of the network are more resilient than those in less coarse-grained descriptions. Greater causal emergence (i.e., a stronger macroscale description) was generally seen in multicellular organisms compared to single-cell organisms. The authors quantified causal emergence in terms of mutual information (between large and small scales) and effective information (a measure of the certainty in the connectivity of a network). Philip Ball (2023) (pages 218-220) gives an account of this work in terms of the emergence of multicellularity in biological evolution. He introduced the term causal spreading (pages 225-7), arguing that over the history of evolution the locus of causation has changed.

Yuan et al. continue

"However, in Hoel’s theory of causal emergence, it is essential to establish a coarse-graining strategy beforehand. Alternatively, the strategy can be derived by maximizing the effective information (EI) [19]. However, this task becomes challenging for large-scale systems due to the computational complexity involved. To address these problems, Rosas et al. introduced a new quantitative definition of causal emergence [24] that does not depend on coarse-graining methods, drawing from partial information decomposition (PID)-related theory. PID is an approach developed by Williams et al., which seeks to decompose the mutual information between a target and source variables into non-overlapping information atoms: unique, redundant, and synergistic information [29]…"

The Figure below is taken from Rosas et al. X_t^j (j = 1, …, n) are microscopic variables that define a Markov chain. V_t is a macroscopic variable that is completely determined by the microscopic variables.

“Diagram of causally emergent relationships. Causally emergent features have predictive power beyond individual components. Downward causation takes place when that predictive power refers to individual elements; causal decoupling when it refers to itself or other high-order features.”

Rosas et al. applied the method to specific systems, including Conway’s Game of Life, Reynolds’ flocking model, and neural activity as measured by electrocorticography. More recently, it was used to describe emergence in computer science, including the identification of modular structures. Calculations were performed for specific examples, including Ehrenfest’s urn model for diffusion, the Ising model with Glauber dynamics, and a Hopfield neural network model for associative memory.

Yuan et al. also state the following:

"The second challenge pertains to the identification of emergence from data. In an effort to address this issue, Rosas et al. derived a numerical method [24]. However, it is important to acknowledge that this method offers only a sufficient condition for emergence and is an approximate approach. Another limitation is that a coarse-grained macro-state variable should be given beforehand to apply this method."

Sas et al. recently stated

“Empirical applications of this framework to study emergence … including the study of gene regulatory networks [22], the dynamics of the human brain [23], the internal dynamics of reservoir computing [24], and the formation of useful internal representations in machine learning [25].”

Yuan et al. also discuss two significant connections between causal emergence and machine learning. First, machine learning can be used to improve calculations of causal emergence. Second, causal emergence measures can be used to better understand how machine learning works and improve it.

The work described above built on earlier work by Crutchfield, who claimed that the identification of emergence and hierarchies could be made operational, stating that “different scales are delineated by a succession of divergences in statistical complexity at lower levels.” More recently, Rupe and Crutchfield have reported progress towards identifying emergent self-organisation in a system.

Although this work on quantitative measures of emergence based on information theory represents significant progress, there are many open problems. Examples include the extension to non-Markovian systems and the development of computationally feasible methods for large systems. The latter is particularly important in physical systems where spontaneous symmetry breaking occurs, as this only happens in the thermodynamic limit of an infinite system.

There is an unrecognised similarity between the work described above and techniques recently developed to characterise phase transitions in statistical mechanics models such as the Ising model and classical dimer models. Coarse-graining (CG) is optimised by maximising the Real-Space Mutual Information (RSMI) between a spatial block and its distant environment. 

In general, maximising mutual information is notoriously hard but can be done using state-of-the-art machine learning algorithms. Gokmen et al. have developed an algorithm that they claim “can, unsupervised, construct order parameters, locate phase transitions, and identify spatial correlations and symmetries for complex and large-dimensional real-space data.” Furthermore, the optimal CG explicitly identifies the scaling operators associated with the critical point. 
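To convey the objective in toy form (this is my own illustration, not the neural-network algorithm of Gokmen et al.), one can compute exactly, for a very small one-dimensional Ising chain, the mutual information between a candidate coarse-grained block variable and a distant environment; the RSMI prescription is to prefer whichever coarse-graining rule retains more of that information:

```python
import itertools
import numpy as np

# Exact real-space mutual information for a tiny periodic 1D Ising chain.
N, J, beta = 10, 1.0, 0.6
block = [0, 1, 2]     # spatial block to be coarse-grained
env = [5, 6, 7]       # "distant" environment, separated from the block by buffers

configs = np.array(list(itertools.product([-1, 1], repeat=N)))
energy = -J * np.sum(configs * np.roll(configs, -1, axis=1), axis=1)
weights = np.exp(-beta * energy)
probs = weights / weights.sum()

def mutual_information(h_values, env_configs, probs):
    """Exact I(H; E) in bits from the joint distribution of the coarse-grained
    block variable H and the environment configuration E."""
    joint, p_h, p_e = {}, {}, {}
    for h, e, p in zip(h_values, map(tuple, env_configs), probs):
        joint[(h, e)] = joint.get((h, e), 0.0) + p
        p_h[h] = p_h.get(h, 0.0) + p
        p_e[e] = p_e.get(e, 0.0) + p
    return sum(p * np.log2(p / (p_h[h] * p_e[e])) for (h, e), p in joint.items())

block_spins = configs[:, block]
majority = np.sign(block_spins.sum(axis=1))   # keeps the block magnetisation
parity = np.prod(block_spins, axis=1)         # a rule that discards it

print(mutual_information(majority, configs[:, env], probs))
print(mutual_information(parity, configs[:, env], probs))
# The coarse-graining with the larger value is the better one in the RSMI sense.
```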

The classical dimer model provides a stringent test as “the relevant low-energy degrees of freedom are profoundly different from the microscopic building blocks of the theory and change qualitatively throughout the phase diagram.” In other words, the emergent entities (quasiparticles such as vortices associated with the height field, which is described by a sine-Gordon field theory) are different from the dimers.

It is encouraging to see that two different scientific communities have developed similar ideas to address this challenging problem of making discussions about emergence and causality more concrete and quantitative.

Friday, February 13, 2026

A golden age for precision observational cosmology

Yin-Zhe Ma gave a nice physics colloquium at UQ last week, A Golden Age for Cosmology.

I learnt a lot. Too often, colloquia are too specialised and technical for a general audience.

There are three pillars of experimental evidence for the Big Bang model: Hubble expansion of the universe, relative abundance of light nuclei due to nucleosynthesis in the first few minutes, and the Cosmic Microwave Background.

Ma showed Hubble's original data from 1929 for redshift versus distance of galaxies. There was a lot of noise in the data. Nevertheless, Hubble was right.

Big Bang Nucleosynthesis

This was first proposed in 1948 by Ralph Alpher and George Gamow. (Hans Bethe was an honorary author of the paper as a joke so that the author list would sound like the first three letters of the Greek alphabet. Gamow had a mischievous sense of humour.)

The chain of nuclear reactions that produced the lightest elements and isotopes is shown below.

Because the binding energy of 4He is so large, it could have only been formed at an extremely high temperature of about 10^10 K. (Or is the issue activation energy for formation, not binding energy?)

Detailed calculations using parameters from terrestrial nuclear physics give the observed relative abundances of the elements. In particular, the universe is 74% hydrogen and 24% helium by mass.

The astrophysicist's periodic table showing the origin of the different chemical elements is rather cute.


Giving credit to George Gamow

Gamow, who died in 1968, made impressive contributions to theoretical physics. His Wikipedia page is worth reading. He claimed that he predicted the Cosmic Microwave Background in the late 1940s and did not receive sufficient credit when it was discovered in 1964. The 2019 Nobel Prize citation for James Peebles also minimises Gamow's early contributions. Whether this is fair or not can be debated.

Anisotropies in the Cosmic Microwave Background

The past two decades have seen amazing advances in precision measurements of these anisotropies. The radiation is isotropic to one part in 25000, with a temperature of 2.72548±0.00057 K.

Measurements of the anisotropies have allowed precise determinations of key cosmological parameters by fitting theoretical predictions to the data shown below from the 2018 Planck collaboration. Different peaks have different physical origins. 

The level of precision in the data is truly amazing.


The solid line is a fit to theory involving six parameters. What would Enrico Fermi say? This is not "making the elephant's trunk wiggle" because the fit parameters are all consistent with independent determinations of the cosmological parameters from Hubble expansion and the relative abundance of the light elements.

Aside. The paper from the 2018 Planck collaboration has been cited 19,000 times, but has almost 200 authors. How does one use that information in evaluating individual authors in job and promotion applications? How are they to be compared to a single-author paper with 100 citations or a five-author paper with 500 citations?

Is this a golden age for cosmology? 

Yes, in terms of precision measurements. 

On the theoretical side, the golden age may have passed. It is not clear that new concepts or theories will emerge. The outstanding questions are:

What is the nature and origin of dark matter? of dark energy? 

Why is the cosmological constant so small? Why is it so fine-tuned?

Can the validity of inflation be pinned down?

Does quantum gravity matter?

A lot of smart people have spent decades on these problems and made little progress. That fact does not preclude the possibility of a theoretical breakthrough. However, it does not make me optimistic. I hope I am wrong.

Thursday, February 5, 2026

The legacy of 40 years of cuprate superconductivity

In February 1986, Bednorz and Müller made a stunning discovery: superconductivity at a temperature of 35 K in a doped copper oxide (cuprate). Arguably, this discovery changed condensed matter physics. In April 1986, they submitted their results to Z. Phys. B. Only nineteen months later, they were awarded the Nobel Prize in Physics, the shortest time ever between a discovery and the award. A nice and short review of the history is here.

One measure of my estimate of the influence of this discovery is that it received about 5 pages of coverage in my Condensed Matter Physics: A Very Short Introduction. (See Chapter 5, Adventures in Flatland).

How things have developed over the past forty years, for better and worse, may be representative of how science advances: discovery by serendipity, hype about applications, unexpected secondary benefits, foundational questions, new concepts, unification, and incremental advances.

Hype about technological applications

On March 20, 1987, The New York Times had a front-page article, DISCOVERIES BRING A 'WOODSTOCK' FOR PHYSICS, by James Gleick. This followed the 1987 APS March meeting. It began 

"Physicists from three continents converged on the New York Hilton for a hastily scheduled special conference on a string of discoveries that seem certain to produce a rapid cascade of commercial applications in electricity, magnetism and electronics.There are many things we know and understand that we did not when they were first discovered."

The predicted cascade of commercial applications has largely been unfulfilled. There are a few niche applications, but cuprates are not used in electricity distribution or even in the superconducting magnets in hospital MRI machines, which are probably the main commercial application of superconductors. One of the significant obstacles is that it is hard to make wires from these materials, as they are ceramics. This is an example of the common gap between research laboratory science and commercially viable technology.

After 40 years, do we have a successful theory?

It depends on who you ask. But I would say there is a lot we do understand.

We have a phenomenological theory for all the macroscopic phenomena associated with the superconducting state: Ginzburg-Landau theory!

Properties of the superconducting state are well-described by a BCS wavefunction with a d-wave order parameter and the associated Bogoliubov quasiparticles. [This is somewhat puzzling, as in the metallic state quasi-particles are not well defined].
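For reference, the d-wave order parameter on a square lattice has the standard momentum dependence

$$ \Delta(\mathbf{k}) = \frac{\Delta_0}{2}\left[\cos(k_x a) - \cos(k_y a)\right], $$

which changes sign under a 90-degree rotation and vanishes along the Brillouin zone diagonals, so the Bogoliubov quasiparticle spectrum has gapless nodal points.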

Although not everyone agrees, I think it is fair to say that the essential physics is in a one-band Hubbard model (written in its minimal form below), and the key physics is:

strong electronic correlations,

a doped antiferromagnetic Mott insulator,

d-wave pairing that is "mediated" or caused by some mixture or variant of antiferromagnetic spin fluctuations and RVB spin singlets, ...
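For concreteness, the one-band Hubbard model referred to above reads, in its minimal form,

$$ H = -t \sum_{\langle i,j \rangle, \sigma} \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \right) + U \sum_i n_{i\uparrow} n_{i\downarrow}, $$

with electrons hopping (amplitude t) between nearest-neighbour sites of the square lattice and paying an energy cost U for double occupancy of a site. The regime usually taken to be relevant to the cuprates is strong coupling (U/t of order 8) close to half filling (one electron per site); additional hoppings such as a next-nearest-neighbour t' are often included to describe specific materials.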

We certainly don't understand the cuprates at the same level as elemental superconductors. But we do understand the essential physics.

What is harder to describe and understand are the states adjacent to the superconducting state in the phase diagram: the pseudogap state and the strange metal.


Strongly correlated electron materials became a large, vibrant and unified field

Before 1986, there were small, disconnected communities intermittently interested in transition metal oxides, rare earths, Kondo impurities, Mott metal-insulator transitions, organic superconductors, heavy fermions, and quantum antiferromagnets.

The discovery of the cuprates brought together these communities as they found common interests, challenges, questions, concepts, and techniques.

The discovery of superconductivity in strontium ruthenate, alkali fullerides, iron pnictides and chalcogenides, twisted bilayer graphene and more cuprates, organic charge-transfer salts, and heavy fermions has shown how rich these systems are. The challenge is to understand the similarities and differences between these chemically and structurally diverse systems. In many of them, superconductivity is proximate to a Mott insulating state.

The unity and excitement were probably stimulated and enhanced by the activities and ideas of high-profile theorists such as Anderson, Schrieffer, Scalapino, Pines, Rice, and Varma. On the other hand, their acrimonious disagreements probably did not help.

Secondary theoretical benefits

The things I list below were not new ideas when the cuprate discovery happened. However, interest in the cuprates led them to become major research themes and ideas.

Importance of phase diagrams, including as a function of interaction parameters in toy models

Highlighting the limitations of electronic structure methods based on Density Functional Theory with approximate Exchange-Correlation functionals (i.e., anything computational). In the presence of strong correlations, DFT methods have spectacular failures. For example, predicting a metallic state instead of the Mott insulator.

Low dimensionality leads to qualitatively different behaviour, including the possibility of new types of order and quasiparticles. This is most dramatic in one dimension, where one has Luttinger liquids and spin-charge separation.

Spin liquids. Landau was wrong. Spontaneous symmetry breaking does not always occur in antiferromagnets.

Non-Fermi liquids. Landau was wrong. Not all metals are Fermi liquids.

Quantum criticality. Although this is a robust concept for certain toy models, whether it is relevant to the cuprates remains contentious.

Systematic improvements in approximation schemes and numerical techniques - exact diagonalisation, DMRG, DMFT, quantum Monte Carlo,...

Emergence. Chemical complexity and strong interactions can lead to new states of matter.

Secondary experimental benefits

Better probes. The desire to characterise the cuprates helped drive significant improvements in the resolution of ARPES (Angle-Resolved PhotoEmission Spectroscopy), STM (Scanning Tunnelling Microscopy), and inelastic neutron scattering. These advances have borne fruit in the study of a wide range of other materials, beyond the cuprates.

Growth of single crystals. The early days of the cuprates produced a lot of junk experimental results because of the poor quality of the samples produced by "shake and bake". However, the involvement of solid-state chemists has improved things. The techniques have also led to the production of single crystals for a wide range of strongly correlated materials.

Why is there so little research on cuprates today?

Today, there is little research directly on cuprates, both theoretically and experimentally. It is hard to get funding to work on them, even though there is still a lot we do not understand well.

This is because of the problem of fashion in science. The low-hanging fruit has been picked. There is a continual stream of new materials being discovered with exotic properties, the latest being twisted bilayer van der Waals compounds.

Monday, January 26, 2026

What is absolute temperature?

The concept and reality of absolute temperature is amazing. It tells us something fundamental about the universe, including physical limits as to what is possible. The existence of absolute temperature is intimately connected with the existence of entropy as a thermodynamic state function. It also hints at the underlying quantum nature of reality.

Aside: Unfortunately, the Wikipedia page on this topic is mediocre and garbled. For example, it continues the myth that temperature is related to kinetic energy.

The zeroth law of thermodynamics allows the definition of empirical temperature. It is an equilibrium state variable that indicates whether a thermodynamic system will remain in the same state upon being brought into thermal contact with another system. Thermometers are systems with a single state variable.

Absolute temperature is a specific temperature scale that is central to thermodynamics and statistical mechanics. 

There are several equivalent definitions of absolute temperature. They start at different points. Except for the first one, the others show that the existence of absolute temperature is intimately connected to the second law and to entropy being an extensive quantity.

This is nicely discussed by Zemansky in chapter 8 of his text Heat and Thermodynamics, Fifth Edition (1968). [This was the text for my second year undergrad thermo course at ANU in 1980. At the time, I did not fully appreciate how profound some of it is. I just enjoyed all the multivariable calculus.] 

1. Ideal gas thermometers.

Consider a fixed mass of an ideal gas held at constant volume. An ideal gas is any gas at a temperature much higher than the critical temperature, and at a pressure much lower than the critical pressure, for the gas-liquid transition. Suppose the system is cooled and heated, and the pressure is measured as a function of temperature, measured with a separate thermometer calibrated on the Celsius scale. The pressure versus temperature curve is a straight line. If this line is extrapolated to zero pressure, this occurs at -273.15 degrees Celsius. The straight line has different slopes for different gases, but they all intercept the temperature axis at the same point. Alternatively, one can take the pressure as fixed and measure the volume of the gas versus temperature. Extrapolation to zero volume also occurs at -273.15 degrees.
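As a simple numerical illustration (the data below are fabricated from the ideal gas law for the purpose of the example, not real measurements), fitting a straight line to constant-volume pressure-temperature readings and extrapolating to zero pressure recovers the special temperature:

```python
import numpy as np

# Hypothetical constant-volume gas thermometer readings: Celsius temperatures
# and pressures (kPa) generated from the ideal gas law, with a little noise.
T_celsius = np.array([0.0, 25.0, 50.0, 75.0, 100.0])
rng = np.random.default_rng(1)
P_kpa = 100.0 * (T_celsius + 273.15) / 273.15 + rng.normal(0.0, 0.05, T_celsius.size)

# Fit P = a*T + b and extrapolate the straight line to zero pressure.
a, b = np.polyfit(T_celsius, P_kpa, 1)
print(-b / a)   # close to -273.15 degrees Celsius

# Different gases give lines with different slopes, but (to the extent that they
# behave ideally) all of the lines extrapolate to the same intercept.
```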

This suggests that something special is happening at -273.15 degrees Celsius. One can define a special temperature scale where this temperature is zero. Historically, this was the beginning of the concept of absolute temperature.

However, we should be cautious about this approach. This is just an extrapolation and does not allow for the fact that ideal gases are rather special or that some very different physics might kick in below the critical temperature of helium.

2. The efficiency of Carnot cycles. 

This follows Zemansky (page 208). Consider a Carnot cycle abcda, where b to c and d to a are isothermal processes, between the same two reversible adiabatic surfaces, and involve heat transfers Q and Q_3, respectively. The absolute temperature scale T is defined by 

T/T_3 = Q/Q_3

with T_3 = 273.16 K, when the process d to a occurs at the triple point of water.

3. Integrating factor for heat

Heat is not a state property. It depends on processes. For a quasi-static process, the first law says dQ = dU + P dV. If we integrate the heat transfer along the path taken (in state space), the result depends on the path taken. On the other hand, if one integrates dQ/T, one finds that the result is independent of the path. This can then be used to define a new state variable, the entropy.
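A quick check for the special case of n moles of an ideal gas with a constant molar heat capacity C_V (a standard textbook exercise, not Zemansky's general argument) shows why dividing by T does the trick. Using dU = n C_V dT and P = nRT/V,

$$ \frac{dQ}{T} = \frac{dU + P\,dV}{T} = n C_V \frac{dT}{T} + n R \frac{dV}{V} = d\left( n C_V \ln T + n R \ln V \right), $$

which is an exact differential, so its integral between two equilibrium states is the same for every quasi-static path, whereas the integral of dQ itself is not.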

The brief discussion above misses some subtle and profound features that only became clear in the 1960s following the work of Pippard, Turner, Landsberg, and Sears, which was inspired by an axiomatic approach to thermodynamics developed by Caratheodory.

Zemansky states

"It is an extraordinary circumstance that not only does an integrating factor exist for the dQ of any system, but this integrating factor is a function of temperature only and is the same function for all systems! This universal character enables us to define an absolute temperature."

4. Applying the second law to a composite system

This treatment follows Schroeder, Thermal Physics (Section 3.1).

Schroeder defines entropy in terms of a multiplicity of states. However, I prefer to define entropy as the state function which tells us whether or not two states are accessible from one another by an adiabatic process. There are multiple possible versions of this empirical entropy state function, but let's choose one that is extensive, i.e., scales with the mass and volume of the system.

Consider an adiabatically isolated system containing an internal partition through which the conduction of heat can occur. Denote the two parts of the system by A and B. The entropy of each part can be written as a function of its internal energy U.

The total entropy of the system can be written 

S = S_A (U_A) + S_B (U_B)

If the system is in thermal equilibrium, by the second law, the entropy of the whole system must be a maximum as a function of U_A (with the total energy U_A + U_B fixed).

Now, dU_A = - dU_B as the composite system is adiabatically isolated. Hence, at the maximum of the total entropy we have

dS_A/dU_A = dS_B/dU_B

The left-hand (right-hand) side of the equation only depends on the properties of system A (B). Thus, it is an intensive state variable which determines whether the system will be in equilibrium with another system. Hence, by the zeroth law, it defines a temperature scale. Defining the temperature of each part by

1/T = dS/dU (at constant volume and particle number),

T is the absolute temperature.
