Thursday, May 7, 2026

What is condensed matter physics?

 Every day we encounter a diversity of materials: liquids, glass, ceramics, metals, crystals, magnets, plastics, semiconductors, foams, … These materials look and feel different from one another. Their physical properties vary significantly: are they soft and squishy or hard and rigid? Shiny, black, or colourful? Do they absorb heat easily? Do they conduct electricity? The distinct physical properties of different materials are central to their use in technologies around us: smartphones, alloys, semiconductor chips, computer memories, cooking pots, magnets in MRI machines, LEDs in solid state lighting, and fibre optic cables. Consequently, the science of materials attracts researchers in a wide range of disciplines: physics, chemistry, biology, mathematics, and the varieties of engineering (electrical, chemical, mechanical, material…). But why do different materials have different physical properties? 

There are more than one hundred different types of atoms, or chemical elements, in the universe. Any material is composed of a specific collection of different atoms, and they are arranged in a particular spatial pattern within the material. A central question is: 

How are the physical properties of a material related to the properties of the atoms from which the material is made?

Extract from Chapter 1, Condensed Matter Physics: A Very Short Introduction

Tuesday, April 28, 2026

A mystery about science is that humans can do it

We are surrounded by scientific knowledge and have become so used to it that we often take science for granted. We may rarely reflect on the amazing revelations of science—and so miss the opportunity to recognize the awesome nature of the universe. Things that we know, learn, and do today in science would have been inconceivable decades, let alone centuries, ago. 

Einstein said, “The most incomprehensible thing about the universe is that it is comprehensible.”  For Einstein, the success of science was a wonderful mystery. As he wrote to his friend Maurice Solovine: 

. . . I consider the comprehensibility of the world (to the extent that we are authorized to speak of such a comprehensibility) as a miracle or as an eternal mystery. Well, a priori, one should expect a chaotic world, which cannot be grasped by the mind in any way . . . the kind of order created by Newton’s theory of gravitation, for example, is wholly different.  

There are several dimensions to the comprehensibility of the universe being mysterious. Einstein highlighted the first mystery, which is that there is order in the world, as reflected in scientific laws, such as Newton’s theory of gravity, and that this order can be succinctly stated in the language of mathematics. To the best of our knowledge, these laws hold for all time and everywhere in the universe. The existence of the orderly behaviour encoded in scientific laws is necessary for science to work, which leads to the second mystery. Why have we been able to discover these laws?

A second dimension that makes science possible is the intellectual abilities of humans. Humans not only have the rational ability to do science—to reason, to understand, to communicate—but also the ability to design instruments, such as telescopes and microscopes. There seems to be a connection between the rationality of the universe and human rationality. The idea that there may be harmony between the structures of the universe and those of the human mind has a long history.  In the Renaissance, it was encapsulated in the metaphor of the “music of the spheres”. In his book, Harmonies of the World (1619), Johannes Kepler connected music and his explanations of planetary orbits. Einstein said that “Mozart’s music is so pure and beautiful that I see it as a reflection of the inner beauty of the universe.” 

Humans might have been different. Suppose that the average human intelligence was lower than it is today, and the variation of human intelligence was smaller. Then, there might have been no Galileo, Isaac Newton, Robert Boyle, Charles Darwin, Albert Einstein, Richard Feynman, Phil Anderson, or Linus Pauling. Without these brilliant figures in scientific history, scientific progress would have been slow. 

The third dimension is that human language enables scientists to formulate, represent, and communicate ideas, theories, and the results of scientific experiments. This language sometimes involves mathematics, graphs, or tables of data. Scientists can understand one another. Even though there can be misunderstandings, these can be resolved. There is a scientific culture that transcends the diversity of cultures associated with different countries, linguistic groups, and ethnicities.

The fourth dimension is the physical dexterity of humans. I am a theoretical physicist not an experimental physicist. I am “all thumbs” and not particularly good in the lab. Consequently, I have done no laboratory work since I was a Ph.D. student. In contrast, some gifted scientists have an ability to do things in a laboratory that most people cannot. Their manual dexterity allows them to fabricate precision instruments, grow pure crystals, blow exquisite glassware, see faint images, and fine-tune electronic instruments in extraordinary ways. If some humans did not have such amazing abilities, scientific progress would have been much slower—or possibly non-existent.

A fifth dimension that makes science possible is the availability and processability of materials that have been central to scientific progress. Making instruments requires specific materials, such as metals, glass, rubber, insulators, plastics, and semiconductors. If we lived in a world where some of these materials were very rare or could not be processed to the purity or malleability required for scientific instruments, we would not have supercomputers, electron microscopes, or the James Webb Space Telescope today. We might be struggling to make even the simple telescopes used by Galileo.

These five dimensions are all required for humans to be able to do science. There are several additional mysteries of science.  These can be divided into two classes: what science can do and what we can learn about the universe from science. Science allows us to know certain things about reality (epistemology) and also to understand the nature of that reality (ontology). In other words, science helps us make maps of physical reality. The terrain represented by those maps is amazing. And the fact that we can make the maps is amazing.

Friday, April 24, 2026

Scandals in Australian universities

In Australia, scandals about the management of public universities continue to be covered in the media. A recent one is the use of billions of dollars to pay consulting firms to tell management which staff to sack and which courses to cut because they are not making a profit.

Below is a recent episode of an ABC (the Australian equivalent of the BBC, or PBS in the USA) show on the topic, Chaos on Campus.


I tend to avoid engaging too much with media lamenting the state of universities, as I find it too disturbing. However, I needed to reference the show in something I had been asked to write, and so felt I should watch it. It was painful.

This is definitely a scandal. However, it got me reflecting on something that I think gets virtually no media coverage; when I mention it to people outside the university, they are pretty surprised and shocked. Anecdotal evidence from my colleagues is that attendance at lectures is now typically around 10-30 per cent of enrolment. Even before COVID-19, lectures at UQ were all recorded. Faculty have no choice. But only a few per cent of students watch the videos. This is quite demoralising for faculty.

What does this low level of student engagement mean for learning outcomes?

What is happening elsewhere? 

A quick search turned up a few links, but I did not find them that insightful.

One link is an article from The Guardian in Australia from last year. It highlights how moving things online and lowering standards is driven by financial incentives. This ties in with the scandals in the video. The values of Australian universities are money, marketing, management, and metrics.

What is your own experience with the level of disengagement? How do you think this is affecting student learning? How are you and your colleagues adapting? Are academic standards being lowered? Any suggestions on ways forward?

Wednesday, April 15, 2026

The disappointing story of superconductivity in Strontium Ruthenate

In 1994, superconductivity was discovered in strontium ruthenate (Sr2RuO4). This attracted considerable interest because it had a layered perovskite crystal structure, just like the cuprates. Furthermore, it was a stoichiometric compound and so not plagued by impurities in the way the cuprates are.

In 1998, things got more interesting when NMR Knight shift measurements were interpreted as evidence for triplet superconductivity.

Analogies were drawn with triplet Cooper pairing in superfluid 3He, which is mediated by ferromagnetic spin fluctuations.

Triplet pairing is associated with odd-parity (spatial) and time-reversal symmetry breaking. Evidence for the latter was claimed from muon spin relaxation (muSR) and the polar Kerr effect.

There are subtle questions about whether a bulk sample of a triplet superconductor exhibits spontaneous magnetisation. Leggett discussed this in an Appendix of his textbook. It turns out that the magnetisation probably exists only at the edges.

Aside. The metallic phase is of interest because (unlike the cuprates) it is a Fermi liquid. More recently, it has been argued to be a Hund's metal.

Fueled by hype about topological quantum computing, the past two decades have seen even greater interest in the material due to proposals that it may be a topological superconductor. See for example, this paper.

Now we come to the disappointment. It turns out that the original Knight shift measurements were flawed, probably due to a problem with thermometry.

Recent, careful Knight shift measurements suggest spin-singlet pairing. They were described in a Physics Today article by Alex Lopatka in 2021, An unconventional superconductor isn’t so odd after all. The article describes all the intricacies and challenges of these measurements. Stuart Brown is to be commended for persisting with this problem.

What about the Kerr effect and muSR measurements suggesting time-reversal symmetry breaking?

The polar Kerr effect involves rotation of the plane of polarisation of the electromagnetic radiation by an angle of 65 nanoradians! There is only one group in the world (at Stanford) that can detect these ultra-minute rotations.

muSR may also be problematic. It is not really known where the implanted muon sits in the crystal or what effect it has on the surrounding crystal structure. In particular, these perturbations may produce a small local magnetic field which has nothing to do with the claimed global field due to the magnetism associated with the triplet superconductivity. A recent preprint by Warren Pickett considers some of the challenges associated with interpreting these experiments as evidence for time-reversal symmetry breaking.

What is disappointing about this?
Obviously, it would be nice to have a triplet superconductor, and even more so a topological one.
However, for me, the big disappointment is that it took almost thirty years for the original NMR measurements to be checked and shown to be wrong. This may reflect several sociological problems.

Kauzmann's maxim: people will tend to believe what they want to believe rather than what the evidence before them might suggest.

The condensed matter community tends to be infatuated with exotica.

There is not enough application of Occam's razor. Luxury journals don't want simple explanations or authors to raise doubts or ambiguities.

As far as I am aware, the 1998 Nature paper on the NMR Knight shift has still not been retracted.

This post was stimulated by a helpful colloquium at UQ given recently by James Annett. He has worked on strontium ruthenate for many years and is a co-author of a relevant review article.

Update. 23 April. James Annett pointed out to me that the authors of the 1998 NMR paper published a paper in 2020 which acknowledges that their original result was incorrect.

Reduction of the 17O Knight Shift in the Superconducting State and the Heat-up Effect by NMR Pulses on Sr2RuO4

Tuesday, April 7, 2026

A multi-disciplinary perspective on mental illness

How is mental illness defined? What causes mental illness? How can a person be healed? Answering these questions will be influenced by our answer to the question of what a person is. Returning to the stratification of reality resulting from emergence, we see that there are social, psychological, neurological, physiological, and genetic dimensions to a person. To illustrate the complexity, I now take a brief tour of different university departments to get their unique perspective on mental health. Each represents a different tradition.

Biomedicine

The biomedical model for mental illness is based on the idea that brains are machines involving physical and chemical processes. Mental illness occurs when these processes do not function normally. Over the past few decades, brain imaging techniques have shown differences between the brains of healthy patients and those with mental illnesses such as depression, schizophrenia, and bipolar disorder. The best course of treatment is deemed to be drugs that target the parts of the brain or processes that are dysfunctional. Sometimes, physical interventions such as electrical shock therapies or surgeries are advocated. This biomedical model was embraced and promoted by most psychiatrists until relatively recently.  

Antidepressant drugs have been widely prescribed, and now there are many studies examining their effectiveness, side effects, and biochemical mechanisms. I mention three scientific problems. First, there is a large placebo effect. This is found in studies where two groups of patients are told they are receiving an antidepressant drug. One group receives the actual drug, and the second group receives a placebo, a pill that, unknown to them, does not contain the drug. The proportion of patients reporting a significant improvement in mental health was about 25% for those taking the actual drug, compared to 10% for those taking the placebo. In other words, it seems that believing one will get better can lead to significant improvements in mental health. 

Second, there is a large variation between patients concerning how effective the drugs are. Patients’ perceptions of change in their mental health range from getting slightly worse to no change to large improvements. Third, the biochemical mechanism of the drugs has become controversial. When the class of drugs known as Selective Serotonin Reuptake Inhibitors (SSRIs) was introduced, psychiatrists were confident that they knew how they worked. Depressed patients lacked serotonin. SSRIs blocked the reuptake of serotonin into neurons, increasing the levels of this neurotransmitter in the synaptic cleft. However, a recent meta-analysis concluded as follows. 

“The main areas of serotonin research provide no consistent evidence of there being an association between serotonin and depression, and no support for the hypothesis that depression is caused by lowered serotonin activity or concentrations. Some evidence was consistent with the possibility that long-term antidepressant use reduces serotonin concentration.”

In her book, Mind Fixers: Psychiatry's Troubled Search for the Biology of Mental Illness, Anne Harrington, a historian of science at Harvard, commented.  

“Today one is hard-pressed to find anyone knowledgeable who believes that the so-called biological revolution of the 1980’s made good on most or even any of its therapeutic and scientific promises. It is now increasingly clear to the general public that it overreached, overpromised, overdiagnosed, overmedicated and compromised its principles.”

Psychiatry is a tradition, for better or worse. Its proponents persist in their faith that the biomedical model has the best answers to mental illness, even though the evidence for this belief is ambiguous. Science can involve faith. 

The stakes are high. If a patient takes medication, they may get better, worse, or experience no change. If they don’t take medication, they risk missing out on healing.

Psychology 

Psychologists present a multitude of theories of and treatment plans for mental illnesses. The focus is not on biology but on mental processes. Some focus on the subconscious and others on thoughts we are aware of and can articulate. Some focus on current life experience and thinking patterns, whilst others delve into the past, including unresolved childhood conflict or trauma. Sigmund Freud, the founder of psychoanalysis, claimed that depression was due to aggression toward the self.  A century later, there is no empirical evidence to support his claim. Other psychologists claim depression is predominantly a loss of hope. Opinion is divided about the best method of psychotherapy, where a patient has regular sessions with a trained professional to address unhelpful thoughts, emotions, and behaviours. Names for different methods include Cognitive Behavioural Therapy (CBT), Dialectical Behaviour Therapy (DBT), Psychodynamic Therapy, Humanistic Therapy, and Acceptance and Commitment Therapy (ACT).  Central to CBT is the claim that "Irrational thinking is at the root of much emotional distress that people experience."

This diversity of perspectives and treatments highlights the level of scientific uncertainty about both causes and treatment.

I now mention three developments that are receiving increasing attention in psychology research and have a transcendent dimension.

Mindfulness practices. These involve training patients to focus their “attention on the present moment—thoughts, feelings, sensations, and environment—with an attitude of openness, curiosity, and non-judgment. It involves observing experiences directly, rather than overthinking or reacting impulsively. Key elements include breathing techniques, meditation, and bringing awareness to daily activities.”  (Google AI overview).

Forgiveness. The American Psychological Association offers a continuing education article that cites studies showing that practising forgiveness can improve mental health.  

Awe and wonder. Dacher Keltner has made extensive studies of the experience of awe and recounted them in a popular book.  In a recent article with Maria Monroy, they:  “review recent advances in the scientific study of awe, an emotion often considered ineffable and beyond measurement. Awe engages five processes—shifts in neurophysiology, a diminished focus on the self, increased prosocial relationality, greater social integration, and a heightened sense of meaning—that benefit well-being. We then apply this model to illuminate how experiences of awe that arise in nature, spirituality, music, collective movement, and psychedelics strengthen the mind and body.”

Integrated medicine

The past few decades have seen the rise of integrated medicine, which promotes the view that many diseases, both physical and mental, are best treated by a holistic approach that combines treatments from different specialists. For mental health, it proposes that treatments might include not just drug and talking therapies but also address lifestyle issues. This means considering the role of sleep, exercise, diet, stress reduction, connection to nature, and screen time. With regard to diet, this builds on recent research showing deep connections between what goes on in the gut and the brain. Perhaps this is not surprising because our brains are not disembodied. They are part of our bodies and are connected to our whole nervous system.

Sociology 

Sociologists have investigated how mental illness can arise from social isolation. Emile Durkheim (1858-1917) was one of the founders of sociology. His book, Suicide: A Study in Sociology, was published in 1897 and pioneered the scientific study of social phenomena. He proposed that suicide comes in four types, distinguished by the level of imbalance between two social forces: social integration and moral regulation. Based on a detailed analysis of statistical data, Durkheim concluded that suicide was more likely among men than women, among single people than married people, among people without children than those with children, among Protestants than Catholics and Jews, among soldiers than civilians, and in times of peace than in times of war.

Since Durkheim, many more sociological studies suggest that social isolation and a lack of meaningful relationships can be a major contributing factor to depression. Some of this research has been reviewed in a popular book, Lost Connections: Uncovering the Real Causes of Depression and the Unexpected Solutions by Johann Hari.  He was motivated by his own experience of being prescribed and taking antidepressants for many years without consideration of how his social isolation might be a contributing factor.

This short survey of the perspectives on mental illness from a range of scientific disciplines illustrates the complexity of the issue, the multifaceted nature of reality, and scientific uncertainty.

Naturally, this survey of different scientific perspectives raises questions about my own experience. Why did the antidepressant drugs seem to work sometimes and not others? Did I experience a placebo effect? Why was mindfulness helpful to me two decades ago but not more recently? What was the role of stress, childhood experiences, social isolation, personal pride, or introversion in creating my mental illness? I simply don’t know the answers to these questions and don’t think I ever will. What does matter is that, somehow at different times, I did experience degrees of healing that allowed me to function, albeit sometimes at diminished levels. Regardless of which traditions you choose to guide your journey and whatever choices you make, trust (faith) is involved.

Saturday, April 4, 2026

My mental health journey

I have struggled with my mental health for most of my adult life. Here I tell my own story to put a personal face on the issue, and because when I have told it in the past, many people have found it helpful to know they are not alone in their mental health struggles. 

Any discussion of mental illness and healing involves assumptions about what we believe a human being is. The complexities illustrate the multifaceted character of reality. In the next post, I will examine the perspective of different scientific disciplines, including psychiatry, neuroscience, psychology, and sociology. Comparing these perspectives suggests the limitations of reductionism and that we cannot escape philosophical questions. Given the scientific uncertainty, any decisions about the treatment of mental illness involve traditions, authority, trust, and risk. Unfortunately, the personal stakes can be high. The issues are not just abstract philosophical ones.

Disclaimer. I am not a medical professional. If you are struggling with your mental health, I encourage you to consult a professional. Please don’t draw any conclusions about your own situation from my experience. Everyone is different. That is some of my point in what follows. Specifically, you should not decide to stop taking medication without professional consultation.

When I was 23, I started to have significant mental health problems. I knew little about mental health and forty years ago there was limited public awareness about the issues. The first 22 years of my life were spent living in the same house in Australia with a stable family life, a predictable routine, and little stress. I then moved to the USA and encountered a completely different routine and environment as I began a Ph.D. There were many new opportunities and challenges: social, educational, and spiritual. I lived in a small single room in a college (dormitory) for graduate students, most of whom were international students like me. At every breakfast and dinner, I had to interact with strangers, mostly from other cultures. In hindsight, I tried to be an extrovert. After only three months, I burnt myself out. I was so exhausted that I started sleeping twelve hours a day and took a one-hour nap in the afternoon.

I could not continue with my Ph.D. even though the workload was relatively light and flexible at that point. I took one semester off. For the next four years, I was fragile, having to carefully limit my social interactions and work hours. Every few months, I would have a black period of one to two weeks, where my brain would not quite function, and so I could not do any physics reading or research. I would just go for long bike rides. Somehow, I survived by carefully monitoring my energy levels and ruthlessly limiting my activities.

Although this was an emotionally difficult and confusing time, I did not exhibit symptoms of depression such as sadness, despair, loss of hope, suicidal ideation, or extreme anxiety. However, after four years of struggle, I read a newspaper article about an episode of The Oprah Winfrey Show featuring depressed people. I became aware that I might be experiencing clinical depression. I read a book on the subject by a Christian psychiatrist and went to see a psychiatrist at the university medical centre for students. She recommended that I go on an antidepressant drug, Imipramine. After a month or so, the change was amazing! I felt normal again. I had energy and clarity of thought that I had missed for four years. The black periods did not come back. I became convinced that I simply had a chemical imbalance in my brain and that the drugs restored the balance to the appropriate level. Back then, scientists were quite confident they knew how the drugs worked. Given that it was “just” a biochemical issue, I did not feel a need to address any psychological, spiritual, emotional, or lifestyle issues that might play a role in the depression.

To my relief, I was able to finish my Ph.D. Life continued positively for several years. One time, things did not seem to be going as well, and my psychiatrist asked me if by any chance I had switched to taking the generic brand medication. I had, and so I went back to the original brand and everything returned to normal. After being married for a few years, I tried going off the medication, and things went well. I put this success down to the benefits of married life and not living in group houses anymore.

At the end of the 1990s, I went through a very stressful time due to uncertainty in my academic employment. I got the flu and it took me weeks to recover. I decided to go back on the antidepressants. It did not have the desired positive effect. My anxiety went through the roof, so I discontinued the drugs. Somehow, I clawed my way back to normality and had a few good years.

In 2003, I went through a very stressful time trying to decide whether to accept an exciting job offer in England and dealing with a local conflict among church leaders. I had trouble sleeping and could not control my anxious thoughts. I went on the antidepressant Zoloft. Unlike previous episodes, I began to see a psychologist. She helped me to deconstruct some of my anxious thoughts and to question their rationality and connection to reality. She also introduced me to some mindfulness exercises promoted by Jon Kabat-Zinn. I found these incredibly helpful. I did them once or twice a day for several years. They helped me slow down my racing mind and be more aware of my body and how it signalled stress. Over time, I returned to a relatively stable equilibrium, and I gradually tapered off the drugs, sessions with the psychologist, and the mindfulness exercises.

In the second half of 2016, my mental health struggles returned. I was doing too much international travel, including extended visits to South Asia. For a sensitive introverted Westerner who is easily overstimulated and enjoys peace and quiet, predictability, and smooth routines, South Asia can be overwhelming. Back in Australia in 2017, things did not improve, and so I went on the antidepressant Sertraline and went back to my psychologist. Returning to the mindfulness exercises, I did not find them helpful anymore. The psychologist said that was fine. Generally, things improved, probably partly because I decided to retire from the university and avoid international travel. Nevertheless, there were times during the pandemic, which started in 2020, that were difficult, as for many people.

I came to accept that I might be on antidepressants for the rest of my life. However, by 2024, my mental health was quite good, and my doctor made me aware that many medical professionals considered that being on antidepressants for long periods should be avoided because of long-term side effects. I read several articles about this in The Economist that were helpful. My doctor and I agreed that I would slowly reduce the dosage over a period of several months and carefully monitor the situation. Everything went smoothly until around the time I reached zero dosage. I would have periods of uncontrollable sobbing. I might read a moving newspaper article, or a friend would share something personal, and I would start sobbing. I learnt that this is one of many possible side effects of drug withdrawal. Fortunately, I did not have any of the other symptoms, some of which can be tragic. We decided to persevere, and after a few months, the sobbing went away and my mental health remained stable. 

Currently, my mental health is the best it has been for a decade. It is hard to know what the main contributing factors are. Some may include being retired, minimising stress where possible, pacing myself, saying no often, little international travel, enjoyable family relationships, having a pet dog, and cultivating healthy routines of exercise, diet, sleep, screen time, connections with nature, social interaction, and spiritual disciplines.

My story illustrates the general problem of interpreting our experiences. My recollections and the narrative I have given here reflect what I now consider significant. However, at different times, I might have told the story differently or interpreted it differently. I have also chosen not to include anecdotes about how I felt pressure from well-meaning people (professionals, family, friends, or acquaintances) to pursue or not pursue specific treatment options.

My experience illustrates the complexity of mental health. Deciding ways forward involved the puzzle of how to integrate the four dimensions: experience, reason, tradition, and transcendence. For one individual at a specific time in life, it is very hard to know with certainty what causes mental illness and what the best course of treatment is. Evidence of this uncertainty is seen in a survey of the perspectives of different scientific disciplines in the next blog post.

Sunday, March 15, 2026

Tony Leggett (1938-2026): condensed matter theorist

Tony Leggett died last week. The New York Times has a nice obituary. One measure of his influence on me is that more than 20 posts on this blog feature his work. He received the Nobel Prize in 2003 for developing the theory of superfluid 3He.

In 1972, a graduate student at Cornell, Doug Osheroff, discovered a phase transition at a temperature of around 2 mK in liquid 3He. By the 1960s, liquid 3He had been established to be a Fermi liquid that was beautifully described by Landau's theory. Osheroff and his advisors, David Lee and Robert Richardson, incorrectly identified the phase transition as arising from antiferromagnetic order in the solid phase of 3He.

However, Leggett argued that it was actually due to superfluidity, and that there were two distinct superfluid phases, A and B, with different order parameters. 

Lee, Osheroff, and Richardson shared the Nobel Prize in 1996 for their discovery.

Leggett was primed to make rapid progress, as in 1965 and 1966 he had written three papers about superfluidity in liquid 3He, albeit assuming s-wave pairing. Indeed, by 1975 he had written a comprehensive review article on the two superfluid phases.

For many reasons superfluid 3He was significant for the broader field of condensed matter. BCS showed that in elemental metals, superconductivity resulted from Cooper pairing of electrons due to an attractive electron-phonon interaction.  The order parameter (Cooper pair wave function) had s-wave spin singlet symmetry.

In contrast, superfluid 3He showed that Cooper pairing could also occur in a neutral Fermi liquid and have non-trivial symmetry, i.e., p-wave and spin-triplet. The order parameter is a 3x3 complex matrix, with one spin and one orbital index, and so has 18 real components, compared to only 2 for elemental superconductors. There is spontaneous breaking of gauge symmetry and of spin and orbital rotational symmetries. 

The Cooper pairing in superfluid 3He is not due to a fermion-phonon interaction but due to spin fluctuations.

The fact that Cooper pairing was possible with different symmetries and mechanisms than in elemental superconductors was significant: it meant it was reasonable to consider this possibility for superfluidity in neutron stars, and for superconductivity in the cuprates, strontium ruthenate, heavy fermion compounds, and organic charge transfer salts.

There is rich physics associated with the symmetry breaking: 18 collective modes of the order parameter, textures such as boojums, and exotic vortex cores. For vortices, there is also some (controversial) connection to cosmic strings, including experiments that test the Kibble-Zurek mechanism and the electroweak phase transition in the early universe.

Aside: My Ph.D. thesis was on the theory of the non-linear interaction of zero sound with the order parameter collective modes in the B-phase.

Leggett's development of the theory of superfluid 3He was amazing and certainly worthy of a Nobel. However, I think he made an even greater contribution to physics through his work on the theory of macroscopic quantum effects in Josephson junctions. This work was the basis for the experimental work that was honoured with the Nobel Prize last year.

With his student Amir Caldeira, Leggett performed concrete calculations of the effects of dissipation on quantum tunnelling in Josephson junctions.

[The NY Times obituary mistakenly says this work began after Leggett moved to Urbana. It was done while he was still at Sussex].

The formalism they developed involving the spectral density is the basis for most theoretical treatments of decoherence in superconducting qubits. A relevant toy model is the spin-boson model, and in 1987 Leggett published a seminal (but rather dense) review on the subject.

Leggett also aided our understanding of cuprate superconductors. He contributed to the theoretical ideas that were the basis of the phase-sensitive measurements that established the d-wave nature of the order parameter. He also showed that experiments were inconsistent with Anderson's interlayer tunnelling theory.

I recommend reading Leggett's own scientific autobiography, Matchmaking Between Condensed Matter and Quantum Foundations, and Other Stories: My Six Decades in Physics, and his book, The Problems of Physics.

Thursday, March 5, 2026

A forgotten physicist: Amelia Frank (1906-1937)

In honour of International Women's Day, I bring to your attention a fascinating recent piece in The Conversation, Who was Amelia Frank? The life of a forgotten physicist, by Peter Jacobson and Beck Wise.

Amelia Frank was a PhD student of John Van Vleck. Her work was cited by him in his 1977 Nobel Lecture. In the early days of quantum theory, she explained deviations of the magnetic moments of the rare earth ions Sm3+ and Eu3+ from Hund's rule predictions. Tragically, she died from cancer when she was only 31.

Tuesday, February 24, 2026

Information theoretic measures for emergence and causality

The relationship between emergence and causation is contentious, with a long history. Most discussions are qualitative. Presented with a new system, how does one identify the microscopic and macroscopic scales that may be most useful for understanding and describing the system? Can Judea Pearl’s seminal ideas about causality be implemented practically for understanding emergence?

Broadly speaking, a weakness of discussions of emergence and causality is that it is hard to define these concepts in a rigorous and quantitative manner that makes them amenable to empirical testing, with respect to theoretical models and to experimental data. 

Fortunately, in the past decade, there have been some specific proposals to address this issue, mostly using information theory. A helpful recent review is by Yuan et al. 

“Two primary challenges take precedence in understanding emergence from a causal perspective. The first is establishing a quantitative definition of emergence, whereas the second involves identifying emergent behaviors or phenomena through data analysis.

To address the first challenge, two prominent quantitative theories of emergence have emerged in the past decade. The first is Erik Hoel et al.’s theory of causal emergence [19] whereas the second is Fernando E. Rosas et al.’s theory of emergence based on partial information decomposition [24].

Hoel et al.’s theory of causal emergence specifically addresses complex systems that are modeled using Markov chains. It employs the concept of effective information (EI) to quantify the extent of causal influence within Markov chains and enables comparisons of EI values across different scales [19,25]. Causal emergence is defined by the difference in the EI values between the macro-level and micro-level."

One perspective on causal emergence is that it occurs when the dynamics of a system at the macro-level is described more efficiently by macro-variables than by the dynamics of variables from the micro-level.
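To make this concrete, below is a minimal sketch of the EI calculation for a toy Markov chain, in the spirit of the examples used by Hoel and collaborators. The transition matrices are my own illustrative choices: the micro dynamics are partly noisy, while the coarse-grained macro dynamics are deterministic, so the macro description has higher EI.

```python
import numpy as np

def effective_information(tpm):
    # Hoel-style effective information of a Markov chain:
    # I(X_t; X_{t+1}) when X_t is forced to the uniform
    # (maximum-entropy) intervention distribution.
    n = tpm.shape[0]
    p_next = tpm.mean(axis=0)  # distribution of X_{t+1} under uniform X_t
    ei = 0.0
    for row in tpm:
        for p, q in zip(row, p_next):
            if p > 0:
                ei += (1.0 / n) * p * np.log2(p / q)
    return ei

# Micro level: states 0-2 hop uniformly among themselves (noisy);
# state 3 maps to itself (deterministic).
micro = np.array([
    [1/3, 1/3, 1/3, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

# Macro level: coarse-grain {0,1,2} -> A and {3} -> B;
# the macro dynamics are then fully deterministic.
macro = np.array([
    [1.0, 0.0],
    [0.0, 1.0],
])

print(effective_information(micro))  # ~0.81 bits
print(effective_information(macro))  # 1.0 bit: causal emergence of ~0.19 bits
```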

Klein et al. used Hoel’s information-theoretic measures of causal emergence to analyse protein interaction networks (interactomes) in over 1800 species, containing more than eight million protein–protein interactions, across different scales. They showed the emergence of ‘macroscales’ that are associated with lower noise and uncertainty. The nodes in the macroscale description of the network are more resilient than those in less coarse-grained descriptions. Greater causal emergence (i.e., a stronger macroscale description) was generally seen in multicellular organisms compared to single-cell organisms. The authors quantified causal emergence in terms of mutual information (between large and small scales) and effective information (a measure of the certainty in the connectivity of a network). Philip Ball (2023) (pages 218-220) gives an account of this work in terms of the emergence of multicellularity in biological evolution. He introduced the term causal spreading (pages 225-7), arguing that over the history of evolution the locus of causation has changed.

Yuan et al. continue

"However, in Hoel’s theory of causal emergence, it is essential to establish a coarse-graining strategy beforehand. Alternatively, the strategy can be derived by maximizing the effective information (EI) [19]. However, this task becomes challenging for large-scale systems due to the computational complexity involved. To address these problems, Rosas et al. introduced a new quantitative definition of causal emergence [24] that does not depend on coarse-graining methods, drawing from partial information decomposition (PID)-related theory. PID is an approach developed by Williams et al., which seeks to decompose the mutual information between a target and source variables into non-overlapping information atoms: unique, redundant, and synergistic information [29]…"

The Figure below is taken from Rosas et al. X_t^j (j = 1, …, n) are microscopic variables that define a Markov chain. V_t is a macroscopic variable that is completely determined by the microscopic variables.

“Diagram of causally emergent relationships. Causally emergent features have predictive power beyond individual components. Downward causation takes place when that predictive power refers to individual elements; causal decoupling when it refers to itself or other high-order features.”

Rosas et al. applied the method to specific systems, including Conway’s Game of Life, Reynolds’ flocking model, and neural activity as measured by electrocorticography. More recently, it was used to describe emergence in computer science, including the identification of modular structures. Calculations were performed for specific examples, including Ehrenfest’s urn model for diffusion, the Ising model with Glauber dynamics, and a Hopfield neural network model for associative memory.

Yuan et al. also state the following:

"The second challenge pertains to the identification of emergence from data. In an effort to address this issue, Rosas et al. derived a numerical method [24]. However, it is important to acknowledge that this method offers only a sufficient condition for emergence and is an approximate approach. Another limitation is that a coarse-grained macro-state variable should be given beforehand to apply this method."

Sas et al. recently stated

“Empirical applications of this framework to study emergence … including the study of gene regulatory networks [22], the dynamics of the human brain [23], the internal dynamics of reservoir computing [24], and the formation of useful internal representations in machine learning [25].”

Yuan et al. also discuss two significant connections between causal emergence and machine learning. First, machine learning can be used to improve calculations of causal emergence. Second, causal emergence measures can be used to better understand how machine learning works and improve it.

The work described above built on earlier work by Crutchfield, who claimed that the identification of emergence and hierarchies could be made operational, stating that “different scales are delineated by a succession of divergences in statistical complexity at lower levels.” More recently, Rupe and Crutchfield have reported progress towards identifying emergent self-organisation in a system.

Although this work on quantitative measures of emergence based on information theory represents significant progress, there are many open problems. Examples include the extension to non-Markovian systems and the development of computationally feasible methods for large systems. The latter is particularly important in physical systems where spontaneous symmetry breaking occurs, as this only happens in the thermodynamic limit of an infinite system.

There is an unrecognised similarity between the work described above and techniques recently developed to characterise phase transitions in statistical mechanics models such as the Ising model and classical dimer models. In this approach, coarse-graining (CG) is optimised by maximising the Real-Space Mutual Information (RSMI) between a spatial block and its distant environment. 

In general, maximising mutual information is notoriously hard but can be done using state-of-the-art machine learning algorithms. Gokmen et al. have developed an algorithm that they claim “can, unsupervised, construct order parameters, locate phase transitions, and identify spatial correlations and symmetries for complex and large-dimensional real-space data.” Furthermore, the optimal CG explicitly identifies the scaling operators associated with the critical point. 
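The real algorithm uses neural-network estimators of mutual information on two-dimensional models, but the underlying idea can be illustrated with a deliberately stripped-down toy: for an exactly samplable one-dimensional Ising chain, compare candidate one-bit coarse-grainings of a block of spins by how much mutual information they retain about a distant spin. The chain, block, and candidate rules below are my own constructions for illustration, not those of Gokmen et al.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_chains(n, K, m):
    # Draw m exact samples of an n-spin 1D Ising chain with coupling K,
    # using the fact that neighbouring spins agree with probability
    # exp(K) / (2 cosh K), independently for each bond.
    p_same = np.exp(K) / (2.0 * np.cosh(K))
    s = np.empty((m, n), dtype=np.int8)
    s[:, 0] = rng.choice(np.array([-1, 1], dtype=np.int8), size=m)
    same = rng.random((m, n - 1)) < p_same
    for i in range(1, n):
        s[:, i] = np.where(same[:, i - 1], s[:, i - 1], -s[:, i - 1])
    return s

def mutual_info_binary(a, b):
    # Plug-in estimate (in bits) of I(a; b) for two arrays of +/-1 variables.
    mi = 0.0
    for va in (-1, 1):
        for vb in (-1, 1):
            p_ab = np.mean((a == va) & (b == vb))
            p_a, p_b = np.mean(a == va), np.mean(b == vb)
            if p_ab > 0:
                mi += p_ab * np.log2(p_ab / (p_a * p_b))
    return mi

spins = sample_chains(n=16, K=1.0, m=200_000)
block, env = spins[:, 0:3], spins[:, 5]  # 3-spin block; a more distant spin

# Two candidate one-bit coarse-grainings of the block:
boundary = block[:, 2]                            # spin nearest the environment
parity = block[:, 0] * block[:, 1] * block[:, 2]  # parity of the block

print(mutual_info_binary(boundary, env))  # larger: the better coarse-graining
print(mutual_info_binary(parity, env))    # smaller: discards relevant information
```

Maximising this mutual information over candidate coarse-graining rules picks out the variable that best retains long-distance information, which is the essence of the RSMI approach.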

The classical dimer model provides a stringent test as “the relevant low-energy degrees of freedom are profoundly different from the microscopic building blocks of the theory and change qualitatively throughout the phase diagram.” In other words, the emergent entities (quasiparticles such as vortices associated with the height field, which is described by a sine-Gordon field theory) are different from the dimers.

It is encouraging to see that two different scientific communities have developed similar ideas to address this challenging problem of making discussions about emergence and causality more concrete and quantitative.

Friday, February 13, 2026

A golden age for precision observational cosmology

Yin-Zhe Ma gave a nice physics colloquium at UQ last week, A Golden Age for Cosmology.

I learnt a lot. Too often, colloquia are too specialised and technical for a general audience.

There are three pillars of experimental evidence for the Big Bang model: Hubble expansion of the universe, relative abundance of light nuclei due to nucleosynthesis in the first few minutes, and the Cosmic Microwave Background.

Ma showed Hubble's original data from 1929 for redshift versus distance of galaxies. There was a lot of noise in the data. Nevertheless, Hubble was right.

Big Bang Nucleosynthesis

This was first proposed in 1948 by Ralph Alpher and George Gamow. (Hans Bethe was an honorary author of the paper as a joke so that the author list would sound like the first three letters of the Greek alphabet. Gamow had a mischievous sense of humour.)

The chain of nuclear reactions that produced the lightest elements and isotopes is shown below.

Because the binding energy of 4He is so large, it could have only been formed at an extremely high temperature of about 10^10 K. (Or is the issue activation energy for formation, not binding energy?)
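A rough conversion makes the temperature scale plausible: k_B T ≈ 1 MeV corresponds to T ≈ (1 MeV)/k_B ≈ 1.2 x 10^10 K, so binding energies of order an MeV per nucleon translate into temperatures of order 10^10 K.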

Detailed calculations using parameters from terrestrial nuclear physics give the observed relative abundances of the elements. In particular, the universe is 74% hydrogen and 24% helium.

The astrophysicist's periodic table showing the origin of the different chemical elements is rather cute.


Giving credit to George Gamow

Gamow, who died in 1968, made impressive contributions to theoretical physics. His Wikipedia page is worth reading. He claimed that he predicted the Cosmic Microwave Background in the late 1940s and did not receive sufficient credit when it was discovered in 1964. The 2019 Nobel Prize citation for James Peebles also minimises Gamow's early contributions. Whether this is fair or not can be debated.

Anisotropies in the Cosmic Microwave Background

The past two decades have seen amazing advances in precision measurements of these anisotropies. The radiation is isotropic to one part in 25000, with a temperature of 2.72548±0.00057 K.

Measurements of the anisotropies have allowed precise determinations of key cosmological parameters by fitting theoretical predictions to the data shown below from the 2018 Planck collaboration. Different peaks have different physical origins. 

The level of precision in the data is truly amazing.


The solid line is a fit to theory involving six parameters. What would Enrico Fermi say? This is not "making the tail of an elephant wiggle" because the fit parameters are all consistent with independent determinations of the cosmological parameters from the Hubble expansion and the relative abundance of the light elements.

Aside. The 2018 paper from the Planck collaboration has been cited 19,000 times, but has almost 200 authors. How does one use that information in evaluating individual authors in job and promotion applications? How are they to be compared to a single-author paper with 100 citations or a five-author paper with 500 citations?

Is this a golden age for cosmology? 

Yes, in terms of precision measurements. 

On the theoretical side, the golden age may have passed. It is not clear that new concepts or theories will emerge. The outstanding questions are:

What is the nature and origin of dark matter? of dark energy? 

Why is the cosmological constant so small? Why is it so fine-tuned?

Can the validity of inflation be pinned down?

Does quantum gravity matter?

A lot of smart people have spent decades on these problems and made little progress. That fact does not preclude the possibility of a theoretical breakthrough. However, it does not make me optimistic. I hope I am wrong.

Thursday, February 5, 2026

The legacy of 40 years of cuprate superconductivity

In February 1986, Bednorz and Müller made a stunning discovery: superconductivity at a temperature of 35 K in a doped copper oxide (cuprate). Arguably, this discovery changed condensed matter physics. In April 1986, they submitted their results to Z. Phys. B. Only nineteen months later, they were awarded the Nobel Prize in Physics, the shortest time ever between a discovery and the award. A nice and short review of the history is here.

One measure of my estimate of the influence of this discovery is that it received about 5 pages of coverage in my Condensed Matter Physics: A Very Short Introduction. (See Chapter 5, Adventures in Flatland).

How things have developed over the past forty years, for better and worse, may be representative of how science advances: discovery by serendipity, hype about applications, unexpected secondary benefits, foundational questions, new concepts, unification, and incremental advances.

Hype about technological applications

On March 20, 1987, The New York Times had a front-page article, DISCOVERIES BRING A 'WOODSTOCK' FOR PHYSICS, by James Gleick. This followed the 1987 APS March meeting. It began 

"Physicists from three continents converged on the New York Hilton for a hastily scheduled special conference on a string of discoveries that seem certain to produce a rapid cascade of commercial applications in electricity, magnetism and electronics.There are many things we know and understand that we did not when they were first discovered."

This promise has largely gone unfulfilled. There are a few niche applications, but cuprates are not used in electricity distribution or even in the superconducting magnets in hospital MRI machines, which are probably the main commercial application of superconductors. One of the significant obstacles is that it is hard to make wires from these materials, as they are ceramics. This is an example of the common gap between research laboratory science and commercially viable technology.

After 40 years, do we have a successful theory?

It depends on who you ask. But I would say there is a lot we do understand.

We have a phenomenological theory for all the macroscopic phenomena associated with the superconducting state: Ginzburg-Landau theory!

Properties of the superconducting state are well-described by a BCS wavefunction with a d-wave order parameter and the associated Bogoliubov quasiparticles. [This is somewhat puzzling, as in the metallic state quasi-particles are not well defined].

Although not everyone agrees, I think it is fair to say that the essential physics is in a one-band Hubbard model (written below), and the key physics is:

strong electronic correlations,

a doped antiferromagnetic Mott insulator,

d-wave pairing that is "mediated" or caused by some mixture or variant of antiferromagnetic spin fluctuations or RVB spin singlets, ...
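For reference, the one-band Hubbard model is defined by the Hamiltonian

H = -t Σ_<ij>,σ (c†_iσ c_jσ + h.c.) + U Σ_i n_i↑ n_i↓

where the sum is over nearest-neighbour pairs of sites on the square lattice, t is the hopping amplitude, and U is the on-site Coulomb repulsion; for the cuprates, U is roughly comparable to the bandwidth.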

We certainly don't understand the cuprates at the same level as elemental superconductors. But we do understand the essential physics.

What is harder to describe and understand are the states adjacent to the superconducting state in the phase diagram: the pseudogap state and the strange metal.


Strongly correlated electron materials became a large, vibrant and unified field

Before 1986, there were small, disconnected communities intermittently interested in transition metal oxides, rare earths, Kondo impurities, Mott metal-insulator transitions, organic superconductors, heavy fermions, and quantum antiferromagnets.

The discovery of the cuprates brought together these communities as they found common interests, challenges, questions, concepts, and techniques.

The discovery of superconductivity in strontium ruthenate, alkali fullerides, iron pnictides and chalcogenides, twisted bilayer graphene and more cuprates, organic charge-transfer salts, and heavy fermions has shown how rich these systems are. The challenge is to understand the similarities and differences between these chemically and structurally diverse systems. In many of them, superconductivity is proximate to a Mott insulating state.

The unity and excitement were probably stimulated and enhanced by the activities and ideas of high-profile theorists such as Anderson, Schrieffer, Scalapino, Pines, Rice, and Varma. On the other hand, their acrimonious disagreements probably did not help.

Secondary theoretical benefits

The things I list below were not new ideas when the cuprates were discovered. However, interest in the cuprates led them to become major research themes.

Importance of phase diagrams, including as a function of interaction parameters in toy models

Highlighting the limitations of electronic structure methods based on Density Functional Theory with approximate Exchange-Correlation functionals (i.e., practically all computational methods). In the presence of strong correlations, DFT methods have spectacular failures, such as predicting a metallic state instead of a Mott insulator.

Low dimensionality leads to qualitatively different behaviour, including the possibility of new types of order and quasiparticles. This is most dramatic in one dimension, where one has Luttinger liquids and spin-charge separation.

Spin liquids. Landau was wrong. Spontaneous symmetry breaking does not always occur in antiferromagnets.

Non-Fermi liquids. Landau was wrong. Not all metals are Fermi liquids.

Quantum criticality. Although this is a robust concept for certain toy models, whether it is relevant to the cuprates remains contentious.

Systematic improvements in approximation schemes and numerical techniques - exact diagonalisation, DMRG, DMFT, quantum Monte Carlo,...

Emergence. Chemical complexity and strong interactions can lead to new states of matter.

Secondary experimental benefits

Better probes. The desire to characterise the cuprates helped drive significant improvements in the resolution of ARPES (Angle-Resolved PhotoEmission Spectroscopy), STM (Scanning Tunnelling Microscopy), and inelastic neutron scattering. These advances have borne fruit in the study of a wide range of other materials beyond the cuprates.

Growth of single crystals. The early days of the cuprates produced a lot of junk experimental results because of the poor quality of the samples produced by "shake and bake". However, the involvement of solid-state chemists has improved things. The techniques have also led to the production of single crystals for a wide range of strongly correlated materials.

Why is there so little research on cuprates today?

Today, there is little research directly on cuprates, both theoretically and experimentally. It is hard to get funding to work on them, even though there is a lot we still don't understand well.

This is because of the problem of fashion in science. The low-hanging fruit has been picked. There is a continuous stream of new materials being discovered with exotic properties, the latest being twisted bilayer van der Waals compounds.

Monday, January 26, 2026

What is absolute temperature?

The concept and reality of absolute temperature is amazing. It tells us something fundamental about the universe, including physical limits as to what is possible. The existence of absolute temperature is intimately connected with the existence of entropy as a thermodynamic state function. It also hints at the underlying quantum nature of reality.

Aside: Unfortunately, the Wikipedia page on this topic is mediocre and garbled. For example, it continues the myth that temperature is related to kinetic energy.

The zeroth law of thermodynamics allows the definition of empirical temperature. It is an equilibrium state variable that indicates whether a thermodynamic system will remain in the same state upon being brought into thermal contact with another system. Thermometers are systems with a single state variable.

Absolute temperature is a specific temperature scale that is central to thermodynamics and statistical mechanics. 

There are several equivalent definitions of absolute temperature. They start at different points. Except for the first one, the others show that the existence of absolute temperature is intimately connected to the second law and to entropy being an extensive quantity.

This is nicely discussed by Zemansky in chapter 8 of his text Heat and Thermodynamics, Fifth Edition (1968). [This was the text for my second year undergrad thermo course at ANU in 1980. At the time, I did not fully appreciate how profound some of it is. I just enjoyed all the multivariable calculus.] 

1. Ideal gas thermometers.

Consider a fixed mass of ideal gas whose volume is fixed. A gas behaves ideally at temperatures much higher than the critical temperature, and pressures much lower than the critical pressure, of its gas-liquid transition. Suppose the system is cooled and heated, and the pressure is measured as a function of the temperature measured by a separate thermometer calibrated on the Celsius scale. The pressure versus temperature curve is a straight line. If this line is extrapolated to zero pressure, this occurs at -273.15 degrees Celsius. The straight line has different slopes for different gases, but they all intercept the x-axis at the same point. Alternatively, one can take the pressure as fixed and measure the volume of the gas versus temperature. Extrapolation to zero volume also occurs at -273.15 degrees. 
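A toy numerical version of the extrapolation, with idealised noise-free "data" generated from the ideal gas law purely for illustration:

```python
import numpy as np

# Synthetic constant-volume gas-thermometer "data": pressures follow
# P = P0 * (1 + t / 273.15) for Celsius temperature t, as for an ideal gas.
# Real measurements would carry noise and small non-ideal corrections.
t_celsius = np.array([0.0, 25.0, 50.0, 75.0, 100.0])
p_kpa = 100.0 * (1.0 + t_celsius / 273.15)

# Fit a straight line P = a * t + b and extrapolate to zero pressure.
a, b = np.polyfit(t_celsius, p_kpa, 1)
print(f"Zero-pressure intercept: {-b / a:.2f} degrees Celsius")  # ~ -273.15
```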

This suggests that something special is happening at -273.15 degrees Celsius. One can define a special temperature scale where this temperature is zero. Historically, this was the beginning of the concept of absolute temperature.
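To make the extrapolation concrete, here is a minimal Python sketch. The "data" are not measurements: the pressures are generated from the ideal gas law itself, so the point is simply that a linear fit of pressure against Celsius temperature recovers an intercept of -273.15 degrees, whatever the (arbitrarily chosen) volume or amount of gas.

import numpy as np

R = 8.314                  # gas constant, J/(mol K)
n, V = 1.0, 0.025          # amount (mol) and fixed volume (m^3); arbitrary choices
T_C = np.array([0.0, 25.0, 50.0, 75.0, 100.0])   # temperatures, degrees Celsius
P = n * R * (T_C + 273.15) / V                   # pressures from the ideal gas law

slope, intercept = np.polyfit(T_C, P, 1)         # fit P = slope*T_C + intercept
print(-intercept / slope)                        # -273.15: the zero-pressure intercept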

However, we should be cautious about this approach. This is just an extrapolation and does not allow for the fact that ideal gases are rather special or that some very different physics might kick in below the critical temperature of helium.

2. The efficiency of Carnot cycles. 

This follows Zemansky (page 208). Consider a Carnot cycle abcda, in which b to c and d to a are isothermal processes between the same two reversible adiabatic surfaces, involving heat transfers Q and Q_3, respectively. The absolute temperature scale T is defined by

T/T_3 = Q/Q_3

with T_3 = 273.16 K, when the isothermal process d to a occurs at the triple point of water.
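As a sketch of how this definition would be used in practice (the heat values below are made up purely for illustration): measure the two heats, and the unknown temperature follows from the ratio.

T_3 = 273.16          # K: the triple point of water, which fixes the scale
Q_3 = 100.0           # J: heat exchanged on the triple-point isotherm (hypothetical)
Q = 135.0             # J: heat exchanged on the other isotherm (hypothetical)

print(T_3 * Q / Q_3)  # 368.77 K, from the defining relation T/T_3 = Q/Q_3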

3. Integrating factor for heat

Heat is not a state function; it depends on the process. For a quasi-static process, the first law gives dQ = dU + P dV. If we integrate the heat transfer dQ along the path taken (in state space), the result depends on the path taken. On the other hand, if one integrates dQ/T, the result is independent of the path. This can then be used to define a new state function, the entropy.
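A minimal numerical check of this, for one mole of monatomic ideal gas (a sketch only, using dQ = C_V dT + (RT/V) dV for quasi-static processes): integrate dQ and dQ/T along two different paths between the same pair of states.

import numpy as np

R, Cv = 8.314, 1.5 * 8.314   # gas constant and heat capacity, one mole, monatomic
T1, V1 = 300.0, 0.01         # initial state (K, m^3); arbitrary choices
T2, V2 = 400.0, 0.03         # final state

# Path I: constant-volume heating T1 -> T2, then isothermal expansion at T2.
Q_I = Cv * (T2 - T1) + R * T2 * np.log(V2 / V1)
S_I = Cv * np.log(T2 / T1) + R * np.log(V2 / V1)

# Path II: isothermal expansion at T1, then constant-volume heating.
Q_II = R * T1 * np.log(V2 / V1) + Cv * (T2 - T1)
S_II = R * np.log(V2 / V1) + Cv * np.log(T2 / T1)

print(Q_I, Q_II)   # different: the heat transferred depends on the path
print(S_I, S_II)   # equal: the integral of dQ/T defines a state function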

The brief discussion above misses some subtle and profound features that only became clear in the 1960s following the work of Pippard, Turner, Landsberg, and Sears, which was inspired by an axiomatic approach to thermodynamics developed by Caratheodory.

Zemansky states:

It is an extraordinary circumstance that not only does an integrating factor exist for the dQ of any system, but this integrating factor is a function of temperature only and is the same function for all systems! This universal character enables us to define an absolute temperature.

4. Applying the second law to a composite system

This treatment follows Schroeder, Thermal Physics (Section 3.1).

Schroeder defines entropy in terms of a multiplicity of states. However, I prefer to define entropy as the state function which tells us whether or not two states are accessible from one another by an adiabatic process. There are multiple possible versions of this empirical entropy state function, but let's choose one that is extensive, i.e., scales with the mass and volume of the system.

Consider an adiabatically isolated system containing an internal partition through which the conduction of heat can occur. Denote the two parts of the system by A and B. The entropy of each part can be written as a function of its internal energy.

The total entropy of the system can be written 

S = S_A (U_A) + S_B (U_B)

If the system is in thermal equilibrium, by the second law, the entropy of the whole system must be a maximum as a function of U_A and U_B.

Now, dU_A = - dU_B, as the composite system is adiabatically isolated. Hence, setting the derivative of the total entropy with respect to U_A to zero, we have

dS_A/dU_A = dS_B/dU_B
The left-hand (right-hand) side of the equation only depends on the properties of system A (B). Thus, it is an intensive state variable which determines whether the system will be in equilibrium with another system. Hence, by the zeroth law, it defines a temperature scale.

Defining the temperature of each part by 1/T = dS/dU, equilibrium requires T_A = T_B. This T is the absolute temperature.
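A numerical illustration of the argument (a sketch only: the ideal-gas-like entropy functions S_i(U) = C_i ln U are an assumed form, chosen purely for illustration): maximise the total entropy over the energy split and check that the equilibrium split equalises dS/dU, i.e., the temperatures T_i = U_i/C_i.

import numpy as np

C_A, C_B = 1.5, 4.5        # heat capacities in units of k (assumed values)
U_total = 12.0             # fixed total energy of the isolated composite system

U_A = np.linspace(0.01, U_total - 0.01, 100000)
S_total = C_A * np.log(U_A) + C_B * np.log(U_total - U_A)

U_A_eq = U_A[np.argmax(S_total)]     # energy split that maximises the entropy
U_B_eq = U_total - U_A_eq
print(U_A_eq / C_A, U_B_eq / C_B)    # equal (both 2.0): T_A = T_B at equilibrium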

Friday, January 16, 2026

Responding to scientific uncertainty

Science provides an impressive path to certainty in some areas, particularly in physics. However, as scientists seek to describe increasingly complex entities, moving from chemistry to biology, and then to humans and societies, the level of uncertainty increases.

One observes a wide range of responses to scientific knowledge being uncertain. Here are a few.

Denial. Science is about facts and absolute truth. There really isn’t a problem. We should just trust the scientists.

Minimisation. There is some uncertainty, but it isn’t anything to be concerned about. Some scientists will also minimise any uncertainty about their own research. This may occur because of career ambition. Others will minimise public discussion of uncertainty to try to avoid promoting the science scepticism discussed below.

Optimistic perseverance. The uncertainty is openly acknowledged. Some of the uncertainty does not matter for what we need to know. Other uncertainties can be reduced by further scientific work, such as by more precise measurements with new instruments or by developing more sophisticated theories.

Total scepticism. There is a suspicion about the validity of most scientific knowledge, particularly that which is perceived to have philosophical, religious, or political implications.

Suspicion about science

In spite of the success of science at describing the material world and leading to powerful and useful technologies, there is much public suspicion of science. On the one hand, this is understandable given that science has led to technologies with undesirable health, environmental and social consequences. Some scientists, governments and companies have lied about these consequences and hidden them from the public. Human subjects have been abused in medical experiments. Drugs that were claimed to be effective and safe turned out to be ineffective or have undesirable side effects. Science has been used for ideological purposes. Sometimes scientists have faked results to advance their own careers. However, these failures should not undermine our trust in reliable scientific knowledge. Distinctions should be made between the bodies of knowledge, the applications of that knowledge, and the actions of institutions. I now discuss several common claims in public discussion that are used to justify scepticism of scientific knowledge.

Science is always changing. 

One day, scientists tell you that chocolate is good for your health, and the next year they say it is bad for you. And that is just the start. Then there are eggs, wine, running marathons, and cheese. They just can’t make up their mind. So why should we trust them? At one time, they believed in phlogiston and the aether. Now they say they don’t exist. Aristotle was replaced by Newton, who was replaced by Einstein. So why believe in human-induced climate change, biological evolution, vaccines, the Big Bang theory, or Einstein’s theories?

It is true that scientific knowledge does develop and change over time. However, today we have incredibly detailed observations and theories in physics, astronomy, chemistry, biology, and geology. Any future changes will be relatively minor because they will have to be consistent with all the knowledge we have now. Furthermore, when theories change, such as when Einstein superseded Newton, they don’t show that the old theory was completely wrong, but rather that it applied in a limited domain. For example, Newton’s theories of motion and gravity are extremely reliable when it comes to objects that are much larger than atoms, less dense than a black hole, and are moving at speeds less than about 10,000 kilometres per second. This is why engineers spend years learning Newton’s theories, not Einstein’s. If you want to build a good bridge or a rocket, Newton is good enough. He is not wrong.

Update. (Jan. 19). I just discovered that the NY Times had a recent op-ed Science Keeps Changing. So Why Should We Trust It?

“Well, that’s just a theory.” 

In popular debate, such a refrain may be applied to the theory of biological evolution, the Big Bang theory in cosmology, or human-induced climate change. The claimant usually wants to dismiss a particular theory as just idle speculation. Here, the term “theory” is used in the same sense as everyday speculations, such as “I have a theory as to why the president resigned,” or “I have a theory about why my computer is running so slowly.” These are just stories that sound somewhat plausible. In contrast, scientific theories in physics, such as quantum theory and Einstein’s theories of relativity, have precisely defined mathematical formulations that have been checked for logical consistency, made specific predictions, and tested to great precision in experiments. They are not “just theories.” For example, for the Big Bang theory about the beginning of the universe and Darwin’s theory of biological evolution and diversity, there are many independent lines of evidence that are consistent with each theory.  

Scientists cannot be trusted. 

They are not committed to the truth, but rather to their own interests and agendas, related to their careers, politics, and religion. They close ranks and support the status quo of current scientific “dogma”, rather than being open to original thinkers who critique it and propose alternative theories. They don’t want to lose their well-paid jobs and lucrative grants. 

On the one hand, scientists can be conservative and resistant to new ideas. On the other hand, there are significant career incentives to overturn existing knowledge and have your radical new theory accepted. That is how some scientists become famous and win Nobel Prizes. That this does not happen very often is not necessarily due to social or ideological factors. Many of the theories we have today can explain an awful lot. It requires a lot of evidence, carefully acquired and checked, to convince people that those theories need to be modified, let alone abandoned. This may take decades. But it does happen. An example is the Big Bang theory of the universe, whose acceptance was initially resisted because it went against the prevailing view that the universe did not have a beginning. In biology, the discovery in 1970 of the enzyme reverse transcriptase went against a popular version of the “Central dogma” of molecular biology, that DNA was always converted to RNA and not the reverse. That discovery led to a Nobel Prize.

I don’t trust scientists. I will do my own research. There is lots of good material from unbiased sources on the internet.

The internet provides a range of information and perspectives on practically any issue imaginable, including science. The material is particularly vast and controversial on biological evolution, the beginning of the universe, fundamental physics, the age of the earth, climate change, and medicine. Since the covid-19 pandemic, scepticism of the effectiveness and safety of vaccines has increased. 

Ivermectin is a drug that was developed as a treatment for parasitic worms. Its incredible success was recognised by the award of the 2015 Nobel Prize in Physiology or Medicine to William Campbell and Satoshi Omura, who discovered the drug. During the pandemic, high-profile politicians and social media influencers promoted ivermectin as a treatment for covid-19, even after systematic medical studies showed it was ineffective. More recently, it has gained a reputation in some circles as a “miracle” drug that can even cure cancer, a cure supposedly being suppressed by the medical establishment. Clinical trials have shown the drug is ineffective for human ailments other than parasitic infections. Nevertheless, there are groups on social media with hundreds of thousands of members that discuss the conspiracy, how to get the drug, and the experiences of participants using it to treat a wide range of ailments. Danny Lemoi, a founder of one of the largest groups, died in 2023 after taking massive daily doses of the drug for several years to treat a heart condition. Afterwards, one member of the group wrote, “No one can convince me that he died because of ivermectin. He ultimately died because of our failed western medicine which only cares about profits and not the cure.”

Fans of ivermectin claim that they are escaping the biases and vested interests of the medical establishment and Big Pharma as they pursue the truth. However, they are not escaping bias and vested interests. Successful social influencers build their reputations and million-dollar incomes from promoting scepticism. If there is no conspiracy, just scientific uncertainty and occasional incompetence and malpractice, their following collapses. Populist politicians build their careers on criticism of and stoking resentment towards elites, such as the medical establishment. The authority of the medical establishment is replaced with the authority of the popular opinion of a group of people whose views are shaped by social media algorithms, intuition, and anecdotal experience.

My purpose in giving the example of ivermectin is not to start a detailed critique of science scepticism. Rather, it is to illustrate the role that the interplay of trust, authority, and tradition plays in how we determine what is true and what to act on. There are two competing traditions here: the populism of alternative medicine and the elitism of professional medicine. Each has its own sources of authority. In the end, it boils down to who we trust. We do not have the time, energy, resources or inclination to check the veracity of every single piece of information we have access to. We take shortcuts. This is what tradition does for us, for better and worse. Thus, we cannot escape tradition. We are all swimming in traditions, many of which are in conflict with one another. The question is whether we are aware of it and what we do with that awareness.

Friday, January 9, 2026

What is temperature?

Temperature is NOT the average kinetic energy.

When I taught thermodynamics to second-year undergraduates, one of the preconceived notions that was hardest to dislodge was that temperature IS a measure of the average kinetic energy of the atoms or molecules in a system.

First, I will give the merits of this view and then explain why it is problematic.

A profound insight from Maxwell's kinetic theory of ideal gases was that the average kinetic energy of the atoms/molecules in the gas is proportional to the absolute temperature defined by Kelvin. This result was important because it provided a microscopic basis for Joule's discovery of the mechanical equivalence of heat.

The result does not just hold for an ideal gas. Classical statistical mechanics can be used to show that for any system of interacting particles, the average kinetic energy of each particle is 3/2 kT. The proof proceeds in the same manner as the equipartition theorem: in the partition function, the integral over momentum factorises and can be evaluated exactly, as it is a Gaussian integral.
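Since the momentum distribution that results from this factorisation is Gaussian, the claim is easy to check numerically. A sketch in Python (units with k = 1 are an arbitrary choice): sample momenta from the Boltzmann weight exp(-p^2/2mkT) and compare the average kinetic energy with 3/2 kT.

import numpy as np

rng = np.random.default_rng(0)
k, T, m = 1.0, 2.5, 3.0                  # units with k = 1; T and m arbitrary
# Each momentum component is Gaussian with variance m*k*T, whatever the interactions.
p = rng.normal(0.0, np.sqrt(m * k * T), size=(1_000_000, 3))
KE = (p ** 2).sum(axis=1) / (2 * m)      # kinetic energy of each sampled particle
print(KE.mean(), 1.5 * k * T)            # both ~3.75, independent of the mass m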

However, this simple relationship between temperature and kinetic energy does not hold for quantum systems. Consider the case of a harmonic oscillator with frequency omega. By the virial theorem, the average kinetic energy is equal to the average potential energy. Thus, the average kinetic energy is half of the internal energy U(T), which is omega times a universal function of the ratio T/omega. Hence, if we compare two oscillators with different frequencies at the same temperature, they will have different average kinetic energies.
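A sketch of the comparison (units with hbar = k = 1 are an arbitrary choice): the mean kinetic energy is U(T)/2 = (omega/4) coth(omega/2T), so two oscillators at the same temperature but with different frequencies have different kinetic energies.

import numpy as np

def mean_KE(w, T):
    # <KE> = U(T)/2 with U = (w/2) coth(w/(2T)), in units where hbar = k = 1
    return 0.25 * w / np.tanh(0.5 * w / T)

T = 1.0
print(mean_KE(0.1, T))   # ~0.50: low frequency, close to the classical kT/2
print(mean_KE(5.0, T))   # ~1.27: same T, but dominated by zero-point motion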

This problem is not just some quantum exotica that is only relevant at extremely low temperatures. Most solids are "quantum" at room temperature because they have a Debye temperature in the range of 200-1000 K.

Temperature is a macroscopic variable, not a microscopic one. It should be defined in terms of the zeroth law of thermodynamics.

Temperature is a state variable associated with a system in thermal equilibrium. It tells us whether that system will be in thermal equilibrium with another system. Consider two separated systems with temperatures T_1 and T_2. If they are brought into thermal contact, their states will not change if and only if T_1 = T_2.

A thermometer is a system with a single state variable. The value of that variable is an empirical temperature.

Aside. This view of temperature was used by Planck in his book, Treatise on Thermodynamics, first published in 1905.

I am thankful to my undergraduate mentor, Hans Buchdahl, for teaching me that thermodynamics is conceptually coherent and beautiful.

This discussion illustrates that temperature is an emergent property. It is a property of a macroscopic system that the parts of the system do not have. The temperature is independent of the microscopic composition of the system or its history. This universality is a characteristic of many emergent properties.

In another post, I hope to explain what the absolute temperature, first introduced by Kelvin, is.

Monday, January 5, 2026

Maxwell's demon and the history of the second law of thermodynamics

I recently reread Warmth Disperses and Time Passes: The History of Heat by Hans Christian von Baeyer.

As a popular book, it provides a beautiful and enthralling account of the discovery of the first and second laws of thermodynamics. The book is a great companion to teaching and learning thermodynamics and statistical mechanics. The narrative is unified by the puzzle of Maxwell's demon.

Aside: The book was first published in 1998 with the title Maxwell's Demon. My guess is that the publisher changed the title because most people have probably not heard of the demon, unlike Schrodinger's cat.

Baeyer captures both the wonder of the subject and the fascinating story of how the science of thermodynamics developed. He describes quirky personalities and illustrates how science proceeds with a mixture of brilliant insights, clever experiments, false leads, and forgotten discoveries. It is easy and compelling reading.

I appreciated that there is a lack of hype, in contrast to too many popular science books.

The book is enhanced by showing that the story is not over. Many reports of the demise of the demon have been premature. The penultimate chapter discusses Zurek's definition of entropy in terms of algorithmic randomness. The last chapter considers molecular motors, such as kinesin, which can be viewed as ratchets driven by thermal noise.

Physical insights

The first and second laws tell us something about the fundamental nature of the universe. Although they are macroscopic and may have some (debatable) microscopic justification, they can be viewed as fundamental.

Central to the development of the first law was the notion of the mechanical equivalent of heat.

There are three rather different ways to formulate the second law: a Carnot cycle represents an engine of optimal efficiency, heat never passes spontaneously from a cold to a hot body, and there is an arrow of time. It is profound, and was not anticipated, that these formulations are equivalent. We should marvel at this.

Entropy can be viewed as the absence of information. Consequently, the second law can be viewed as statistical.

Things I want to understand

A good book stimulates us to want to engage more with its subject. Some things I want to understand are the entropy of the initial state of the universe, Boltzmann's H theorem, Feynman's ratchet, Shannon's information theory, molecular motors, Zurek's definition of entropy, and Gerald Holton's book, Thematic origins of scientific thought.

A recent tutorial is A Friendly Guide to Exorcising Maxwell’s Demon, by A. de Oliveira Junior, Jonatan Bohr Brask, and Rafael Chaves.

Beautiful things missed

As a popular book, I think the length and scope of topics are right. Nevertheless, in a longer book, here are some things I would enjoy reading about: the zeroth and third laws, the contributions of Gibbs, the ergodic hypothesis, Brownian motion and evidence for atoms, the role of thermodynamics (and statistical mechanics) in the development of quantum theory (blackbody radiation, Einstein solid, identical particle statistics, and the Sackur-Tetrode equation) and perhaps phase transitions.

Two quibbles

von Baeyer has a somewhat reductionist perspective that the true nature of thermodynamics was revealed by the microscopic descriptions of Maxwell and Boltzmann.

I will write separate posts on why I am not comfortable with the following two statements.

Temperature IS the average kinetic energy of molecules.

Entropy was mysterious until Boltzmann's definition S=k ln W. 
