Tuesday, September 3, 2024

Autobiography of John Goodenough (1922-2023)

 John Goodenough was an amazing scientist. He made important contributions to our understanding of strongly correlated electron materials, magnetism, solid state chemistry, and materials science and engineering. He developed materials that are widely used in computer RAMs and rechargeable lithium batteries. He kept working in the laboratory and writing papers into his early 90s. Goodenough was awarded the Nobel Prize in Chemistry in 2019. Here is his Nobel Lecture, including text, slides, and video.

In 2008 he published Witness to Grace, a brief autobiography that chronicles his personal, scientific, and spiritual journeys. It is a fascinating story. The book is now out of print and the publisher is out of business. I have scanned a copy. You can download it here. I thank David Purdy for bringing to my attention the need to preserve the book.


Tuesday, August 27, 2024

What symmetries distinguish liquids, crystals, glasses, and isotropic solids?

 One of the most important ideas in condensed matter physics is that different states of matter are associated with different symmetries. These different symmetries result in different types of elementary excitations such as the Goldstone bosons associated with continuous symmetry breaking. The symmetries of the low-lying excited states reflect the symmetries of the ground state.

For example, consider the transition from a liquid to a cubic crystal. The continuous rotational and translational symmetry of the liquid is broken to the discrete rotational and translational symmetry of the crystal. Long-wavelength sound waves reflect these changes in symmetry. In the crystal, there are three distinct sound waves: one longitudinal and two shear modes. In contrast, in the liquid, there are only longitudinal modes. 

An isotropic solid, such as that studied in elasticity theory, supports two types of distortion: compression and shear. Consequently, there are three types of sound waves: one longitudinal phonon and two transverse phonons, the latter with two different polarisations. The isotropic solid has continuous, not discrete, rotational and translational symmetries. A glass is an example.

This leads to a fundamental question:

What is the difference between liquids and solids at the level of fundamental symmetries?

In different words, what is the order parameter for the liquid-solid transition? A possible answer is the shear modulus G, which vanishes in the liquid state.
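The role of the shear modulus can be made concrete with the standard formulas from elasticity theory: the longitudinal sound speed is sqrt((K + 4G/3)/rho) and the transverse speed is sqrt(G/rho), where K is the bulk modulus, G the shear modulus, and rho the mass density. A minimal sketch in Python (the material constants below are illustrative, not measured values for any particular material):

```python
import math

def sound_speeds(K, G, rho):
    """Longitudinal and transverse sound speeds in an isotropic
    elastic medium (standard results from elasticity theory)."""
    v_L = math.sqrt((K + 4.0 * G / 3.0) / rho)
    v_T = math.sqrt(G / rho)
    return v_L, v_T

# Illustrative solid-like values: K = 40 GPa, G = 25 GPa, rho = 2700 kg/m^3
vL_solid, vT_solid = sound_speeds(40e9, 25e9, 2700.0)

# Liquid: the shear modulus vanishes, so the two transverse modes disappear
vL_liquid, vT_liquid = sound_speeds(40e9, 0.0, 2700.0)
```

Setting G = 0 removes both transverse branches while the longitudinal (compression) mode survives, which is exactly the symmetry-based counting of modes above.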

A related question is: What is the fate of the transverse phonons upon transitioning from the solid state to the liquid state?

I would have thought that these questions would have been settled decades ago. However, they have not. Just two years ago, Physical Review E published a 22-page article that aims to address them.

Deformations, relaxation, and broken symmetries in liquids, solids, and glasses: A unified topological field theory

Matteo Baggioli, Michael Landry, and Alessio Zaccone


The paper immediately drew a Comment claiming that it "contradicts the known hydrodynamic theory of classical liquids." The authors have published a Reply.

I do not have the expertise to give insight on the subtle technical issues in this debate. My only comment is that it is amazing how we are struggling to answer such basic questions.

I thank Jean-Noel Fuchs for getting me interested in these subtle questions. This happened when he kindly pointed out an error in Condensed Matter Physics: A Very Short Introduction. On page 40, I erroneously stated that shear sound waves exist in a liquid. This was part of a confused discussion about how sound waves can be used to distinguish different states of matter.  I have drafted a corrected paragraph and inserted it in my post listing the errors in my book.

I welcome any comments about the issues discussed above.

Thursday, August 22, 2024

My mental health update

I have struggled with my mental health on and off since the time of my Ph.D. studies. Several readers have commented that it has been helpful for them to hear my story. Here I give a small update on both my health and some recent reading.

I have been thinking about the issue more because I have been invited to give a talk in October for a research centre at UQ, as part of Mental Health Week. I may adapt a talk that I gave for a school colloquium at UQ six years ago. I welcome suggestions for things people think I should talk about.

My mental health is the best it has been for almost a decade. There are probably many reasons for this: retirement, managing stress, no international travel, being connected to a church community, and practising the basics (diet, exercise, less screen time, less caffeine, ...).

Until a year ago, I believed I would be on antidepressants for the rest of my life. But my doctor told me we should explore my getting off antidepressants. It is now the view of the medical establishment that too many people are on them who do not need to be, that there can be long-term complications, and that the longer a patient is on them the harder it is to get off them. Over the past two years, The Economist has published helpful articles along these lines (see below).

In April we agreed to start the experiment of reducing my dose, following the now-standard practice of slowly reducing the dose every three weeks. He warned me to look out for side effects, such as random brain zaps. There were none. I got to zero dosage a month ago.

Unfortunately, I am now experiencing one withdrawal effect which I have since learned is not uncommon: uncontrollable sobbing. The first instance was July 21, when I learned that Biden was not going to run again for President. The fact that this triggered ten minutes of sobbing shows there is something not quite right with my brain chemistry!

I had several other incidents with my family. The tears are out of proportion to the significance of the event that triggers them. Sometimes I choke up when talking to people I care about or on an issue that concerns me.

I had an appointment with my doctor this week and we agreed that for now, we would stay the course, not resume the medication, and monitor the situation.

How to make better use of antidepressants: Identify those who really need them, and wean other people off them

(The Economist, October 19, 2022)

Antidepressants are over-prescribed, but genuinely help some patients: In around 15% of cases, they offer large benefits

(The Economist, January 20, 2023)

The graphs above are amazing. They show several striking things.

1. There is a massive placebo effect for antidepressants. This is shown by the two coloured curves being almost identical.
2. There is a massive variation between patients with regard to how effective the drugs are. This is shown by the very broad distribution. It reminds me of journal impact factors: the distribution is so broad that discussions about the mean are meaningless.

Antidepressants can cause withdrawal symptoms – here’s what you need to know

(The Conversation, June 23, 2023)

Psychiatry’s Incurable Hubris: The biology of mental illness is still a mystery, but practitioners don’t want to admit it.

(The Atlantic, April 2019).

Friday, August 16, 2024

Do arrows of explanation point down or up?

The figure above shows the stratification of objects that interest physicists. As one goes down the chain, length and time scales get smaller and energy scales get larger.
A reductionist seeks to explain the objects at each stratum in terms of the objects that occur at the next lower stratum.

In 1987 Steven Weinberg gave a talk at the University of Cambridge at the Tercentenary Celebration of Newton's Principia.


Part of the talk is about Weinberg's testimony to a US Congressional Committee making the case for the construction of the SSC (Superconducting Super Collider). Phil Anderson spoke against the SSC.

Weinberg argued that the SSC should be built because particle physics is "in some sense more fundamental than other areas of physics." He claims that this is because "the arrows of explanation point down", as in the diagram shown above.

A contrasting perspective is that of Andrew Steane. His book, Science and Humanity, contains the figure below.

In his picture of the explanatory relationship between physics, chemistry, and biology, Steane draws arrows pointing in both directions. The up arrow is denoted “supports [allows and physically embodies the expression of]” and the down arrow is denoted “enarches [exhibits the structures and behaviours that make sense in their own terms and are possible within the framework of].”

Weinberg's article is worth reading in full. It has many insights about science and physics worth considering, including the relationship between emergence and reductionism.

Aside: It is also reproduced in his book of essays, Facing Up: Science and Its Cultural Adversaries, published in 2001.

Friday, July 26, 2024

Emergence, structuralism, realism, and quarks

"Structuralism as an influential intellectual movement of the twentieth century has been advocated by Bertrand Russell, Rudolf Carnap, Nicholas Bourbaki, Noam Chomsky, Talcott Parsons, Claude Levi-Strauss, Jean Piaget, Louis Althusser, and Bas van Fraassen, among many others, and developed in various disciplines such as linguistics, mathematics, psychology, anthropology, sociology, and philosophy." 

In different words, structuralism and post-structuralism have been, and still are, a really big deal in the humanities and social sciences. Structuralism is central to the rise and fall of a multitude of academic fashions, careers, and reputations.

"As a method of enquiry, it takes a structure as a whole rather than its elements as the major or even the only legitimate subject for investigations. Here, a structure is defined either as a system of stable relations among a set of elements, or as a self-regulated whole under transformations, depending on the specific subject under consideration. The structuralist maintains that the character or even the reality of a whole is mainly determined by its structuring laws, and cannot be reduced to its parts; rather, the existence and essence of a part in the whole can only be defined through its place in the whole and its relations with other parts."

In a sense, structuralism favours emergence over reductionism. But note the strong exclusivist language in the quote above, such as "the major or even the only legitimate subject" and "can only be defined through its place in the whole". Structuralism seems to be an overreaction to extreme reductionism.

Condensed matter physics has something concrete to contribute to these debates. Consider the case of Ising models defined on a range of lattices, as I discussed in a previous post. We do not have an exclusive interest in the whole system or in the parts of the system. Rather, we want to know the relationship between macroscopic properties [different ordered states], mesoscopic properties [domains, long-range correlations, networks], and microscopic properties [the individual spins and their local interactions].
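The Ising model makes this concrete. Below is a minimal sketch (pure Python, Metropolis dynamics; the lattice size and temperatures are illustrative choices) showing how a macroscopic property, the magnetisation, arises from microscopic spins with local interactions, and how it changes qualitatively across the critical temperature:

```python
import math
import random

def ising_magnetisation(L=16, T=1.0, sweeps=400, seed=0):
    """2D Ising model with Metropolis dynamics (J = 1, no field).
    Returns the magnetisation per spin, averaged over the second
    half of the run to allow for equilibration."""
    rng = random.Random(seed)
    s = [[1] * L for _ in range(L)]               # ordered initial state
    m_sum, m_count = 0.0, 0
    for sweep in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
                  + s[i][(j + 1) % L] + s[i][(j - 1) % L])
            dE = 2 * s[i][j] * nb                 # energy cost of flipping
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                s[i][j] = -s[i][j]
        if sweep >= sweeps // 2:                  # measure after equilibration
            m_sum += sum(map(sum, s)) / (L * L)
            m_count += 1
    return m_sum / m_count

m_cold = ising_magnetisation(T=1.0)   # well below Tc ~ 2.27: ordered state
m_hot = ising_magnetisation(T=5.0)    # well above Tc: disordered state
```

The macroscopic order (large |m|) at low temperature is not a property of any individual spin; it is a relationship between the parts, the whole, and the intermediate scale of correlated domains.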

That is the main point of this post. But for more context, keep reading.

The quotations above are taken from a book by Tian Yu Cao. 

From Current Algebra to Quantum Chromodynamics: A Case for Structural Realism

Cao is interested in a broad range of philosophical questions related to QCD, such as "If quarks cannot be observed in isolation should they be considered to be real?"

He continues:

In the epistemically interesting cases involving unobservable entities, the structuralist usually argues that it is only the structure and the structural relations of its elements, rather than the elements themselves (properties or entities with properties) that are empirically accessible to us. It is obvious that such an anti-reductionist holistic stance has lent some support to phenomenalism

However, as an effort to combat compartmentalization, which urge is particularly strong in mathematics, linguistics, and anthropology, the structuralist also tries to uncover the unity among various appearances, in addition to invariance or stable correlation under transformations, which can help discover the deep reality embodied in deep structures. Furthermore, if we accept the attribution of reality to structures, then the antirealist implications of the underdetermination thesis [which claims that since evidence cannot uniquely determine (or, worse, can even support conflicting) theoretical claims about certain unobservable entities, no theoretical entities should be taken as representation of reality], is somewhat neutralized, because then we can talk about the realism of structures, or the reality of the structural features of unobservable entities exhibited in evidence, although we cannot directly talk about the reality of the entities themselves that are engaged in the structural relations. In fact, this realist implication of structuralism was one of the starting points of current interests in structural realism.

Monday, July 22, 2024

Clarity about the relationship of emergence, complexity, predictability, and universality

Emergence means different things to different people. Except that practically everyone likes it! Or at least, likes using the word. Terms associated with emergence include novelty, unpredictability, universality, stratification, and self-organisation. We need to be clearer about what we mean by each of these terms and how they are related or unrelated. Significant progress is reported in a recent preprint.

Software in the natural world: A computational approach to hierarchical emergence

Fernando E. Rosas, Bernhard C. Geiger, Andrea I. Luppi, Anil K. Seth, Daniel Polani, Michael Gastpar, Pedro A.M. Mediano

This preprint is the subject of a nice article in Quanta Magazine.

The New Math of How Large-Scale Order Emerges by Philip Ball

Ball defines emergence in terms of unpredictability. He states: 

"Loosely, the behavior of a complex system might be considered emergent if it can’t be predicted from the properties of the parts alone."

He describes the work of Rosas et al. as follows, 

"A complex system exhibits emergence, according to the new framework, by organizing itself into a hierarchy of levels that each operate independently of the details of the lower levels."

This is defining emergence in terms of universality. Rosas et al. use an analogy with software, which runs independently of the details of the hardware of the computer and does not depend on microscopic details such as electron dynamics.

There are three types of closure associated with emergence: informational, causal, and computational.

Informational closure means that to predict the dynamics of the system at the macroscale one does not need any additional  information from the microscale.

Equilibrium thermodynamics is a nice example. 

Causal closure means that the system can be controlled at the macroscale without any knowledge of lower-level information.

"Interventions we make at the macro level, such as changing the software code by typing on the keyboard, are not made more reliable by trying to alter individual electron trajectories."

"...we can use macroscopic variables like pressure and viscosity to talk about (and control) fluid flow, and knowing the positions and trajectories of individual molecules doesn’t add useful information for those purposes. And we can describe the market economy by considering companies as single entities, ignoring any details about the individuals that constitute them."

Computational closure is a more technical concept. 

"a conceptual device called the ε-(epsilon) machine. This device can exist in some finite set of states and can predict its own future state on the basis of its current one. It’s a bit like an elevator, said Rosas; an input to the machine, like pressing a button, will cause the machine to transition to a different state (floor) in a deterministic way that depends on its past history — namely, its current floor, whether it’s going up or down and which other buttons were pressed already. Of course an elevator has myriad component parts, but you don’t need to think about them. Likewise, an ε-machine is an optimal way to represent how unspecified interactions between component parts “compute” — or, one might say, cause — the machine’s future state."

Aside: epsilon-machines featured significantly in my previous post about What is a complex system? 

"Computational mechanics allows the web of interactions between a complex system’s components to be reduced to the simplest description, called its causal state."

"...for an emergent system that is computationally closed, the machines at each level can be constructed by coarse-graining the components on just the level below: They are, in the researchers’ terminology, “strongly lumpable.”"

In some sense, this may be related to the notion of quasiparticles and effective interactions in many-body physics. 
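The lumpability condition can be stated concretely for a Markov chain: a partition of the micro-states is (strongly) lumpable if the probability of jumping into each block is the same from every micro-state within a given block; the macro-level dynamics is then itself a Markov chain. A small sketch (the transition matrix is a made-up example constructed to be lumpable):

```python
def is_lumpable(P, partition):
    """Strong lumpability check: for every block, the probability of
    jumping into each target block must be the same from every
    micro-state in the block (compared after rounding to tame floats)."""
    for block in partition:
        for target in partition:
            probs = {round(sum(P[i][j] for j in target), 9) for i in block}
            if len(probs) > 1:
                return False
    return True

def lump(P, partition):
    """Coarse-grained (macro-level) transition matrix for a lumpable
    partition; any representative state of each block gives the same row."""
    return [[sum(P[block[0]][j] for j in target) for target in partition]
            for block in partition]

# Hypothetical 4-state chain, constructed so {0,1} and {2,3} are lumpable
P = [[0.50, 0.20, 0.20, 0.10],
     [0.30, 0.40, 0.10, 0.20],
     [0.10, 0.10, 0.40, 0.40],
     [0.05, 0.15, 0.50, 0.30]]
blocks = [[0, 1], [2, 3]]
Q = lump(P, blocks)   # 2-state macro chain: [[0.7, 0.3], [0.2, 0.8]]
```

The macro matrix Q predicts the block-level dynamics exactly, with no further information needed from the micro level; a different partition, such as [[0, 2], [1, 3]], fails the test.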

Aside: In 1962, Herbert Simon identified hierarchies as an essential feature of complex systems, both natural and artificial. A key property of a level in the hierarchy is that it is nearly decomposable into smaller units, i.e., it can be viewed as a collection of weakly interacting units. The time required for the evolution of the whole system is significantly decreased due to the hierarchical character. The construction of an artificial complex system, such as a clock, is faster and more reliable if different units are first assembled separately and then the units are brought together into the whole. Simon argues that the reduction in time scales due to modularity is why biological evolution can occur on realistic time scales.  The 1962 article is reprinted in The Sciences of the Artificial.

The paper by Rosas et al. is one of the most important ones I have encountered in the past few years. I am slowly digesting it.

The beauty of the paper is that it is mathematically rigorous. All the concepts are precisely defined and the central results are actual theorems. This replaces the vagueness of most discussions of emergence, including my own.

The paper has helpful figures and considers concrete examples including Ehrenfest's Urn, an Ising model with Glauber dynamics, and a Hopfield neural network model.
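The Ehrenfest urn is a nice illustration of closure at the macro level: the number of balls in one urn is itself a Markov chain, with no need to track which labelled balls are where. A minimal simulation (the number of balls and run length are illustrative):

```python
import random

def ehrenfest_average(N=20, steps=100000, seed=1):
    """Ehrenfest urn: at each step a ball chosen uniformly at random
    moves to the other urn. The micro-state records which labelled
    balls are in urn A; the macro-state is just how many there are."""
    rng = random.Random(seed)
    in_A = [True] * N        # micro-state: all balls start in urn A
    n = N                    # macro-state: number of balls in urn A
    total = 0
    for _ in range(steps):
        b = rng.randrange(N)
        n += -1 if in_A[b] else 1
        in_A[b] = not in_A[b]
        total += n
    return total / steps

avg = ehrenfest_average()    # equilibrium average is N/2 = 10
```

The time-averaged occupation relaxes to N/2, and the macro dynamics (n goes down with probability n/N, up with probability 1 - n/N) never requires knowing the micro-state.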

I thank Gerard Milburn for bringing the Quanta article to my attention.

Tuesday, July 9, 2024

Basic realities to accept about applying for funding

The advice that follows is directed to young people who are starting out in requesting funding for a project or an annual budget. My advice is based on about thirty years of experience of writing grant applications, reviewing requests, and being involved in making final decisions about applications. My experience has involved national research funding bodies, internal university schemes, charities, and NGOs. Over the years, I have been involved with requests in the range from a few thousand dollars to a few million dollars.

Accept reality
The world is messed up. Systems are broken. They are not the way they should be. Bad decisions are made. Processes are imperfect. I am all for trying to change things. However, when you make a funding application your chances of success are best if you accept the system and engage with it as it is today. Try to change it tomorrow. 

Put yourself in the shoes of the decision-makers.
You may not respect them or think they are particularly competent or well-qualified to make decisions about your funding request. However, put that aside and consider that they may be in an unenviable position. They are working within an imperfect system. They have limited time to read and evaluate a trove of applications, many on topics they do not really understand. They have limited resources to allocate. Most want to allocate those scarce resources in a fair and equitable manner. In most contexts, the ratio of available funds to the total amount requested by all the applicants is somewhere in the range of 0.03 to 0.2. This means they need to reject a lot of applications and slim down the budgets of those that are accepted.

You have to start with small amounts of money and build up.
Trust and success are incremental. You first get a grant for a few thousand dollars. You show that you have used that well to accomplish something. You may have to do that a few times before you get tens of thousands of dollars. You then use that to accomplish something bigger. And so on it goes.
You may think you deserve to receive several $100Ks and to skip this process. However, it is highly unlikely to happen. You need to prove yourself.
In different words, in any year, do not ask for significantly more than you were budgeted the previous year.

Every budget line item must be carefully justified.
Is each item really essential for successful completion of the project? We would all like to have a better computer, more technical support, a personal assistant, lots of international travel to exotic locales, release from other responsibilities, ...
But is each item necessary? Is each item consistent with your level of seniority and experience? Or is there a cheaper option? Could someone else fund it?
These issues are not just about good use of resources but also your credibility as someone who is a team player willing to accept institutional realities and limited resources.

The greater the requested budget the greater the scrutiny of the application.
Hence, asking for less money actually increases your chance of success. If your budget is 2 or 3 times the budget of competing applications the funding agency will almost always think that it is better to fund 2 or 3 groups rather than just one. 

Check your attitude.
You should have confidence that what you are doing is important and worth funding. However, that is not the same as making snide comments about competitors, stroking your ego, overselling the significance of what you are doing, or expressing grievances about perceived past slights and criticisms of your work. Exhibiting such attitudes only hurts your chances of success.

Saturday, June 29, 2024

Quantum BS: piling it higher

Hans Bachor recently gave a talk at UQ, Hype and Trust in Quantum Technologies. Here is the abstract:
Trust is a core value in science, trust in data, analysis, concepts, models. This is achieved in physics by open publishing, scientific discourse, testing, repeating experiments, asking critical questions and designing new tests. Fortunately, science is self-correcting in the long term. Hype includes predictions which sensationalise scientific discoveries and exaggerate the future impact. Increasing competition for funding, visibility or job security can make this more attractive. But it also erodes trust in science by the public and investors and has negative social effects on us the researchers. How can we balance them?
I think this problem more broadly reflects the way universities have come to imitate the social context they are embedded in, rather than offering a critique of those societies.

The sociologist Christian Smith eloquently described the emergence of BS in universities, several years ago.

Friday, June 21, 2024

10 key ideas about emergence

Consider a system comprised of many interacting components. 

1. Many different definitions of emergence have been given. I take the defining characteristic of an emergent property of a system to be novelty, i.e., the individual components of the system do not have this property.

2. Many other characteristics have been associated with emergence, such as universality, unpredictability, irreducibility, diversity, self-organisation, discontinuities, and singularities. However, it has not been established whether these characteristics are necessary or sufficient for novelty.

3. Emergent properties are ubiquitous across scientific disciplines from physics to biology to sociology to computer science. Emergence is central to many of the biggest scientific challenges today and some of the greatest societal problems.

4. Reality is stratified. A key concept is that of strata or hierarchies. At each level or stratum,  there is a distinct ontology (properties, phenomena, processes, entities, and effective interactions) and epistemology (theories, concepts, models, and methods). This stratification of reality leads to semi-autonomous scientific disciplines and sub-disciplines.

5. A common challenge is understanding the relationship between emergent properties observed at the macroscopic scale (whether in societies or in solids) and what is known about the microscopic scale: the components (whether individual humans or atoms) and their interactions. Often a key (but profound) insight is identifying an emergent mesoscopic scale (i.e., a scale intermediate between the macro- and micro- scales) at which new entities emerge and interact with one another weakly.

6. A key theoretical method is the development and study of effective theories and toy models. Effective theories can describe phenomena at the mesoscopic scale and be used to bridge the microscopic and macroscopic scales. Toy models involve just a few degrees of freedom, interactions, and parameters. Toy models are amenable to analytical and computational analysis and may reveal the minimal requirements for an emergent property to occur. The Ising model is a toy model that elucidates critical phenomena and key characteristics of emergence.

7. Condensed matter physics elucidates many of the key features and challenges of emergence. Unlike brains and economies, condensed states of matter are simple enough to be amenable to detailed and definitive analysis but complex enough to exhibit rich and diverse emergent phenomena.

8. The ideas above about emergence matter for scientific strategy in terms of choosing methodologies, setting priorities, and allocating resources.

9. An emergent perspective that does not privilege the parts or the whole can address contentious issues and fashions in the humanities and social sciences, particularly around structuralism.

10. Emergence is also at the heart of issues in philosophy including the nature of consciousness, truth, reality, and the sciences.

Tuesday, June 11, 2024

The interplay of ecological and evolutionary dynamics: immigration, extinction, and chaos (and DMFT?)

"Ecological and evolutionary dynamics are intrinsically entwined. On short timescales, ecological interactions determine the fate and impact of new mutants, while on longer timescales evolution shapes the entire community."

Spatiotemporal ecological chaos enables gradual evolutionary diversification without niches or tradeoffs

Aditya Mahadevan, Michael T. Pearce, and Daniel S. Fisher

Understanding this interplay is "one of the biggest open problems in evolution and ecology."

New experimental techniques for measuring the properties of large microbial ecosystems have stimulated significant theoretical work, including from some with a background in theoretical condensed matter physics. For an excellent accessible introduction see:

Understanding chaos and diversity in complex ecosystems – insights from statistical physics

This is a nice 2.5-page article by Pankaj Mehta at the Journal Club for Condensed Matter. He clearly introduces an important problem in theoretical ecology and evolution and describes how some recent work has provided new insights using techniques adapted from Dynamical Mean-Field Theory, which was originally developed to describe strongly correlated electron systems. 

Here are just a few highlights of the article. It may be better to just read the actual article.

Fifty years ago, Robert May "argued that the more diverse an ecosystem is (roughly defined as the number of species present), the less stable it becomes." He derived this counter-intuitive result using a simple model and results from Random Matrix Theory. This is an example of an emergent property: a qualitative difference occurs as a system of interacting parts becomes sufficiently large.
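May's argument can be reproduced in a few lines with random matrices. In the sketch below (assuming numpy; the off-diagonal interactions are scaled so the circular-law radius equals sigma), the community matrix has self-regulation -1 on the diagonal and random interactions off it. The rightmost eigenvalue sits near -1 + sigma, so the system is stable for sigma < 1 and unstable for sigma > 1, independent of the interaction details:

```python
import numpy as np

def max_real_eigenvalue(S, sigma, seed=0):
    """Rightmost eigenvalue of a May-style random community matrix:
    self-regulation -1 on the diagonal, Gaussian interactions of
    standard deviation sigma/sqrt(S) off the diagonal."""
    rng = np.random.default_rng(seed)
    M = sigma / np.sqrt(S) * rng.standard_normal((S, S))
    np.fill_diagonal(M, -1.0)
    return np.linalg.eigvals(M).real.max()

weak = max_real_eigenvalue(S=200, sigma=0.5)    # negative: stable
strong = max_real_eigenvalue(S=200, sigma=2.0)  # positive: unstable
```

The instability threshold depends only on the number of species and the overall interaction strength, not on any biological detail, which is what makes it an emergent, universal result.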

"One major deficiency of May’s argument is that it does not allow for the possibility that complex ecosystems can self organize through immigration and extinction. The simplest model that contains all these processes is the Generalized [to many species] Lotka-Volterra model (GLV)".

"Despite its simplicity, this equation holds many surprises, especially when the number of species is large".

Another case of how simple models can exhibit complex behaviour.
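As a concrete sketch of the GLV equations (assuming numpy; the Euler integration, weak random interactions, and small immigration rate lam are illustrative choices, deliberately well away from the chaotic regime):

```python
import numpy as np

def glv(S=30, sigma=0.2, lam=1e-6, T=200.0, dt=0.01, seed=0):
    """Euler integration of the generalized Lotka-Volterra equations
    dx_i/dt = x_i (1 - x_i + sum_j a_ij x_j) + lam,
    where lam is a small immigration rate and a_ij are random,
    generally non-reciprocal, interspecies interactions."""
    rng = np.random.default_rng(seed)
    a = sigma / np.sqrt(S) * rng.standard_normal((S, S))
    np.fill_diagonal(a, 0.0)
    x = rng.uniform(0.5, 1.5, S)          # initial abundances
    for _ in range(int(T / dt)):
        x = x + dt * (x * (1.0 - x + a @ x) + lam)
        x = np.maximum(x, 0.0)            # guard against numerical negativity
    return x

x_final = glv()   # weak interactions: settles near a stable fixed point
```

With sigma this small the community relaxes to a fixed point with all abundances near one; cranking up sigma, or making the interactions strongly non-reciprocal, is where the surprises (multiple equilibria, chaos, extinctions) discussed by Mehta appear.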

One special case is when the interactions are reciprocal – how species i affects species j is identical to how species j affects species i. "In the presence of non-reciprocity the system can exhibit complex dynamical behavior including chaos." Understanding this case was an open problem until the two papers reviewed by Mehta. For a detailed but pedagogical introduction see:

Les Houches Lectures on Community Ecology: From Niche Theory to Statistical Mechanics, Wenping Cui, Robert Marsland III, Pankaj Mehta

This is relevant to understanding the origin of the fine grained diversity observed in sequencing experiments of microbial ecosystems.

Aside: de Pirey and Bunin "derive analytic expressions for the steady-state abundance distribution and an analogue of the fluctuation-dissipation theorem for chaotic dynamics relating static and dynamics correlation functions."

"Using a DMFT solution, they derive a number of remarkable predictions... in the chaotic system the species fall into two groups: species at high abundances and species at low abundances near the immigration floor. de Pirey and Bunin show that even in the chaotic regime, the number of high abundance species in the ecosystem will always be less than the May stability bound. This result is quite surprising since it suggests that ecosystems self-organize in such a way that the high abundance species still follow May’s diversity bound even when they are chaotic."

Saturday, June 1, 2024

Should Ph.D. students choose to teach?

In Australia, most Ph.D. students are fully funded by scholarships to allow them to focus on their research. This is unlike in the USA where many students must be TAs (teaching assistants) to be paid. 

In most Australian universities, such Ph.D. students can earn extra income by being tutors (the same as TAs) for undergraduate courses. Many do, as earning extra money is attractive. Having Ph.D. students do this teaching saves universities a lot of money, as it means they do not need to hire permanent academic staff to do the tutoring.

What is my advice to students who have this option? Here are some of the advantages and disadvantages for a Ph.D. student doing such tutoring.

Advantages

You earn additional income.

Having teaching experience listed on your CV may help you get a faculty position at some institutions. For example, in Australia, this seems to be almost a pre-requisite these days. Furthermore, if you can be innovative, and get high student evaluations, that may be viewed favourably. But that really concerns lecturing and not tutoring.

You usually learn a lot from teaching, even lower-level courses.

It can be enjoyable and satisfying. It can provide a break from thinking just about your Ph.D. research.

If you are fortunate enough to eventually get a faculty position this experience will make it easier to handle the formidable challenges of starting out teaching.

It may create some goodwill towards you in your department. You may be seen as a team player and a good departmental citizen.

You may need the money. For example, if you are supporting a family or if you are from a Majority World country and want to send money home to extended family.

Disadvantages

Foremost, it can consume a large amount of time and energy that reduces your research productivity. It reduces your mental space. You may lose research momentum and not complete your Ph.D. on time.

There can be a significant financial opportunity cost. Suppose that doing the teaching delays the completion of your Ph.D. by six months. The lost six months of salary will probably be much greater than the amount you earned from teaching.

It can be frustrating to deal with students who are not that interested in learning and are only concerned with grades. Furthermore, there may be the added stress of having to deal with students who make formal complaints about your teaching or their grades.

It may not add a lot to your CV, particularly if your student evaluations are average. They will probably be average or even below average since you are starting out.

If you don't do a stellar job and/or there are a few disgruntled students your reputation in the department may suffer, perhaps unjustly.

On balance, I think it depends on the individual: their personal financial situation, personality, career goals and stage in the Ph.D. In some cases, I encourage people to do this, although only for one or two semesters. In other cases, I discourage it. The main thing is to make a well-informed decision which takes into account the pros and cons. 

Students also need to be wary of the vested interests of faculty and university management that will push them towards teaching, possibly against the student's best long-term interests.

Aside. I often forget what posts I have written in the past. I really thought I had written this post before. All I could find is one on Should postdocs teach?

I welcome comments, particularly from current and former Ph.D. students who have negotiated this issue. What would you advise?

Friday, May 31, 2024

Straining gnats and swallowing camels on campus

The world is being run by accountants, lawyers, and marketing consultants. They work for corporations, universities, trade unions, and government. Some are "rent seekers" and parasites who project themselves as standing for justice and fairness.

We also live in a world where people are not allowed to make even small mistakes.

These painful realities were highlighted to me this week in an email some colleagues received from the President of their university.

"I am writing to advise staff about the outcome of a pay review initiated in October 2021 and the actions taken as part of this to further strengthen our pay systems and processes. 

The University commenced this external review to ensure our staff are paid accurately and in accordance with our Enterprise Agreement (EA) applicable at the time.

The review took 18 months. How much staff time and money were spent to engage these external consultants on this review?

While staff were paid for the time they worked, the review identified 2 areas of our EA were not always correctly applied. These relate to the minimum hours of engagement for casual academic and casual professional staff and the use of a different pay rate for casual academic staff with a relevant PhD.

As a result, the University has determined that over a 7-year period (January 2017 to December 2023), an amount of $7.88 million (excluding superannuation and interest) should have been paid to 9743 staff. The median amount to be paid is $243.03.

Note. This corresponds to staff being underpaid by less than one dollar per week! 

I unreservedly apologise to those staff who have been affected by these errors – they should not have happened. I want to assure our community that affected staff will receive all the pay due, including superannuation and interest for the relevant period.

This week, we are writing individually to affected current staff outlining the payment due, with these payments to commence from 14 June.

We will also be writing individually to affected former staff outlining the payment due and the process for remediation.

A comprehensive program is underway to upgrade our systems and processes to further ensure ongoing pay accuracy. Actions being taken include the introduction of a whole-of-Univ timesheet system, compulsory training for managers and staff and additional fortnightly payroll reporting.

Does this mean faculty will now fill out time sheets? 

We are also investing further in our HR systems, with new time and attendance and payroll systems planned.

Sounds like a whole new layer of bureaucracy. 

Throughout this process we have worked closely with the Office of the Fair Work Ombudsman and have advised them of these outcomes.

Again, I apologise that this has occurred and reaffirm our commitment to ensuring staff are appropriately and accurately paid for the work they do on behalf of the University."

Meanwhile, I don't hear of any external reviews or apologies for the treatment by university management of people such as Gerd Schröder-Turk, Drew Pavlou, Paul Frijters, or James Allan, ...

Two thousand years ago the following was written.

“Woe to you, teachers of the law and Pharisees, you hypocrites! You give a tenth of your spices—mint, dill and cumin. But you have neglected the more important matters of the law—justice, mercy and faithfulness. You should have practiced the latter, without neglecting the former. 

You blind guides! You strain out a gnat but swallow a camel.

Woe to you, teachers of the law and Pharisees, you hypocrites! You clean the outside of the cup and dish, but inside they are full of greed and self-indulgence."

Friday, May 24, 2024

More superconductivity in Hollywood

I wrote a post about superconductivity being central to the plot of the cult-classic movie, Joe Versus the Volcano. A commenter on the post kindly pointed out that the movie Avatar also features superconductivity. It is nicely captured in this scene.

The Wikipedia entry for Unobtanium is interesting as it describes the long history of the term, predating the movie by decades. I had not heard the term before. It does capture much of the hype and fantasy about research in "advanced materials".

About Avatar the entry states

In the 2009 film Avatar,[23] "Unobtanium" is the common name of a rare-earth mineral found exclusively in the exomoon Pandora (where the movie takes place, being the fifth moon of the gas giant Polyphemus, which orbits Alpha Centauri A), highly prized (and priced) because of its application as a powerful superconductor material; because of its unusual magnetic properties, entire mountains with high concentrations of unobtanium "levitate" in the atmosphere of Pandora.

Monday, May 13, 2024

The whole is qualitatively different from the parts: beer, birds, and brains

Pint of Science is an annual event in cities all around Australia. Local scientists give short talks about their research to general audiences. I am speaking tonight, along with my colleague Ben Powell. 

I found the tips to speakers very helpful. This led me to try and make the talk more of a personal story, reduce the amount of text on slides, and aim for engagement rather than focusing on scientific details or on technical details of your own research.

Here is the current version of my slides.

The introduction is based on this video and poem about emergence in economics.

This provides an example of how "free" economic markets can sometimes work well. But I will also point out that they can fail spectacularly, another emergent phenomenon!

Wednesday, May 8, 2024

The relevance of Labor Day to physicists and philosophers

This past Monday, May 6, was a public holiday in Queensland, marking Labor Day. I don't know why we don't celebrate it on May 1, but that does not matter.

In honour of the event, I post two relevant resources. The first resource is a moving video by Sabine Hossenfelder, who has carved out a post-academic income as a populariser of physics. The video is funny and sad, describing her own experience in academia leading to "Death of a Dream".


Sabine has many poignant observations about the dysfunctionalities of physics in academia, from the personal to the intellectual.

I find it sad that people who leave academia because they could not find a permanent job see themselves as a "failure." First,  most of the select few who get permanent jobs do so because they are at the right place at the right time, not because they are so much more brilliant and productive than others. Second, there is so much more to life than professional success. Finally, Sabine has been an incredible success. She has been able to popularise physics far beyond what has been achieved by others with big names and lots of resources. Furthermore, Sabine has made a significant contribution to the physics community by calling out hype and BS.

The second resource to mark Labor Day is an article, 

It puts a specific (alarming) incident in the broader context of the history of how and why the governance and management of Australian universities have been captured by the ideology of neoliberalism. This has been facilitated by the opportunism and vanity of mediocre academics who become "managers" with million-dollar salaries.

Monday, April 29, 2024

Emergence of the arrow of time

Time has a direction. The microscopic equations of motion in classical and quantum mechanics have time-reversal symmetry. But this symmetry is broken for many macroscopic phenomena. This observation is encoded in the second law of thermodynamics. We experience the flow of time and distinguish past, present, and future. The arrow of time is manifest in phenomena that occur at scales covering many orders of magnitude. Here are some of these different arrows of time, listed in order of increasing time scales. They are discussed by Tony Leggett in chapter 5 of The Problems of Physics.

Elementary particle physics. CP violation is observed in certain phenomena associated with the weak nuclear interaction, such as the decay of neutral kaons, observed in 1964. The CPT theorem shows that any local quantum field theory that is invariant under "proper" Lorentz transformations must also be invariant under the combined CPT transformation. This means that CP violation implies that time-reversal symmetry is broken. In 1989, the direct violation of T symmetry was observed.

Electromagnetism. When an electric charge is accelerated an electromagnetic wave propagates out from the charge towards infinity. Energy is transferred from the charge to its environment. We do not observe a wave that propagates from infinity into the accelerating charge, i.e., energy being transferred from the environment to the charge. Yet this possibility is allowed by the equations of motion for electromagnetism. There is an absence of the “advanced” solution to the equations of motion. 

Thermodynamics. Irreversibility happens in isolated systems. Heat never travels from a cold body to a hotter one. Fluids spontaneously mix. There is a time ordering of the thermodynamic states of isolated macroscopic systems. The thermodynamic entropy encodes this ordering.

Psychological experience. We remember the past and think we can affect the future. We don’t think we can affect the past or know the future.

Biological evolution. Over time species adapt to their environment and become more complex and more diverse.

Cosmology. There was a beginning to the universe. The universe is expanding, not contracting. Density perturbations grow, independent of the direction of cosmic time (Hawking and Laflamme).

It is debatable to what extent these arrows of time are related to one another. 

The problem of how statistical mechanics connects time-reversible microscopic dynamics with macroscopic irreversibility is subtle and contentious. Joel Lebowitz claimed this problem was solved by Boltzmann, provided the distinction between typical and average behaviour is accepted, along with the Past Hypothesis, which states that the universe was initially in a state of extremely low entropy. David Wallace discussed the need to accept the idea of probabilities in the laws of physics, and that the competing interpretations of probability as frequency or ignorance matter. In contrast, David Deutsch claims that the second law of thermodynamics is an "emergent law": like the principle of testability, it cannot be derived from microscopic laws.

I find the Past Hypothesis fascinating because it connects the arrow of time seen in the laboratory and everyday life (time scales of microseconds to years) to cosmology, covering timescales of the lifetime of the universe (10^10 years) and the “initial” state of the universe, perhaps at the end of the inflationary epoch (10^-33 seconds). This also raises questions about how to formulate the Second Law and the concept of entropy in the presence of gravity and on cosmological length and time scales. 

Monday, April 22, 2024

Effective theories in classical and quantum mechanics

Working in quantum many-body theory, I slowly learned that many key concepts and techniques have predecessors and analogues in classical systems and one-body quantum systems. Examples include Green's functions, path integrals, cumulants, the linked cluster theorem, the Hubbard-Stratonovich transformation (completing the square), mean-field theory, localisation due to disorder, and the BBGKY hierarchy. Learning a full-blown quantum many-body version is easier if you first understand simpler analogues.

This post is about effective theories in classical systems and one-body quantum systems, following my earlier post about effective theories in quantum field theories of elementary particles.

Michèle Levi has a pedagogical article

Effective field theories of post-Newtonian gravity: a comprehensive review

This is motivated by the use of EFTs to describe gravitational waves produced by the inspiraling and merging of binary black holes and neutron stars. She discusses the different scales involved and how there are effective theories at each scale. She also puts these EFTs in the broader context of other fields.

Analogues in one-body quantum mechanics are also discussed  in

Effective Field Theories, Reductionism and Scientific Explanation, by Stephan Hartmann

"In his beautiful book Qualitative Methods in Quantum Theory, Migdal (1977) discusses an instructive example from quantum mechanics. Let S be a system which is composed of a fast subsystem Sf and a slow subsystem Ss, characterised by two frequencies ωf and ωs. It can be shown that the effects of Sf on Ss can be taken into account effectively by adding a potential energy term to the Hamiltonian operator of Ss. In this case, as well as in many other cases, one ends up with an effective Hamiltonian operator for the subsystem characterised by the smaller frequency (or energy)."

An important example of this is the Born-Oppenheimer approximation which is based on the separation of time and energy scales associated with electronic and nuclear motion. It is used to describe and understand the dynamics of nuclei and electronic transitions in solids and molecules. The potential energy surfaces for different electronic states define an effective theory for the nuclei. Without this concept, much of theoretical chemistry and condensed matter would be incredibly difficult.
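Schematically, the Born-Oppenheimer construction can be sketched as follows (a generic textbook outline, with r denoting electronic and R nuclear coordinates, not tied to any particular system):

```latex
% Full molecular Hamiltonian: nuclear and electronic kinetic energy
% plus the Coulomb interactions V(r,R)
H = T_n + T_e + V(r, R)

% Step 1: solve the electronic problem at fixed nuclear positions R
\left[ T_e + V(r, R) \right] \phi_k(r; R) = E_k(R)\, \phi_k(r; R)

% Step 2: the electronic eigenvalue E_k(R) is the potential energy
% surface in an effective Hamiltonian for the nuclei alone
H_{\mathrm{eff}} = T_n + E_k(R)
```

The separation is justified because the nuclei are thousands of times heavier than the electrons, so the electronic frequencies are much larger than the nuclear ones, exactly the fast/slow split in Migdal's example above.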

Tuesday, April 16, 2024

Physics on Netflix


The Netflix series, 3 Body Problem, features physics and physicists throughout. I am not a big fan of science fiction, but I watched the first episode to try to get a sense of why the series is attracting so much attention. The opening scene (in the video above) is rooted in history. It depicts a "struggle session" during the Cultural Revolution, featuring the denunciation and killing of a physics professor, who is the father of the main character in the series.

For some more on the intellectual and political background see

Wednesday, April 10, 2024

Effective quantum field theories and hierarchical reality

 Over the last hundred years, there has been a fruitful cross-fertilisation of concepts and techniques between the theory of condensed matter and the quantum theory of elementary particles and fields. Examples include spontaneous symmetry breaking, renormalisation, and BCS theory. Sometimes, these efforts have occurred in parallel and only later did people realise that two different communities were doing essentially the same thing but using different language. Other times, one community adopted ideas or techniques from the other.

Central to condensed matter theory are ideas of emergence, a hierarchy of scales, and effective theories that are valid at a particular scale. Elementary particle theorists such as Steven Weinberg often distinguish themselves as reductionists with different goals and approaches. I only recently became aware that effective field theories have become a big thing in the elementary particle community, and Weinberg has been one of the leaders of this!

There is a helpful article in the CERN Courier, published just a year ago.

A theory of theories

Michèle Levi takes a tour through the past, present and future of Effective Field Theory, with applications ranging from LHC physics to cosmology.

The figure below, taken from the article, shows a hierarchy of energy scales and the corresponding effective field theories (EFTs).

n.b. Energy increases from bottom to top. [This may be confusing for condensed matter physicists, as we tend to put the high-energy theories at the bottom].


SM is the Standard Model.
HQET is heavy quark effective theory, in which the heavy quark degrees of freedom are integrated out.
EW breaking is electroweak symmetry breaking, which occurs on the scale of the Higgs boson.
The smallest energy scale in the figure is Lambda_QCD, which is on the scale of the mass of the proton.

The standard model is now considered an effective field theory.

For the associated history and philosophy, I found this article helpful. Effective Field Theories, Reductionism and Scientific Explanation, by Stephan Hartmann

The decoupling theorem, proved by Appelquist and Carazzone in 1975 [cited 2,500 times], is central to EFTs and a hierarchy of scales.

In its simplest case, this theorem demonstrates that for two coupled systems with different energy scales m1 and m2 (with m2 > m1) and described by a renormalisable theory, there is always a renormalisation condition according to which the effects of the physics at scale m2 can be effectively included in the theory with the smaller scale m1 by changing the parameters of the corresponding theory. The decoupling theorem implies the existence of an EFT at scale m1 which will, however, cease to be applicable once the energy gets close to m2.

There are two distinct approaches to finding effective theories at a particular scale, referred to as bottom-up and top-down approaches. 

Top-down requires one to have a theory at a higher energy scale and then integrate out the high energy degrees of freedom (fields and particles) to find the effective theory for the lower energy scale. This is what Wilson did in his RG approach to critical phenomena. Another example is how string theorists try to derive GR and the Standard Model starting with strings.

Bottom-up can always be done because one does not need to know the higher energy theory. One can often write down the Lagrangian for the EFT based on symmetry considerations and phenomenology. An example is Fermi's theory of beta decay and the weak interactions.
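The Fermi theory example can be made concrete with the standard tree-level matching: at momentum transfers far below the W boson mass, the W propagator collapses to a point, leaving a four-fermion contact interaction whose strength, the Fermi constant, is fixed by the parameters of the higher-energy theory:

```latex
% W-boson propagator at low momentum transfer reduces to a constant,
% i.e., a point-like (contact) interaction in position space
\frac{g^2}{q^2 - M_W^2} \;\longrightarrow\; -\frac{g^2}{M_W^2}
\qquad (q^2 \ll M_W^2)

% Matching condition fixing the coupling of the low-energy EFT
\frac{G_F}{\sqrt{2}} = \frac{g^2}{8 M_W^2}
```

Historically this ran in the bottom-up direction: Fermi wrote down the contact interaction phenomenologically in 1934, decades before the electroweak theory supplied the W boson that it descends from.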

In a previous post, I considered Bei Lok Hu's discussion of these two different routes to developing a quantum theory of gravity.

A major outstanding challenge in the theory of elementary particles and fields is the hierarchy problem: the measured values of some masses and coupling constants are many orders of magnitude different from the "bare" values used in the Lagrangian.

The articles I have read about the role of effective field theories make no mention of the corresponding issues in condensed matter or how emergence is involved. Emergence occurs in systems where there are many interacting components. Here those components are the quantum fields and their modes with different momenta/energies. Hence, I would say that emergence is at the heart of big questions in the theory of elementary particles and fields.

Wednesday, April 3, 2024

Is biology better at computing than supercomputers?

Stimulated by discussions about the physics of learning machines with Gerard Milburn, I have been wondering about biomolecular machines such as proteins that do the transcription and translation of DNA in protein synthesis. These are rather amazing machines.

I found an article which considers a problem that is simpler than learning: computation.

The thermodynamic efficiency of computations made in cells across the range of life

Christopher P. Kempes, David Wolpert, Zachary Cohen and Juan Pérez-Mercader


It considers the computation of translating a random set of 20 amino acids into a specific string for a specific protein. Actual thermodynamic values are compared to a generalised Landauer bound for computation. Below is the punchline (page 9).

Given that the average protein length is about 325 amino acids for 20 unique amino acids, we have that p_i = p = 1/20^325 = 1.46×10^−423, where there are 20^325 states, such that the initial entropy is S_I = ln(20^325) = 325 ln 20, which gives the free energy change of kT(S_I − 0) = 4.03×10^−18 J, or 1.24×10^−20 J per amino acid. This value provides a minimum for synthesizing a typical protein.

We can also calculate the biological value from the fact that if four ATP equivalents are required to add one amino acid to the polymer chain, with a standard free energy of 47.7 kJ mol^−1 for ATP to ADP, then the efficiency is 1.03×10^−16 J, or 3.17×10^−19 J per amino acid.

This value is about 26 times larger than the generalized Landauer bound.

These results illustrate that translation operates at an astonishingly high efficiency, even though it is still fairly far away from the Landauer bound. To put these results in context, it is interesting to note that the best supercomputers perform a bit operation at approximately 5.27×10^−13 J per bit. In other words, the cost of computation in supercomputers is about eight orders of magnitude worse than the Landauer bound of kT ln 2 ≈ 2.9×10^−21 J for a bit operation, which is about six orders of magnitude less efficient than biological translation when both are compared to the appropriate Landauer bound. Biology is beating our current engineered computational thermodynamic efficiencies by an astonishing degree.
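The arithmetic in the quoted passage is easy to check. Here is a short back-of-envelope sketch of my own (assuming room temperature, T = 300 K, which is consistent with the value of kT the paper's numbers imply):

```python
import math

# Reproducing the paper's numbers (my own back-of-envelope check).
k_B = 1.380649e-23      # Boltzmann constant (J/K)
T = 300.0               # temperature (K), assumed
N = 325                 # average protein length (amino acids)
A = 20                  # number of distinct amino acids
N_A = 6.02214076e23     # Avogadro constant (1/mol)

# Generalised Landauer bound: free energy to select one specific
# sequence out of A^N equally likely possibilities.
S_initial = N * math.log(A)        # initial entropy, in units of k_B
E_min = k_B * T * S_initial        # minimum free energy (J)

# Actual biological cost: 4 ATP equivalents per amino acid,
# 47.7 kJ/mol for ATP -> ADP.
E_bio = N * 4 * 47.7e3 / N_A       # (J)

print(E_min)           # ~4.0e-18 J for the whole protein
print(E_min / N)       # ~1.2e-20 J per amino acid
print(E_bio / N)       # ~3.2e-19 J per amino acid
print(E_bio / E_min)   # ~26, as quoted
```

The factor of 26 pops out directly from the ratio of the ATP cost to the entropic minimum.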

Monday, March 25, 2024

Superconductors in Hollywood

 Recently my wife and I watched the movie, Joe Versus the Volcano, starring Tom Hanks and Meg Ryan. What I did not expect was that making superconductors commercially viable was central to the (silly but amusing) plot. 

The plot summary on Wikipedia says

a wealthy industrialist named Samuel Graynamore needs "bubaru", a mineral essential for manufacturing superconductors. There are deposits of it on the tiny Pacific island of Waponi Woo, but the resident Waponis will only let him mine it if he solves a problem for them...

Here is the relevant scene...

The movie was made in 1990, just after the discovery of cuprate superconductors and at that time there was a lot of hype about commercialisation. I wonder if the scriptwriters drew on that.

Tuesday, March 19, 2024

A light conversation about condensed matter physics

Three weeks ago I did a local book launch for Condensed Matter Physics: A Very Short Introduction.


It was at a wonderful independent bookstore, Avid Reader. It is a vibrant part of the local community and has several author events every week.


I had a conversation about the book with my friend, Dr Christian Heim, an author, composer, and psychiatrist. My wife and daughter were surprised it was so funny. Most people loved it, but a couple of people thought it should have been more technical. I think that is not the point of such an event or of the Very Short Introduction series.


Here is a recording of the conversation, including the Q&A with the audience afterwards.

Many thanks to all the friends who came.

Friday, March 8, 2024

Emergence and the stratification of physics into sub-fields

The concept of emergence is central to understanding sub-fields of physics and how they are related, and not related, to other sub-fields.

The table below shows the stratification of physics into sub-disciplines. For each stratum there is a range of relevant length, time, and energy scales. There are distinct entities that are composed of the entities from lower strata. These composite entities interact with one another via effective interactions that arise from the interactions present at lower strata and can be described by an effective theory. Each sub-discipline of physics is semi-autonomous. Collective phenomena associated with a single stratum can be studied, described, and understood without reference to lower strata.

Table entries are not meant to be exhaustive but to illustrate how emergence is central to understanding sub-fields of physics and how they are related to one another.

What do you think of the table? Is it helpful? Have you seen something like this before?

I welcome suggestions about entries that I could add.

Tuesday, March 5, 2024

An illusion of purpose in emergent phenomena?

 A characteristic of emergent phenomena in a system of many interacting parts is that they exhibit collective behaviour where it looks like the many parts are "dancing to the same tune". But who is playing the music, who chose it, and who conducts the orchestra?

Consider the following examples.

1. A large group of starlings moves together in what appears to be a coherent fashion. Yet no lead starling is telling all the starlings how and where to move, according to some clever flight plan to avoid a predator. Studies of flocking [murmuration] have shown that each starling just moves according to the motion of a few of its nearest neighbours. Nevertheless, the flock does move in a coherent fashion, "as if" there is a lead starling or an air traffic controller making sure all the birds stick to their flight plans.

2. You can buy a freshly baked loaf of bread at a local bakery every day. Why? Thousands of economic agents, from farmers to truck drivers to accountants to the baker, make choices and act based on limited local information. Their interactions are largely determined by the mechanism of prices and commercial contracts. In a market economy, there is no director of national bread supplies who co-ordinates the actions of all of these agents. Nevertheless, you can be confident that each morning you will be able to buy the loaf you want. The whole system acts in a co-ordinated manner "as if" it has a purpose: to reliably supply affordable high-quality bread.

3. A slime mould spreads over a surface containing food supplies with spatial locations and sizes similar to that of the cities surrounding Tokyo. After a few hours, the spread of the mould has reorganised so that it is focussed on paths that are similar to the routes of the Tokyo rail network. Moulds have no brain or computer chip but they can solve optimisation problems, such as finding the shortest path through a complex maze. In nature, this problem-solving ability has the advantage that it allows them to efficiently locate sources of food and nutrients. Slime moulds act "as if" they have a brain.
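The nearest-neighbour rule in example 1 is easy to simulate. Below is a minimal sketch of my own (all parameters are illustrative choices, not taken from the flocking literature): each bird repeatedly averages its velocity with that of its k nearest neighbours, and a global order parameter measures how aligned the flock is.

```python
import numpy as np

# Toy model of flocking: each bird repeatedly averages its velocity
# with that of its k nearest neighbours. There is no leader; the
# alignment is emergent. All parameters are illustrative choices.
rng = np.random.default_rng(0)
n_birds, k, steps = 50, 5, 200
pos = rng.uniform(0.0, 10.0, size=(n_birds, 2))   # positions in a 10x10 region
vel = rng.normal(size=(n_birds, 2))               # random initial headings

def order_parameter(v):
    """Magnitude of the mean heading: ~0 for random, 1 for full alignment."""
    u = v / np.linalg.norm(v, axis=1, keepdims=True)
    return np.linalg.norm(u.mean(axis=0))

initial_order = order_parameter(vel)
for _ in range(steps):
    d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)  # pairwise distances
    np.fill_diagonal(d, np.inf)                    # ignore self-distances
    nbrs = np.argsort(d, axis=1)[:, :k]            # k nearest neighbours of each bird
    vel = 0.5 * vel + 0.5 * vel[nbrs].mean(axis=1) # local alignment rule
    pos += 0.05 * vel

print(initial_order)          # low: headings start out random
print(order_parameter(vel))   # much higher: coherent motion has emerged
```

No bird ever sees the whole flock, yet the purely local averaging drives the order parameter towards one, the flock moving "as if" directed.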

The biologist Michael Levin discusses the issue of intelligence in very small and primitive biological systems in a recent article, Collective Intelligence of Morphogenesis as a Teleonomic Process.

[I first became aware of Levin's work through a podcast episode brought to my attention by Gerard Milburn. The relevant discussion starts around 36 minutes].

The emphasis on "as if" I have taken from Thomas Schelling in the opening chapter of his beautiful book, Micromotives and Macrobehaviour.

He also mentions the example of Fermat's principle in optics: the path light takes as it travels between two spatially separated points is the path for which the travel time is an extremum [usually a minimum]. The light travels "as if" it has the purpose of finding this extremum. 

[Aside: according to Wikipedia, 

"Fermat's principle was initially controversial because it seemed to ascribe knowledge and intent to nature. Not until the 19th century was it understood that nature's ability to test alternative paths is merely a fundamental property of waves."

Similar issues of knowledge/intent/purpose arise when considering the motion of a classical particle moving between two spatial points. It takes the path for which the value of the action [time integral of the Lagrangian along a path] has an extremal value relative to all possible paths. I suspect that the path integral formulation of quantum theory is required to solve the "as if" problem. Any alternative suggestions?
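The extremal-action idea can be checked numerically with a toy example of my own construction: a free particle of unit mass travelling between fixed endpoints in unit time. Discretising the path, the straight line should have a smaller action than any perturbed path with the same endpoints.

```python
import numpy as np

# Toy illustration of the extremal-action principle: a free particle
# of unit mass with x(0)=0 and x(1)=1. The discretised action is
# S = sum (1/2) v^2 dt, with v the finite-difference velocity.
n = 100
t = np.linspace(0.0, 1.0, n)
dt = t[1] - t[0]

def action(x):
    v = np.diff(x) / dt
    return 0.5 * np.sum(v**2) * dt

straight = t.copy()                        # x(t) = t: the classical path
wiggly = t + 0.1 * np.sin(np.pi * t)       # same endpoints, perturbed path

print(action(straight))                    # 0.5: the minimum
print(action(straight) < action(wiggly))   # True: perturbing raises the action
```

Of course the particle does not "try" every path; in the path-integral picture all paths contribute, and those far from the extremum cancel by destructive interference.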
