Friday, December 23, 2022

Eight amazing things physics has taught us

What are the most amazing things that we know about the physics of the universe? If you were to pick ten, what would they be?

I recently read Fundamentals: Ten Keys to Reality (2021), a popular science book by Frank Wilczek. My interest in the book was piqued: I wanted to see what Wilczek's choices for his "ten" were. I got a copy from the public library and became entranced, because I discovered what a gifted writer and expositor Wilczek is. I found I was learning some physics I did not know, or at least getting a deeper understanding of what I should know. I then bought my own copy so I could annotate it. I have previously enjoyed the insights in many of Wilczek's Physics Today columns.

The book gives a popular presentation of some physics "basics" such as celestial mechanics, the Standard Model of elementary particles (which he renames the Core), and Big Bang cosmology.  I found it full of insights. I also appreciated that Wilczek does not have the hard reductionist or scientism edge found in the popular books of some distinguished theoretical physicists such as Weinberg and Hawking. However, a careful reading led me at times to be somewhat disappointed and irritated, for reasons that I discuss briefly below. In the end, this is because, not surprisingly, I have a much more emergentist perspective on reality, seeing it as stratified.

First, here are the ten things that Wilczek finds amazing, helpfully summarised in his chapter titles.

Part I. What There Is 

Chapter 1. There's Plenty of Space

Chapter 2. There's Plenty of Time

Chapter 3. There Are Very Few Ingredients

Chapter 4. There Are Very Few Laws

Chapter 5. There's Plenty of Matter and Energy

Part II. Beginnings and Ends 

Chapter 6. Cosmic History is an Open Book

Chapter 7. Complexity Emerges

Chapter 8. There's Plenty More to See

Chapter 9. Mysteries Remain

Chapter 10. Complementarity Is Mind-Expanding

Here are some of the ideas associated with each of the ten keys.

There's Plenty of Space

The scales of the universe are incredible. Beyond us, there are vast numbers of stars and galaxies, at distances of more than ten billion light years. Within us, each of our bodies contains more atoms than there are stars in the universe. Our brains have as many neurons as there are stars in our galaxy. An atom is largely empty space.

There's Plenty of Time

Cosmic time is abundant. The quantity of time reaching back to the big bang dwarfs a human lifetime... [which] contains far more moments of consciousness than universal history contains human life spans. We are gifted with an abundance of inner time.

There Are Very Few Ingredients

Everything in the universe is made of just a few kinds of particles: quarks and leptons (such as electrons and neutrinos), together with the bosons associated with the forces, such as photons, gluons, and (hypothetically) gravitons. These particles have just a few properties: mass, charge, colour, and spin.

"The most basic ingredients of physical reality are a few principles and properties. Four simple yet profound general principles govern how the world works."

1. The basic laws describe change.

2. The basic laws are universal.

3. The basic laws are local.

4. The basic laws are precise.

Newton himself realised that locality (action at a distance) was a problem. Fields, rather than particles, are the fundamental building blocks of matter.

Quasiparticles are discussed. In high school, Wilczek was inspired by a visit to Bell Labs where he learnt that quanta of lattice vibrations are phonons. He describes how he introduced anyons in the early 1980s and how they were then identified with quasiparticles in fractional quantum Hall states.

There Are Very Few Laws

From forces we are led to fields, and from (quantum) fields, we are led to particles.

From particles we are led to (quantum) fields, and from fields, we are led to forces.

Thus, we come to understand that substance and force are two aspects of a common underlying reality.

The four fundamental forces (gravity, electromagnetism, weak nuclear, and strong nuclear) are described by just a few simple mathematical equations.

The art and science of spectroscopy is described as "Atoms sing songs that bare their souls, in light."

Wilczek's Ph.D. work on asymptotic freedom and quark confinement in Quantum Chromodynamics (QCD) marked the beginning of QCD being accepted and used.

Newton's gravity theory presented the puzzle of the equivalence of inertial and gravitational mass. Einstein's gravity solved the puzzle and "fulfills Newton's aspiration for a theory of gravity based on local action".

    "we can portray the majestic logic of general relativity in ten broad strokes..."

    John Wheeler, the poet of relativity, summed it up this way: "Space-time tells matter how to move; matter tells space-time how to bend."

Wilczek makes the debatable and misleading claim that "The equations of QED, QCD, general relativity, and the weak force, ... have powered many advances, including lasers, transistors, nuclear reactors, MRIs, and GPS."

There's Plenty of Matter and Energy

The amount of solar energy falling on the surface of the earth is vastly greater than current human energy consumption.

The concept of "dynamical complexity" is introduced but not defined. "Music and ritual are purified expressions of dynamical complexity."

"The principle that the essence of human purposes is experienced through flows of information in dynamic complexity, rather than through details of chemistry and physiology, is both mind-expanding and liberating. It challenges us to imagine how minds could emerge elsewhere in the universe, and it prepares us to embrace those minds within our circle of empathy."

To me, this is "mumbo jumbo" and reflects the muddled thinking that occurs when Wilczek wildly extrapolates from "fundamental" physics to broader and deeper questions about humanity. The last chapter has similar weaknesses.

Cosmic History is an Open Book

A lucid short summary of big bang cosmology is presented: what we know and why we know it. The chapter ends with a brief reference to Augustine's prescient insights about time. Time is what clocks measure, and so time did not exist before the beginning of the universe.

Complexity Emerges

How did the featureless simple "soup" that existed a million years after the big bang develop into the complex universe seen today with structures such as stars, galaxies, planets, and biological life? This short chapter (only eight pages) mostly talks about the role of gravity. The chapter could have been much richer by discussing the emergence of complexity in biology, psychology, and sociology. Again, simple laws can produce complex properties.

There's Plenty More to See

The discovery of the Higgs particle and gravitational wave astronomy are both described. Some speculations are made to connect "Quantum Perception and Self-Perception."

Mysteries Remain

What triggered the big bang? Could it happen again?

Are there meaningful patterns hidden in the apparent sprawl of fundamental particles and forces?

How, concretely, does mind emerge from matter? (Or does it?)

Wilczek describes the violation of time-reversal invariance (T) in elementary particle physics and the Peccei-Quinn proposal for a new field to explain it. Wilczek dubbed the corresponding particle the axion, fulfilling a resolution he made in high school, on encountering a laundry detergent called Axion, to one day give a particle that name. The axion "cleans up a problem" in elementary particle physics.



Axions are candidates for dark matter.

Complementarity Is Mind-Expanding

Bohr's concept of complementarity (embodied in wave-particle duality in quantum theory) is embraced. 
Complementarity is the concept that one single thing, when considered from different perspectives, can seem to have very different or even contradictory properties. Complementarity is an attitude toward experiences and problems that I've found eye-opening and extremely helpful. It has literally changed my mind. Through it, I've become larger: more open to imagination, and more tolerant.
I am no fan of this perspective. I am all for having an open mind, considering a range of perspectives, and living with dialectic (intellectual tensions). However, I do not use quantum theory to justify that. There is a multitude of moral, philosophical, social, and political reasons that provide much more compelling justifications for humility. Bohr's perspective, and extrapolations from the atomic world to politics, have a long and dubious history that has been systematically debunked by Mara Beller, including in Physics Today. Nevertheless, these ideas just won't go away, as seen in a recent volume of papers on Quantizing International Relations.

In summary, I love Wilczek's discussions of physics, and I think eight of the ten chapters describe amazing things about the physical world that we have learnt and should contemplate with awe and wonder.  But, two of the chapters make speculations about how the type of theoretical physics that Wilczek has made seminal contributions to is profoundly relevant to technological, social, economic, and political reality. I would much rather draw on the insights and debates from the humanities and social sciences to understand those realities and our place in them.

Tuesday, December 13, 2022

Philosophy in a nutshell

How should we live? What really exists? And how do we know for sure? 

These three questions are at the heart of philosophy as an academic discipline. This raises the question of what the "philosophy of physics" is and what it should be. Philosophy of Physics: A Very Short Introduction by David Wallace explores this. He begins by stating that "Daniel Dennett defines philosophy as what we do when we don't know what questions to ask." I found that somewhat unsatisfying and went to The Oxford Companion to Philosophy.

Most definitions of philosophy are fairly controversial, particularly if they aim to be at all interesting or profound. That is partly because what has been called philosophy has changed radically in scope in the course of history, with many inquiries that were originally part of it having detached themselves from it. The shortest definition, and it is quite a good one, is that philosophy is thinking about thinking. That brings out the generally second-order character of the subject, as reflective thought about particular kinds of thinking—formation of beliefs, claims to knowledge—about the world or large parts of it.

A more detailed, but still uncontroversially comprehensive, definition is that philosophy is rationally critical thinking, of a more or less systematic kind about the general nature of the world (metaphysics or theory of existence), the justification of belief (epistemology or theory of knowledge), and the conduct of life (ethics or theory of value). 

Each of the three elements in this list has a non-philosophical counterpart, from which it is distinguished by its explicitly rational and critical way of proceeding and by its systematic nature. 

Everyone has some general conception of the nature of the world in which they live and of their place in it. Metaphysics replaces the unargued assumptions embodied in such a conception with a rational and organized body of beliefs about the world as a whole. 

Everyone has occasion to doubt and question beliefs, their own or those of others, with more or less success and without any theory of what they are doing. Epistemology seeks by argument to make explicit the rules of correct belief-formation. 

Everyone governs their conduct by directing it to desired or valued ends. Ethics, or moral philosophy, in its most inclusive sense, seeks to articulate, in rationally systematic form, the rules or principles involved. 

The three main parts of philosophy are related in various ways. For us to guide our conduct rationally we need a general conception of the world in which it is carried out and of ourselves as acting in it. Metaphysics presupposes epistemology, both to authenticate the special forms of reasoning on which it relies and to assure the correctness of the large assumptions which, in some of its varieties, it makes about the nature of things, such as that nothing comes out of nothing, that there are recurrences in the world and our experience of it, that the mental is not in space.

On the lighter side here is Philomena Cunk's brief engagement with philosophy.

Friday, December 9, 2022

The wonders and mysteries of bioluminescence

 Members of my family have been reading Phosphorescence: On awe, wonder, and things that sustain you when the world goes dark, a personal memoir by Julia Baird.

This reminded me of how amazing and fascinating bioluminescence is, stimulating me to read more on the science side. One of the first things to do is to distinguish between bioluminescence, fluorescence, and phosphorescence.

Bioluminescence is chemical luminescence whereby a biomolecule emits a photon through the radiative decay of a singlet excited state that is produced by a chemical reaction. 

In contrast, fluorescence occurs when the singlet excited state is produced by the molecule absorbing a photon.

Phosphorescence occurs when a molecule emits a photon through the radiative decay of an excited triplet state that was produced by the absorption of a photon.

Bioluminescence can occur in the dark. Fluorescence cannot, as there are no photons to absorb. Phosphorescence is sometimes seen in the dark, but this is because the molecule has absorbed invisible UV light, producing the triplet state, which has a very long radiative lifetime.
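The practical difference shows up in the radiative lifetimes: spin-allowed singlet decay is fast (nanoseconds) while spin-forbidden triplet decay is slow (up to seconds). The little sketch below uses illustrative order-of-magnitude lifetimes, not data for any particular molecule:

```python
import math

def fraction_remaining(t, tau):
    """Fraction of excited molecules still undecayed after time t,
    assuming simple first-order (exponential) radiative decay."""
    return math.exp(-t / tau)

TAU_FLUOR = 1e-9  # ~nanosecond singlet lifetime (typical order of magnitude)
TAU_PHOS = 1.0    # ~second triplet lifetime (spin-forbidden, hence slow)

t = 1e-6  # one microsecond after the exciting light is switched off
print(fraction_remaining(t, TAU_FLUOR))  # essentially zero: fluorescence stops at once
print(fraction_remaining(t, TAU_PHOS))   # still ~1: phosphorescence persists as an afterglow
```

This is why glow-in-the-dark materials keep glowing after the lights go out, while a fluorescent highlighter does not.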

Baird gives beautiful and enchanted descriptions of seeing "phosphorescence" on her daily early morning ocean swim. She acknowledges that this is actually bioluminescence not phosphorescence. I should stress that in pointing this out I am not "unweaving the rainbow", as for literary purposes using "bioluminescent" would be clunky.

 

There is a useful webpage from a research group at UC Santa Barbara. They also have a detailed review article from which I took the image above.

Steven H.D. Haddock, Mark A. Moline, and James F. Case

A much shorter review that I read this morning is

Bioluminescence in the Ocean: Origins of Biological, Chemical, and Ecological Diversity, by E.A. Widder

An article in Quanta magazine, In the Deep, Clues to How Life Makes Light by Stephanie Yin

So what is the underlying photophysics and quantum chemistry? The following review is helpful.

The Chemistry of Bioluminescence: An Analysis of Chemical Functionalities 

Isabelle Navizet, Ya-Jun Liu, Nicolas Ferré, Daniel Roca-Sanjuán, Roland Lindh

Almost all currently known chemiluminescent substrates have the peroxide bond, -O-O-, in common as a chemiluminophore. This chemical system facilitates the essential mechanism of chemiluminescence—providing a route for a thermally activated chemical ground-state reaction to produce a product in an electronically excited state. The basics of this process can be understood from studies of ... dioxetanone. [it] contains a peroxide bond, [and] fragments like the firefly luciferin system to carbon dioxide.

The squiggly line denotes the bond that is broken to produce the excited singlet state.
The figure below shows the potential energy surface that describes the dynamics leading to the emissive state. Note the presence of two conical intersections.

 

Much of this photophysics can be understood in terms of a "two-site Hubbard model" discussed in this classic paper that I love.

Neutral and Charged Biradicals, Zwitterions, Funnels in S1, and Proton Translocation: Their Role in Photochemistry, Photophysics, and Vision

Vlasta Bonačić-Koutecký, Jaroslav Koutecký, Josef Michl

In simple terms, all that differs in the biomolecular system is that the enzyme and the larger chromophore tune the energy levels so that the energy barriers are much smaller, making the steps needed for bioluminescence accessible at room temperature.
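To get a feel for the two-site model, here is a minimal sketch (my own illustration, not taken from the paper): the half-filled two-site Hubbard model in the Sz = 0 sector, where hopping t mixes covalent and ionic (zwitterionic) configurations and the on-site repulsion U penalises double occupancy. The exact singlet ground-state energy is (U − √(U² + 16t²))/2.

```python
import numpy as np

def two_site_hubbard(t, U):
    """Sz = 0 sector of the half-filled two-site Hubbard model.
    Basis: |up,down>, |down,up>, |updown,0>, |0,updown>.
    The last two (ionic) states cost U; the hopping t mixes them
    with the covalent states."""
    H = np.array([[0.0, 0.0,  -t,  -t],
                  [0.0, 0.0,   t,   t],
                  [ -t,   t,   U, 0.0],
                  [ -t,   t, 0.0,   U]])
    return np.linalg.eigvalsh(H)  # eigenvalues in ascending order

t, U = 1.0, 4.0
E = two_site_hubbard(t, U)
E_exact = (U - np.sqrt(U**2 + 16 * t**2)) / 2  # singlet ground state
print(E[0], E_exact)  # the numerical ground state matches the exact formula
```

As U/t grows, the ionic (zwitterionic) states are pushed up in energy; the enzyme's job, in the language above, is to tune these gaps so that the relevant crossings become thermally accessible.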

This highlights two fundamental things. 

Chemistry is local. This is relevant to understanding Wannier orbitals in solid state physics, hydrogen bonding, and how protein structure aids function.

"Biochemistry is the search for the chemistry that works" [in water at room temperature].

Monday, December 5, 2022

Junior faculty position in condensed matter available at UQ

The physics department at the University of Queensland has just advertised for a junior faculty position in condensed matter. Only applications from women will be considered. The advertisement is here and the closing date is January 19.

The photo is of the beach at Bribie Island, my favourite holiday location, about an hour's drive away.

Aside: it was gratifying that the last faculty hired in condensed matter at UQ, Peter Jacobson, first heard about the position on this blog.

Thursday, December 1, 2022

How can funders promote significant breakthroughs?

 Is real scientific progress slowing? Are funders of research, whether governments, corporations, or philanthropies, getting a good return on their investment? Along with many others (based largely on intuition and anecdote) I believe that the system is broken, and at many different levels. What are possible ways forward? How might current systems of funding be reformed?

The Economist recently published a fascinating column (in the Finance and Economics section!), How to escape scientific stagnation. It reviews a number of recent papers by economists that wrestle with questions such as those above.

Philanthropists... funding of basic research has nearly doubled in the past decade. All these efforts aim to help science get back its risk-loving mojo.

In a working paper published last year, Chiara Franzoni and Paula Stephan look at a number of measures of risk, based on analyses of text and the variability of citations. These suggest science’s reward structure discourages academics from taking chances.

Another approach in vogue is to fund “people not projects”. A study in 2011 compared researchers at the Howard Hughes Medical Institute, where they are granted considerable flexibility over their research agendas and lots of time to carry out investigations, with similarly accomplished ones funded by a standard NIH programme. The study found that researchers at the institute took more risks. As a result, they produced nearly twice as much highly cited work, as well as a third more “flops” (articles with fewer citations than their previously least-cited work). 

Despite the uncertainty about exactly how best to fund scientific research, economists are confident of two things. The first is that a one-size-fits-all approach is not the right answer,... DARPA models, the Howard Hughes Medical Institute’s curiosity-driven method, and even handing out grants by lottery, as the New Zealand Health Research Council has tried, all have their uses.

The second is that this burst of experimentation must continue. The boss of the NSF, Sethuraman Panchanathan, agrees. He is looking to reassess projects whose reviews are highly variable—a possible indication of unorthodoxy. He is also interested in a Willy Wonka-style funding mechanism called the “Golden Ticket”, which would allow a single reviewer to champion a project even if his or her peers do not agree.  ...many venture-capital partnerships employ similar policies, because they prioritise the upside of long-shot projects rather than seeking to minimise failure. 

The study that I would like to see done is along the following lines. Identify at what age, at what type of institution, and in what type of funding environment the biggest breakthroughs happen. I suggest that in the USA you will find they were made by young faculty at the top 20 institutions, in an era when they did not have to worry much about getting grants. If so, then I think most of the money should be given to them!

 

Sunday, November 20, 2022

Emergence in ant colonies

Go to the ant, you sluggard;
    consider its ways and be wise!
It has no commander,
    no overseer or ruler,

yet it stores its provisions in summer
and gathers its food at harvest.

    Proverbs 6:6-8

Ant colonies are amazing. It is incredible what they can achieve. I love the video below. It highlights how complex structures and functions emerge in an ant colony even though there is no individual directing the whole operation.


Ant colonies are often cited as an example of emergence, including how complexity can emerge from simple rules. Ant colonies feature in Gödel, Escher, Bach by Douglas Hofstadter, Emergence: from chaos to order by John Holland, and Emergence: The Connected Lives of Ants, Brains, Cities, and Software by Steven Johnson.

Important steps towards describing and understanding a system with emergent properties include identifying how to break down the system into single components and determining how those components interact with one another.

As described in the video the colony is composed of several distinct classes (castes) of ant: soldiers, excavators, foragers, garbage collectors, and gardeners. Each ant has a very limited repertoire of methods to interact with other ants and their environment. Ants have poor hearing and sight. They communicate with a few signals involving touch, but mostly communicate by producing trails of distinct chemicals (pheromones).  Each organic molecule is identified with a specific message such as follow this trail, detection of food, presence of an enemy, or danger.

For an ant colony the components are simple and the interactions between the parts are simple. Nevertheless, complex structures such as bridges and tree houses emerge. There is no chief engineer directing the construction of these structures or a blueprint drawn up by an architect. The queen is not a dictator mandating that the colony must last for her lifetime, which covers many generations of worker ants.

Ant colonies have characteristic properties of emergent systems. The system has properties that the individual components do not. Complex structures can emerge from a system with simple components and interactions. The properties that emerge are hard to predict a priori. That is, if one knew only the properties of individual ants and how they interact, and not the properties of the colony, it would be hard to predict that they could achieve what they do.
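The positive-feedback logic behind trail formation can be captured in a toy "double bridge" model (an illustrative sketch of my own, not taken from any of the books above): ants repeatedly choose between two identical paths with probability proportional to a power of the pheromone already deposited, and the symmetry between the paths spontaneously breaks.

```python
import random

def double_bridge(n_ants=2000, alpha=2.0, seed=1):
    """Each ant chooses path A or B with probability proportional to
    (pheromone on that path)**alpha, then deposits one unit of pheromone
    on the chosen path. With alpha > 1 the positive feedback is
    self-reinforcing, so one path typically ends up dominating."""
    random.seed(seed)
    pher = [1.0, 1.0]  # both paths start with the same faint trace
    for _ in range(n_ants):
        wa = pher[0] ** alpha
        wb = pher[1] ** alpha
        choice = 0 if random.random() < wa / (wa + wb) else 1
        pher[choice] += 1.0
    return pher

p = double_bridge()
print(p, max(p) / sum(p))  # with alpha > 1, one path typically carries most of the traffic
```

No ant "knows" which path is better; the colony-level choice emerges from local deposits and responses, the same positive feedback highlighted in the Sumpter review quoted below.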

Universality is highlighted in a nice review article, The principles of collective animal behaviour by D.J.T. Sumpter. Some of the abstract is below.

I argue that the key to understanding collective behaviour lies in identifying the principles of the behavioural algorithms followed by individual animals and of how information flows between the animals. These principles, such as positive feedback, response thresholds and individual integrity, are repeatedly observed in very different animal societies. The future of collective behaviour research lies in classifying these principles, establishing the properties they produce at a group level and asking why they have evolved in so many different and distinct natural systems.

Thursday, November 10, 2022

Who should get to attend elite universities in the USA?

Equitable access to good education is a desirable goal. Yet it rarely happens and debate about how to achieve it can be diluted by focusing on access to elite institutions and on "culture war" rhetoric.

This week The Economist had a leader (editorial) about admission policies for universities in the USA. Below I reproduce some of the leader, highlighting some points I found poignant.

A diversity of backgrounds in elite institutions is a desirable goal. In pursuing it, though, how much violence should be done to other liberal principles—fairness, meritocracy, the treatment of people as individuals and not avatars for their group identities? At present, the size of racial preferences is large and hard to defend. The child of two college-educated Nigerian immigrants probably has more advantages in life than the child of an Asian taxi driver or a white child born into Appalachian poverty. Such backgrounds all add to diversity. But, under the current regime, the first is heavily favoured over the others.

Racial preferences are not, however, the most galling thing about the ultra-selective universities that anoint America’s elite. ...A startling 43% of white students admitted to Harvard enjoy some kind of non-academic admissions preference: being an athlete, the child of an alumnus, or a member of the dean’s list of special applicants (such as the offspring of powerful people or big donors). 

A cynic could argue that racial balancing works as a virtue-signalling veneer atop a grotesquely unfair system. A study published in 2017 found that most of Harvard’s undergraduates hailed from families in the top 10% of the income distribution. Princeton had more students from the top 1% than the bottom 60%. When this is the case, it seems unfair that it is often minority students—not the trust-funders—who have their credentials questioned. University presidents and administrators who preen about all their diverse classes might look at how Britain—a country of kings, queens, knights and lords—has fostered a university system that is less riven with ancestral privilege.

...Legacy admissions should be ended. Colleges claiming that alumni donations would wither without them should look to Caltech, MIT and Johns Hopkins ... [who all] ditched the practice.

In some ways, the question of who gets into a handful of elite universities is a distraction from the deeper causes of social immobility in America. Schooling in poorer neighbourhoods was dismal even before covid-19. The long school closures demanded by teachers’ unions wiped out two decades of progress in test scores for nine-year-olds, with hard-up, black and Hispanic children worst affected. Efforts to help the needy should start before birth and be sustained throughout childhood. Nothing the Supreme Court says about the consideration of race in college admissions will affect the more basic problem, that too few Americans from poorer families are sufficiently well-nurtured or well-taught to be ready to apply to college. However the court rules, that is a debate America needs to have.

On a related note, Malcolm Gladwell has a fascinating podcast episode, Outliers, Revisited, that brings out some of the issues, including how privileged parents game the system for their children.

Thursday, November 3, 2022

Did Turing really "explain" pattern formation?

 Exactly seventy years ago, Alan Turing published a seminal article, in which he proposed a simple reaction-diffusion model for pattern formation in biological systems. The basic idea is that there are two molecules (morphogens) that react with one another chemically and also diffuse through the system.


The potential relevance of the model can be seen by comparing the lower panels below. The left panel is a real fish and the right panel shows the results of a simulation. The figure above is taken from a beautiful review article published a decade ago.

Reaction-Diffusion Model as a Framework for Understanding Biological Pattern Formation  Shigeru Kondo and Takashi Miura

The authors state that the model is not accepted by many experimental biologists and hope their review will lead to a greater engagement with it. Some of the reasons are related to issues in the philosophy of science and how to model complex systems. What is an explanation? What is the role of simple models for complex systems that ignore so many details?

Kondo and Miura point out the universality of the reaction-diffusion model in the sense that a similar model can be derived where the "molecules" are instead a circuit of cellular signals. Diffusion can be replaced by a relay of signals between cells. Alfred Gierer and  Hans Meinhardt in 1972 showed that all that is required is a network with "a short-range positive feedback [competing] with a long-range negative feedback."

A short video from the Santa Fe Institute also provides a helpful introduction, including some simulations.

 

There is another problem with Turing's model that is succinctly described in the opening paragraph of a recent PRL. In a system with two molecular species, patterns only form when there is a large difference between the diffusivities of the two molecules. However, this seems unrealistic because one expects the molecules to have comparable diffusivities.

Turing’s Diffusive Threshold in Random Reaction-Diffusion Systems 
Pierre A. Haas and Raymond E. Goldstein 
 In 1952, Turing described the pattern-forming instability that now bears his name [1]: diffusion can destabilize a fixed point of a system of reactions that is stable in well-mixed conditions. Nigh on threescore and ten years on, the contribution of Turing’s mechanism to chemical and biological morphogenesis remains debated, not least because of the diffusive threshold inherent in the mechanism: chemical species in reaction systems are expected to have roughly equal diffusivities, yet Turing instabilities cannot arise at equal diffusivities [2,3]. It remains an open problem to determine the diffusivity difference required for generic systems to undergo this instability, yet this diffusive threshold has been recognized at least since reduced models of the Belousov–Zhabotinsky reaction [4,5] only produced Turing patterns at unphysically large diffusivity differences.

I first became aware of this paper through a commentary by Changbong Hyeon, at the Journal Club for Condensed Matter. It is also helpful because it explains the simple mathematics behind the threshold value of the model parameters for pattern formation. 
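The threshold can be seen directly from linear stability analysis. Perturbations of the form exp(λt + ikx) about a homogeneous steady state grow when the matrix A − k²D acquires an eigenvalue with positive real part, where A is the reaction Jacobian and D the diagonal diffusion matrix. The sketch below uses a hypothetical Jacobian chosen purely for illustration; for these numbers the instability only appears once the inhibitor diffuses several times faster than the activator.

```python
import numpy as np

def max_growth_rate(A, Du, Dv, kmax=5.0, nk=2001):
    """Largest Re(lambda) over wavenumbers k for the linearized
    two-species reaction-diffusion system: perturbations grow when
    some M(k) = A - k^2 * diag(Du, Dv) has an unstable eigenvalue."""
    best = -np.inf
    for k in np.linspace(0.0, kmax, nk):
        M = A - np.diag([Du, Dv]) * k**2
        best = max(best, np.linalg.eigvals(M).real.max())
    return best

# Hypothetical activator-inhibitor Jacobian, chosen for illustration:
# stable without diffusion (negative trace, positive determinant),
# with a self-enhancing activator (A[0,0] > 0).
A = np.array([[1.0, -1.0],
              [2.0, -1.5]])

print(max_growth_rate(A, Du=1.0, Dv=2.0))   # negative: no pattern at comparable diffusivities
print(max_growth_rate(A, Du=1.0, Dv=10.0))  # positive: Turing instability once Dv/Du is large
```

This is the diffusive threshold in miniature: the homogeneous state is stable to well-mixed perturbations, and only a sufficiently large ratio of inhibitor to activator diffusivity lets a band of finite wavelengths grow.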

Thursday, October 27, 2022

A few things I have learnt from professional editors

Until a few years ago I had never engaged with, or received feedback on my writing from, a professional editor. This is because the only genre I wrote that involved an editor was papers for scientific journals. But the editors of journals are not really editors in the literary sense. They are more like gatekeepers. Colleagues and collaborators may provide feedback on written work, but again they are amateurs.

In the past few years, I have been writing some popular articles and a popular book and have been part of a writing group. In the process, I have engaged with several professional editors. They were getting paid to make my writing better. I have learnt a lot. Here are a few of the things. On the one hand, some of this may not seem that relevant to scientific articles and grant applications. On the other hand, think of the joy of reading a beautiful scientific article, such as those by Roald Hoffmann. Think of how many papers you try to read and you cannot figure out what they are actually about. Also, I think this is particularly relevant to writing review articles, somewhat of a lost art.

Can it be shorter? Most of the writing I have worked with editors on had a strict word limit. I struggled to stay within it. However, the editors forced/helped me in three ways. First, the fixed word limit helped me structure the work and be realistic about the volume of content. For example, for my Very Short Introduction, I broke down the 35,000-word limit to ten chapters, each of about 3500 words. This made the writing quite manageable. Second, editors helped by cutting out content that was not essential, even when I loved it. Third, editors rewrote some of my sentences making them both shorter and clearer. Seeing their improvements I became aware of some of my bad habits.

Find your voice and tell a story. We are all unique and each piece of writing is unique and is making a unique point. Don't try and be someone else. A grant application needs to make the case that your proposed project is unique and that you are uniquely qualified to do it. Your writing will be more engaging and compelling if it expresses your unique perspective and there is a natural narrative.

A few of these suggestions overlap with some of Stephen King's writing tips. 

Tuesday, October 18, 2022

Self-organisation in complex fluids

 I am at the beach this week and so a lot of time is spent staring at waves, clouds, sunsets, and patterns in the sand. There is a lot of beauty and a lot of beautiful science, most of which I know only a little about. For example, what is the essential physics and simplest theory that can explain the patterns below?


To start understanding the beautiful patterns seen in natural systems, I have found the following two-page Quick Study in Physics Today helpful.

The universe in a cup of coffee by John Wettlaufer
Your morning java or tea is a rotating, cooling laboratory that reflects the physics of such large-scale phenomena as stellar dynamics and energy transport in Earth’s atmosphere and oceans. 
A nice demonstration is to put the hot liquid in a glass jar and then just add a few drops of cold milk and see the beautiful patterns that emerge.

The key idea is that there is a balance between thermal buoyancy (hot air rises) and viscous stresses. This balance can lead to symmetry breaking and self-organisation. In planetary systems rotation can play a significant role, particularly when there is a balance between viscous forces and the Coriolis force. This can lead to the formation of vortices. The Quick Study includes snapshots from a video that is worth watching, supplementary material from this PRL.
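The competition between thermal buoyancy and viscous damping is captured by the dimensionless Rayleigh number, Ra = g α ΔT d³/(ν κ); convection sets in when Ra exceeds a critical value of about 1708 (for rigid top and bottom boundaries). Here is a back-of-the-envelope estimate for a cup of coffee, with my own illustrative numbers rather than anything from the Quick Study:

```python
# Rough estimate of the Rayleigh number for a cooling cup of coffee.
# The parameter values below are illustrative order-of-magnitude guesses.
g = 9.8          # gravitational acceleration (m/s^2)
alpha = 3e-4     # thermal expansion coefficient of hot water (1/K)
dT = 10.0        # temperature drop across the liquid (K)
d = 0.05         # depth of the liquid (m)
nu = 5e-7        # kinematic viscosity of hot water (m^2/s)
kappa = 1.5e-7   # thermal diffusivity of water (m^2/s)

Ra = g * alpha * dT * d**3 / (nu * kappa)
print(f"Ra = {Ra:.1e}")          # ~ 5e7, orders of magnitude above critical
print("convecting:", Ra > 1708)  # the critical value for rigid boundaries
```

With Ra some four orders of magnitude above the critical value, the coffee convects vigorously, which is why the milk patterns form so readily.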

The article also discusses the importance of Rayleigh-Bénard convection in many geophysical phenomena. Something interesting I learnt is that this name is actually a misnomer, as is often the case in science. According to Wikipedia, 
This pattern of convection, whose effects are due solely to a temperature gradient, was first successfully analyzed in 1916 by Lord Rayleigh (1842–1919).[16] Rayleigh assumed boundary conditions in which the vertical velocity component and temperature disturbance vanish at the top and bottom boundaries (perfect thermal conduction). Those assumptions resulted in the analysis losing any connection with Henri Bénard's experiment. This resulted in discrepancies between theoretical and experimental results until 1958, when John Pearson (1930– ) reworked the problem based on surface tension.[9] This is what was originally observed by Bénard.

Tuesday, October 11, 2022

Systemic flaws that are undermining good science

Everyone likes to be right. But, sometimes I really wish I was wrong, particularly about problems I see in the world. I wish I was wrong about science being broken. Some of these issues I discuss in the final chapter of Condensed Matter Physics: A Very Short Introduction, due to the relevance of these problems to the future of the field.

Similar concerns were discussed with greater clarity, way back in 2014, by four scientists who are much more experienced and distinguished than I am. 

Rescuing US biomedical research from its systemic flaws 
Bruce Alberts, Marc W. Kirschner, Shirley Tilghman, and Harold Varmus

Positions the different authors have held include President of the US National Academy of Sciences, President of Princeton University, and Director of the National Institutes of Health.

Although the article focuses on biomedical research I think the three words "medicine, biomedical, and biology" could be replaced respectively with "technology, materials science, and condensed matter physics" almost everywhere in the article. 

Here are a few quotes.

The long-held but erroneous assumption of never-ending rapid growth in biomedical science has created an unsustainable hypercompetitive system that is discouraging even the most outstanding prospective students from entering our profession—and making it difficult for seasoned investigators to produce their best work. This is a recipe for long-term decline, and the problems cannot be solved with simplistic approaches. Instead, it is time to confront the dangers at hand and rethink some fundamental features of the US biomedical research ecosystem.
... the remarkable outpouring of innovative research from American laboratories—high-throughput DNA sequencing, sophisticated imaging, structural biology, designer chemistry, and computational biology—has led to impressive advances in medicine and fueled a vibrant pharmaceutical and biotechnology sector. In the context of such progress, it is remarkable that even the most successful scientists and most promising trainees are increasingly pessimistic about the future of their chosen career.
... hypercompetition for the resources and positions that are required to conduct science suppresses the creativity, cooperation, risk-taking, and original thinking required to make fundamental discoveries.
The system now favors those who can guarantee results rather than those with potentially path-breaking ideas that, by definition, cannot promise success. Young investigators are discouraged from departing too far from their postdoctoral work, when they should instead be posing new questions and inventing new approaches. Seasoned investigators are inclined to stick to their tried-and-true formulas for success rather than explore new fields. 
One manifestation of this shift to short-term thinking is the inflated value that is now accorded to studies that claim a close link to medical practice. Human biology has always been a central part of the US biomedical effort... Many surprising discoveries, powerful research tools, and important medical benefits have arisen from efforts to decipher complex biological phenomena in model organisms. In a climate that discourages such work by emphasizing short-term goals, scientific progress will inevitably be slowed, and revolutionary findings will be deferred.
As competition for jobs and promotions increases, the inflated value given to publishing in a small number of so-called “high impact” journals has put pressure on authors to rush into print, cut corners, exaggerate their findings, and overstate the significance of their work. 
The development of original ideas that lead to important scientific discoveries takes time for thinking, reading, and talking with peers. Today, time for reflection is a disappearing luxury for the scientific community. 
...administrative tasks are taking up an ever-increasing fraction of the day and present serious obstacles to concentration on the scientific mission itself. 

The following is particularly true of luxury journals. 

Professional editors are increasingly serving in roles played in the past by working scientists and can undermine the enterprise when they base judgments about publication on newsworthiness rather than scientific quality. 
Even after they have landed a research position in academia or research institutes, new investigators wait an average of 4–5 y to receive federal funding for their work compared with 1 y in 1980 (2). Two stark statistics tell much of the tale—the average age at which PhD recipients assume their first tenure-track job is 37 y, and they are approaching 42 y when they are awarded their first NIH grant.

Although it varies across fields and individuals, I get the impression that most scientists do their best work in the rough age range of 35-45. Currently, people are spending most of these years looking for a permanent job and then applying for grants, rather than actually doing science.

The graph below shows how much the system changed in just thirty years. NIH grants became "gentrified". In other words, all the grants now go to "old farts" doing the same old thing, rather than to "young turks" who want to try new things and have a real impact.

Percentage of NIH R01 Principal Investigators aged 36 and younger and aged 66 and older, 1980–2010


The authors did make some concrete proposals and in a follow-up article, they discuss a broader meeting held to discuss the issues.

Addressing systemic problems in the biomedical research enterprise

I do not know to what extent progress has been made in the biomedical community in the past eight years.

Friday, October 7, 2022

Probing the relationship between superexchange and superconductivity in cuprates

One of the most basic ideas in science is the controlled experiment. A single "independent" variable is changed while all others are held fixed. One then observes how the properties of the system change. Unfortunately, reality is more complicated and there are rarely any truly independent variables, particularly in materials science.

Since the discovery of cuprate superconductors in 1986, there has been a constant struggle to tease out systematic trends that can provide insight into the underlying physics causing the superconductivity. This is a challenge because it is difficult to change only one variable. For example, a key property is how the superconductivity changes with the chemical composition of the material, particularly with regard to the doping level, i.e., the density of charge carriers. The problem is that with changes in doping, many other things change as well: the amount of disorder, the periodicity and strength of magnetic interactions, crystal structure, ... 

There is a beautiful recent experimental paper that overcomes these problems. 

On the electron pairing mechanism of copper-oxide high temperature superconductivity

Shane M. O’Mahony, Wangping Ren, Weijiong Chen,  Yi Xue Chong, Xiaolong Liu, H. Eisaki, S. Uchida, M. H. Hamidian, and J. C. Séamus Davis 

In a very clever way, they do all their measurements on a single material of fixed chemical composition and yet vary a key parameter: Epsilon, the size of the energy difference between the relevant oxygen and copper electronic states.

In the material under study, Bi2Sr2CaCu2O8+x, there are CuO5 units, as pictured below. In the crystal there is a modulation of delta, the distance at which the fifth oxygen sits above the CuO4 squares that form the square lattices comprising the layers responsible for the superconductivity.

Due to electrostatics, the distance delta has an effect on the energy Epsilon. This in turn changes the size of the magnetic superexchange between neighbouring copper spins, as pictured below.

In the experiment, an STM is used to measure how Epsilon varies with delta (see the red dots in the figure below). We then expect this to vary the superexchange.

An electron-pair (Josephson) STM is used to measure the magnitude of the superfluid density (electron-pair density) and how it changes with delta (see the blue dots in the figure below).

These two sets of measurements are combined in the second figure below. 

The yellow band in the figure above is the range of values expected from theory, including the recent paper.

Oxygen hole content, charge-transfer gap, covalency, and cuprate superconductivity

Nicolas Kowalski, Sidhartha Shankar Dash, Patrick Sémon, David Sénéchal, and André-Marie Tremblay

The theory is based on DMFT calculations for a three-band Hubbard model, following earlier work including by Weber, Haule, Kotliar, and independently by Maier.

Quanta magazine has a popular report on the experiment. The headline, "High-Temperature Superconductivity Understood at Last", overstates the significance of the experiment.

There are still issues of correlation versus causality. I would also like to see what other theories predict for the relationship between Epsilon and the pairing density. Nevertheless, it is a beautiful experiment and marks a significant advance.

Thursday, September 15, 2022

The wonders of gallium

A friend recently showed me that solid gallium can melt in your hand: its melting temperature of about 30 °C (303 K) is just below body temperature.

I did not know this. I was quite familiar with liquid mercury, but not gallium. 

The existence of elemental gallium was predicted by Mendeleev in 1869 after he constructed the periodic table. It was discovered within six years. He was able to predict that it would have a low melting temperature, based on extrapolations from the known melting temperatures of elements close to it in the periodic table.

Solid gallium is soft enough to be cut with a knife.

Three different stable crystal structures for solid gallium are shown below.


The phase diagram of pure gallium is shown below.

Note the negative slope of the phase boundary between the liquid and solid alpha-Ga. This is like water. It follows from the Clausius-Clapeyron equation that the solid state has a lower density than the liquid state. Gallium is the only elemental metal with this property. (The semimetals antimony and bismuth also have it.)
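This sign argument can be checked numerically. The sketch below plugs handbook-style values for gallium (my numbers, worth double-checking against a data book) into the Clausius-Clapeyron relation dP/dT = L/(T ΔV):

```python
# Clausius-Clapeyron relation along the melting curve: dP/dT = L / (T * dV).
# Handbook-style values for gallium (assumed, not taken from the paper):
L = 5590.0           # latent heat of fusion (J/mol)
T_m = 302.9          # melting temperature at ambient pressure (K)
M = 69.72e-3         # molar mass (kg/mol)
rho_solid = 5910.0   # density of solid alpha-Ga (kg/m^3)
rho_liquid = 6095.0  # density of liquid Ga (kg/m^3)

dV = M / rho_liquid - M / rho_solid  # molar volume change on melting (m^3/mol)
slope = L / (T_m * dV)               # dP/dT along the melting curve (Pa/K)

print(f"dV = {dV:.2e} m^3/mol")  # negative: the liquid is denser than the solid
print(f"dP/dT = {slope / 1e6:.0f} MPa/K")  # negative slope, as in the diagram
```

The slope comes out at roughly minus 50 MPa/K, i.e. tens of atmospheres of pressure lower the melting point by only a fraction of a degree.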

Gallium remains liquid over a wider range of temperatures (2373 K) than any other known substance.

The figures above are taken from the following paper from 2020.

Ab initio phase diagram and nucleation of gallium

Haiyang Niu, Luigi Bonati, Pablo M. Piaggi, and Michele Parrinello

Unfortunately, that paper does not provide much insight into the low melting temperature. The key is that the solid state contains Ga2 dimers that are weakly bonded to each other. A helpful discussion is the introduction to the following paper.

On the bonding of Ga2, structures of Gan clusters and the relation to the bulk structure of gallium 

N. Gaston and A.J. Parker

The image above is from the entry on Gallium in the beautiful book The Elements by Theodore Gray.

I thank my young friend Alexey for introducing me to the wonders of gallium.

Thursday, September 8, 2022

Very Short Introduction can be pre-ordered

 


I am currently working on the proofs and index for Condensed Matter Physics: A Very Short Introduction. It is wonderful to have got to this stage.

It is slated for release on December 29. It can be pre-ordered from Oxford UP (GBP 9), Amazon (US $12), Book Depository (US $16), ...

Friday, September 2, 2022

The value of "simple" models for complex systems

Significant understanding of emergent phenomena in quantum materials has come from the study of model Hamiltonians such as those associated with the names Hubbard, Anderson, Kondo, Heisenberg, Kitaev, Haldane, BCS,...
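For concreteness, the simplest of these, the one-band Hubbard model, takes only one line to write down (t is the amplitude for electrons to hop between neighbouring lattice sites and U is the on-site Coulomb repulsion):

```latex
H = -t \sum_{\langle i,j \rangle, \sigma}
      \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \right)
    + U \sum_{i} n_{i\uparrow} n_{i\downarrow}
```

Despite its deceptive simplicity, this model captures the Mott metal-insulator transition and is central to theories of cuprate superconductivity.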

I had not appreciated until recently that an early key to the Modern Synthesis of evolutionary biology (that brought together Darwinian natural selection with Mendelian genetics) was the development of simple mathematical models. The discussion below is taken from

Towards a unified science of cultural evolution 
Alex Mesoudi, Andrew Whiten and Kevin N. Laland 
Significant advances were made in the study of biological [micro]evolution before its molecular basis was understood, in no small part through the use of simplified mathematical models, pioneered by Fisher (1930), Wright (1931), and J.B.S. Haldane (1932)... 
Mathematical models such as [those for cultural evolution and gene-culture coevolution] are often treated with suspicion and even hostility by some social scientists, who consider them to be oversimplifications of reality... The alternatives..., however, are usually either analysis at a single (purely genetic or purely cultural) level or vague verbal accounts of “complex interactions,” neither of which we believe to be productive. Gene-culture analyses have repeatedly revealed circumstances under which the interactions between genetic and cultural processes lead populations to different equilibria than those predicted by single level models or anticipated in verbal accounts... as illustrated by the aforementioned examples of dairy farming and handedness.  
Interestingly, fifty years ago the same reservations about simplifying assumptions were voiced about the use of population genetic models in biology by the prominent evolutionary biologist Ernst Mayr (1963). He argued that using such models was akin to treating genetics as pulling coloured beans from a bag (coining the phrase “beanbag genetics”), ignoring complex physiological and developmental processes that lead to interactions between genes. 
 

In his classic article “A Defense of Beanbag Genetics,” J. B. S. Haldane (1964) countered that the simplification of reality embodied in these models is the very reason for their usefulness. Such simplification can significantly aid our understanding of processes that are too complex to be considered through verbal arguments alone, because mathematical models force their authors to specify explicitly and exactly all of their assumptions, to focus on major factors, and to generate logically sound conclusions. Indeed, such conclusions are often counterintuitive to human minds relying solely on informal verbal reasoning. 

Haldane (1964) provided several examples in which empirical facts follow the predictions of population genetic models in spite of their simplifying assumptions, and noted that models can often highlight the kind of data that need to be collected to evaluate a particular theory. Ultimately, Haldane won the argument, and population genetic modelling is now an established and invaluable tool in evolutionary biology (Crow 2001). We can only echo Haldane’s defence and argue that the same arguments apply to the use of similar mathematical models in the social sciences.
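To make Haldane's point concrete, here is the simplest "beanbag" calculation I know of (my own illustration, not taken from any of the papers above): for a haploid allele with fitness advantage s, the frequency p obeys the exact recursion p' = p(1+s)/(1+sp), so a rare beneficial mutation spreads to near-fixation in a predictable number of generations.

```python
# The simplest population-genetic recursion: a haploid allele with a
# fitness advantage s has frequency p' = p*(1+s) / (1 + s*p) in the
# next generation (the odds p/(1-p) grow by a factor (1+s) each time).
def next_freq(p, s):
    return p * (1 + s) / (1 + s * p)

p, s = 0.01, 0.05  # a rare allele (1%) with a 5% selective advantage
generations = 0
while p < 0.99:
    p = next_freq(p, s)
    generations += 1

print(f"p > 0.99 after {generations} generations")  # roughly 190 generations
```

This kind of quantitative, testable prediction is exactly what vague verbal accounts of "complex interactions" cannot deliver.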

A more recent version of J. B. S. Haldane's argument is Not Just a Theory—The Utility of Mathematical Models in Evolutionary Biology

Maria R. Servedio, Yaniv Brandvain, Sumit Dhole, Courtney L. Fitzpatrick, Emma E. Goldberg, Caitlin A. Stern, Jeremy Van Cleve, and D. Justin Yeh 


All models are wrong but some are useful. I first learnt this aphorism from Scott Page, in his wonderful course Model Thinking at Coursera.  This short talk discusses how models help us think more clearly. Simple quantitative models, such as agent-based models, in the social sciences, have the value that their assumptions can be clearly stated, and then the consequences of these assumptions can be investigated in a rigorous manner.

There is also a nice discussion of the importance of model building for science in John Holland's beautiful book, Emergence. 

Monday, August 22, 2022

Hysteresis, hype, niches, nudges and social change

The world is a mess. Most people want a better world. Sometimes nothing changes. Sometimes things change incredibly rapidly. Sometimes changes are positive. Other times the change is negative. Often this change is unanticipated, even by experts who have been studying the relevant topic for decades. Wicked problems are things that seem to be incredibly resistant to change. Examples of rapid changes that were (largely) positive and unanticipated were the peaceful collapse of the former Soviet empire, smoking in public becoming taboo, and increased public concern about climate change. Examples of negative changes include the rise of Trumpism, misinformation on social media, and the global financial crisis of 2008.

Many people in government, public policy, NGOs, and social activists want to implement policies and take actions that will produce outcomes that (they believe) are positive. Here I discuss some basic but very important insights from "social physics", such as discussed in my previous two posts.

Suppose the system of interest can be modelled by some type of Ising model where the pseudospin corresponds to two choices (good and bad) for each agent in the system. The policy maker wants to change something such as increase the incentive for agents to make the "good" choice. There are two qualitatively different possible behaviours and they are shown in the Figure below (taken from Bouchaud). 

The vertical axis is the "magnetisation", i.e., the fraction of agents who make the good choice. The horizontal axis is the "external field", i.e., the level of incentive provided for agents to make the good choice. 


Case I. Smooth curve (blue). This occurs when the interaction between agents is weaker than some threshold strength. Suppose that a small but not insignificant minority of agents are already making the good choice and then incentive is increased slightly. If one is near the steep part of the blue curve then this "nudge" can produce a desired outcome for the society.

Case II. Discontinuous curve (red). This occurs when the interaction between agents is greater than some threshold strength. People's choices are influenced more by their friends than by what the government or an NGO is telling them to do. Then one has to provide very large incentives to get a change in agent choice, far beyond the incentive required for a single isolated agent. The system is stuck in a state that is not good for society as a whole. It is a metastable state, as shown in the figure below.

On the other hand, if the "polarisation field" is sitting near a critical value (5 in the figure, a tipping point), then a "nudge" can lead to a dramatic change for good. 
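The two cases in the figure can be reproduced with a few lines of code. The sketch below, with my own illustrative parameter values, iterates the mean-field self-consistency condition m = tanh(β(Jm + F)) while the incentive F is slowly ramped up from zero; carrying the state along the ramp is what tracks the metastable branch.

```python
import math

def equilibrate(m, F, J, beta=1.0, n_iter=2000):
    # Iterate the Curie-Weiss self-consistency m = tanh(beta*(J*m + F)),
    # starting from the current state so metastable branches are tracked.
    for _ in range(n_iter):
        m = math.tanh(beta * (J * m + F))
    return m

def ramp_field(J, fields):
    # Slowly increase the incentive F, carrying the population state along.
    m, curve = -0.999, []  # start with (nearly) everyone in the bad state
    for F in fields:
        m = equilibrate(m, F, J)
        curve.append(m)
    return curve

fields = [0.05 * i for i in range(31)]     # incentive F from 0 to 1.5
weak = ramp_field(J=0.5, fields=fields)    # Case I: smooth response
strong = ramp_field(J=2.0, fields=fields)  # Case II: stuck, then a jump

print("m at F=0.05:", round(weak[1], 3), round(strong[1], 3))
print("m at F=1.50:", round(weak[-1], 3), round(strong[-1], 3))
```

For J = 0.5 a tiny incentive already produces a positive response, whereas for J = 2 the population stays locked near m ≈ −1 until F becomes comparable to J, at which point it jumps discontinuously.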

I think there are important implications for social activists of all stripes. Realistic expectations are key.

1. Don't expect even the best-designed and well-intentioned policy or action to necessarily have the impact you hope for.

2. Be sceptical about hype and ideology. In the public space there are a lot of claims, whether from political parties, pundits, or NGOs, that if we just do X (change this law, donate money, do what my book says, ...) then the good Y will inevitably follow.

The problem with unrealistic expectations is that they lead to disappointment, disillusionment, and burnout. People give up. Then the next fad or "silver bullet" comes along...

Inspired by a rugged landscape perspective, a better and more sustainable approach is that of learning and adaptation. One identifies what one thinks the best "nudge" is, tries something, evaluates the effect, adapts, and tries out some new ideas. One does not claim or expect the first few iterations to produce a significant desired effect. Here, somewhat "random" sampling of the landscape may help, and a diversity of perspectives and methods can play a positive role. A more concrete version of this argument is in a paper concerned with public health initiatives: Rugged landscapes: complexity and implementation science, by Joseph T. Ornstein, Ross A. Hammond, Margaret Padek, Stephanie Mazzucca & Ross C. Brownson 

Postscript. After posting this I remembered reading a recent article in The Economist pointing out that nudges often do not work.

Evidence for behavioural interventions looks increasingly shaky 
The academic literature is plagued by publication bias 

It references three recent Letters in PNAS, including this one, that come to the opposite conclusion to an earlier PNAS paper by
Stephanie Mertens, Mario Herberz, Ulf J. J. Hahnel, and Tobias Brosch

Friday, August 12, 2022

Sociological insights from statistical physics

Condensed matter physics and sociology are both about emergence. Phenomena in sociology that are intellectually fascinating and important for public policy often involve qualitative change, tipping points, and collective effects. One example is how social networks influence individual choices, such as whether or not to get vaccinated. In my previous post, I briefly introduced some Ising-type models that allow the investigation of fundamental questions in sociology. The main idea is to include heterogeneities and interactions in models of decision-making. 

What follows is drawn from Sections 2 and 3 of the following paper from the Journal of Statistical Physics. 

Crises and Collective Socio-Economic Phenomena: Simple Models and Challenges by Jean-Philippe Bouchaud

Bouchaud first considers a homogeneous population which reaches an equilibrium state. This is then described by an Ising model with an interaction J between agents, in an external field F that describes the incentive for the agents to make one of the choices. The state of the model (in the mean-field approximation) is then found by solving the Curie-Weiss equation. In the sociological context, this was first derived by Weidlich, and in the economic context re-derived by Brock and Durlauf. (Aside: The latter paper is in one of the "top-five" economics journals, was published five years after submission, and has been cited more than 2000 times.)

As first noted by Weidlich, a spontaneous “polarization” of the population occurs in the low noise regime β > β_c, i.e. [the average equilibrium value of S_z] ϕ* ≠ 1/2 even in the absence of any individually preferred choice (i.e. F = 0). When F ≠ 0, one of the two equilibria is exponentially more probable than the other, and in principle the population should be locked into the most likely one: ϕ* > 1/2 whenever F > 0 and ϕ* < 1/2 whenever F < 0.

Unfortunately, the equilibrium analysis is not sufficient to draw such an optimistic conclusion. A more detailed analysis of the dynamics is needed, which reveals that the time needed to reach equilibrium is exponentially large in the number of agents, and as noted by Keynes, "in the long run, we are all dead." This situation is well-known to physicists, but is perhaps not so well appreciated in other circles—for example, it is not discussed by Brock and Durlauf.

Bouchaud then discusses the meta-stability associated with the two possible polarisations, as occurs in a first-order phase transition. From a non-equilibrium dynamical analysis, based on a Langevin equation, 

one finds that the time τ needed for the system, starting around ϕ = 0, to reach ϕ* ≈ 1 is given by: τ ∝ exp[A N (1 − F/J)], where A is a numerical factor. This means that whenever 0 < F < J, the system should really be in the socially good minimum ϕ* ≈ 1, but the time to reach it is exponentially large in the population size. The important point about this formula is the presence of the factor N(1 − F/J) in the exponential.

In other words, it has no chance of ever getting there on its own for large populations. Only when F reaches J, i.e. when the adoption cost C becomes zero will the population be convinced to shift to the socially optimal equilibrium...
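Plugging illustrative numbers into the formula for τ (A = 1 and F/J = 1/2; my choices, purely for illustration) shows how quickly the equilibration time blows up with population size:

```python
import math

# tau ~ exp[A * N * (1 - F/J)], in units of the single-agent flipping time.
A, F_over_J = 1.0, 0.5  # illustrative values, not from Bouchaud's paper
for N in [10, 20, 50, 100]:
    tau = math.exp(A * N * (1 - F_over_J))
    print(f"N = {N:3d}: tau ~ {tau:.1e}")
```

Already for a "population" of only 100 agents the waiting time is of order 10^21 microscopic flipping times; for a real society, it is effectively never.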

This is very different from the standard model of innovation diffusion, based on a simple differential equation proposed by Bass in 1969 [cited more than 10,000 times].

In physics, the existence of mutually inaccessible minima with different potentials is a pathology of mean-field models that disappears when the interaction is short-ranged. In this case, the transition proceeds through “nucleation”, i.e. droplets of the good minimum appear in space and then grow by flipping spins at the boundaries. 

This suggests an interesting policy solution when social pressure resists the adoption of a beneficial practice or product: subsidize the cost locally, or make the change compulsory there, so that adoption takes place in localized spots from which it will invade the whole population. The very same social pressure that was preventing the change will make it happen as soon as it is initiated somewhere.

This analysis provides concepts for understanding wicked problems. Societies get "trapped" in situations that are not for the common good, and outside interventions, such as providing incentives for individuals to make better choices, have little impact.

In the next post, I hope to discuss the role of heterogeneity (i.e. the role of a random field in the Ising model). A seminal paper published in the American Journal of Sociology in 1978 is Threshold models of collective behavior  by Mark Granovetter. It has been cited more than 6000 times. The central idea is how changes in heterogeneity can induce a transition between two different collective states.
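A minimal version of Granovetter's model is easy to code. The sketch below is my own rendering of his classic example, with synchronous updating: each agent adopts a behaviour once the current fraction of adopters reaches its personal threshold. With thresholds 0, 1/n, 2/n, ..., a single instigator tips the whole population, while shifting just one threshold stalls the cascade almost immediately.

```python
# Granovetter-style threshold model: an agent adopts once the current
# fraction of adopters reaches its personal threshold.
def cascade(thresholds):
    n, adopters = len(thresholds), 0
    while True:
        frac = adopters / n
        new = sum(1 for t in thresholds if t <= frac)
        if new == adopters:  # no further agents tip over: steady state
            return frac
        adopters = new

n = 100
uniform = [i / n for i in range(n)]  # thresholds 0, 1/n, 2/n, ...
print("uniform thresholds:", cascade(uniform))  # -> 1.0, full adoption

perturbed = list(uniform)
perturbed[1] = 2 / n                 # remove the lone threshold-1/n agent
print("one threshold changed:", cascade(perturbed))  # -> 0.01, cascade stalls
```

Two nearly identical populations thus end up in completely different collective states, which is Granovetter's central point about the role of heterogeneity.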

Aside: The famous Keynes quote was in his 1923 book, A Tract on Monetary Reform. The fuller quote is “But this long run is a misleading guide to current affairs. In the long run we are all dead. Economists set themselves too easy, too useless a task, if in tempestuous seasons they can only tell us, that when the storm is long past, the ocean is flat again.”
