Tuesday, June 17, 2025

Lamenting the destruction of science in the USA

I continue to follow the situation in the USA concerning the future of science with concern. Here are some of the articles I found most informative (and alarming).

Trump Has Cut Science Funding to Its Lowest Level in Decades (New York Times). It has helpful graphics.

On the proposed massive cuts to the NSF budget, the table below [courtesy of Doug Natelson] is informative and disturbing. At the end of the day, it is all about people [real live humans and investment in human capital].

APS News | US physics departments expect to shrink graduate programs [I was quite surprised the expected shrinkage isn't greater].

From an Update on NSF Priorities

Are you still funding research on misinformation/disinformation?

Per the Presidential Action announced January 20, 2025, NSF will not prioritize research proposals that engage in or facilitate any conduct that would unconstitutionally abridge the free speech of any American citizen. NSF will not support research with the goal of combating "misinformation," "disinformation," and "malinformation" that could be used to infringe on the constitutionally protected speech rights of American citizens across the United States in a manner that advances a preferred narrative about significant matters of public debate.

The Economist had a series of articles in the May 24 issue [Science and Technology section] that put the situation concerning research and universities in a broader context. The associated editorial is MAGA’s assault on science is an act of grievous self-harm, featuring the graphic below.

I welcome comments and suggestions of other articles.

Wednesday, June 11, 2025

Pattern formation and emergence

Patterns in space and/or time form in fluid dynamics (Rayleigh-Bénard convection and Taylor-Couette flow), laser physics, materials science (dendrites in the formation of solids from liquid melts), biology (morphogenesis), and chemistry (Belousov-Zhabotinsky reactions). External constraints, such as temperature gradients, drive most of these systems out of equilibrium. 

Novelty. 

The parts of the system can be viewed as either the molecular constituents or small uniform regions of the medium. In either case, the whole system has a property (a pattern) that the parts do not have.

Discontinuity. 

When some parameter becomes larger than a critical value, the system transitions from a uniform state to a non-uniform state. 

Universality. 

Similar patterns, such as convection rolls in fluids, can be observed in diverse systems regardless of the microscopic details of the fluid. Often, a single parameter, such as the Reynolds number, which combines several fluid properties, determines the type of pattern that forms. Cross and Hohenberg highlighted how the models and mechanisms of pattern formation across physics, chemistry, and biology have similarities. Turing’s model for pattern formation in biology associated it with concentration gradients of reacting and diffusing molecules. However, Gierer and Meinhardt showed that it is sufficient to have a network with competition between short-range positive feedback and long-range negative feedback. This could occur in a circuit of cellular signals.

Self-organisation. 

The formation of a particular pattern occurs spontaneously, resulting from the interaction of the many components of the system.

Effective theories. 

A crystal growing from a liquid melt can form shapes such as dendrites. This process involves instabilities of the shape of the crystal-liquid interface. The interface dynamics are completely described by a few partial differential equations that can be derived from macroscopic laws of thermodynamics and heat conduction. A helpful review is by Langer. 
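Schematically, and up to sign and normalisation conventions, the standard dimensionless formulation (as in Langer's review) consists of a diffusion equation for the temperature field u in the bulk phases, together with two conditions on the moving solid-liquid interface: heat conservation (the Stefan condition) and the Gibbs-Thomson relation between the interface temperature and its curvature,

\partial_t u = D \nabla^2 u, \qquad v_n = D\left[(\partial_n u)_{\rm solid} - (\partial_n u)_{\rm liquid}\right], \qquad u_{\rm interface} = -d_0 \kappa,

where v_n is the normal velocity of the interface, d_0 is the capillary length, and \kappa is the interface curvature. The competition between the destabilising diffusion field and the stabilising surface tension (the Mullins-Sekerka instability) underlies the formation of dendrites.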

Diversity. 

Diverse patterns are observed, particularly in biological systems. In toy models with just a few parameters, such as the Turing model, varying the parameters can produce a diverse range of patterns in both time and space. Many repeated iterations can lead to a diversity of structures. This may result from a sensitive dependence on initial conditions and history. For example, every snowflake is different because, as it falls, it passes through a slightly different environment, with small variations in temperature and humidity, compared to others.

Toy models. 

Turing proposed a model for morphogenesis in 1952 that involved two coupled reaction-diffusion equations. Homogeneous concentrations of the two chemicals become unstable when the difference between the two diffusion constants becomes sufficiently large. A two-dimensional version of the model can produce diverse patterns, many resembling those found in animals. However, after more than seventy years of extensive study, many developmental biologists remain sceptical of the relevance of the model, partly because it is not clear whether it has a microscopic basis. Kicheva et al. argue that “pattern formation is an emergent behaviour that results from the coordination of events occurring across molecular, cellular, and tissue scales.”
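To illustrate how easily such patterns appear, here is a minimal sketch of a two-species reaction-diffusion simulation. It uses the Gray-Scott model, a close cousin of Turing's original equations, with standard parameter values in the pattern-forming regime (the code is illustrative, not taken from any of the papers mentioned above). The essential ingredient is that the two species diffuse at different rates, so a nearly uniform initial state spontaneously develops spots and stripes.

```python
import numpy as np

# Minimal Gray-Scott reaction-diffusion simulation (illustrative sketch).
# A nearly uniform initial state spontaneously develops spatial patterns
# because the two species diffuse at different rates.

N = 128                      # grid size
Du, Dv = 0.16, 0.08          # unequal diffusion constants (essential for the instability)
F, k = 0.060, 0.062          # feed and kill rates (standard pattern-forming values)
dt = 1.0

rng = np.random.default_rng(0)
u = np.ones((N, N))
v = np.zeros((N, N))
# perturb a small central patch; without some inhomogeneity the uniform state persists
r = slice(N // 2 - 5, N // 2 + 5)
u[r, r], v[r, r] = 0.50, 0.25
u += 0.01 * rng.random((N, N))
v += 0.01 * rng.random((N, N))

def laplacian(a):
    """Five-point Laplacian with periodic boundary conditions (lattice spacing = 1)."""
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

for step in range(10000):
    uvv = u * v * v
    u += dt * (Du * laplacian(u) - uvv + F * (1 - u))
    v += dt * (Dv * laplacian(v) + uvv - (F + k) * v)

# u now contains a spotted/labyrinthine pattern; visualise with, e.g.,
# matplotlib.pyplot.imshow(u)
```

Changing F and k by a few per cent produces qualitatively different patterns (spots, stripes, labyrinths), a small-scale illustration of the diversity discussed above.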

Other toy models include Diffusion Limited Aggregation, due to Witten and Sander, and Barnsley’s iterated function system for fractals that produces a pattern like a fern.
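For the second example, here is a sketch of Barnsley's iterated function system, using the standard published coefficients: four affine maps applied at random with fixed probabilities, whose repeated iteration generates the fern.

```python
import numpy as np

# Barnsley's iterated function system for the fern: four affine maps applied at
# random with fixed probabilities. Repeated iteration of these simple rules
# generates a complex, self-similar pattern (standard published coefficients).

maps = [
    # (a, b, c, d, e, f, probability) for (x, y) -> (a x + b y + e, c x + d y + f)
    (0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01),   # stem
    (0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85),   # successively smaller leaflets
    (0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07),   # largest left-hand leaflet
    (-0.15, 0.28,  0.26, 0.24, 0.0, 0.44, 0.07),   # largest right-hand leaflet
]
probs = [m[6] for m in maps]

rng = np.random.default_rng(0)
x, y = 0.0, 0.0
points = []
for _ in range(50000):
    a, b, c, d, e, f, _ = maps[rng.choice(4, p=probs)]
    x, y = a * x + b * y + e, c * x + d * y + f
    points.append((x, y))

# Scatter-plotting the points (e.g. with matplotlib) reveals the fern.
```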


Here is a beautiful lecture on Pattern Formation in Biology by Vijaykumar Krishnamurthy.

 

Monday, May 26, 2025

Emergence and quantum theories of gravity

Einstein’s theory of General Relativity successfully describes gravity at large scales of length and mass. In contrast, quantum theory describes small scales of length and mass. Emergence is central to most attempts to unify the two theories. Before considering specific examples, it is useful to make some distinctions.

First, a quantum theory of gravity is not necessarily the same as a theory that unifies gravity with the three other forces described by the Standard Model. Whether the two problems are inextricably linked is unknown.

Second, there are two distinct possibilities on how classical gravity might emerge from a quantum theory. In Einstein’s theory of General Relativity, space-time and gravity are intertwined. Consequently, the two possibilities are as follows.

i. Space-time is not emergent. Classical General Relativity emerges from an underlying quantum field theory describing fields at small length scales, probably comparable to the Planck length.

ii. Space-time emerges from some underlying granular structure. In some limit, classical gravity emerges with the space-time continuum. 

Third, there are "bottom-up" and "top-down" approaches to discovering how classical gravity emerges from an underlying quantum theory, as was emphasised by Bei Lok Hu.

Finally, there is the possibility that quantum theory itself is emergent, as discussed in an earlier post about the quantum measurement problem. Some proposals of Emergent Quantum Mechanics (EmQM) attempt to include gravity.

I now mention several different approaches to quantum gravity and for each point out how they fit into the distinctions above.

Gravitons and semi-classical theory

A simple bottom-up approach is to start with classical General Relativity and consider gravitational waves as the normal modes of oscillation of the space-time continuum. They have a linear dispersion relation and move at the speed of light. They are analogous to sound waves in an elastic medium. Semi-classical quantisation of gravitational waves leads to gravitons, the quanta of a massless spin-2 field. They are the analogue of phonons in a crystal or photons in the electromagnetic vacuum. However, this reveals nothing about an underlying quantum theory, just as phonons with a linear dispersion relation reveal nothing about the underlying crystal structure.
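Schematically, writing the metric as a small perturbation of flat space-time,

g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}, \qquad |h_{\mu\nu}| \ll 1,

the vacuum Einstein equations linearised in h reduce (in the transverse-traceless gauge) to a wave equation, \Box h_{\mu\nu} = 0, whose plane-wave solutions have the linear dispersion relation \omega = c|k|. Quantising these modes, exactly as one quantises lattice vibrations to obtain phonons, gives the massless spin-2 gravitons.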

On the other hand, one can start with a massless spin-2 quantum field and consider how it scatters off massive particles. In the 1960s, Weinberg showed that gauge invariance of the scattering amplitudes implied the equivalence principle (inertial and gravitational mass are identical) and the Einstein field equations. In a sense, this is a top-down approach, as it is a derivation of General Relativity from an underlying quantum theory. In passing, I mention Weinberg used a similar approach to derive charge conservation and Maxwell’s equations of classical electromagnetism, and classical Yang-Mills theory for non-abelian gauge fields. 

Weinberg pointed out that this could go against his reductionist claim that in the hierarchy of the sciences, the arrows of the explanation always point down, saying “sometimes it isn't so clear which way the arrows of explanation point… Which is more fundamental, general relativity or the existence of particles of mass zero and spin two?”

More recently, Weinberg discussed General Relativity as an effective field theory:

"... we should not despair of applying quantum field theory to gravitation just because there is no renormalizable theory of the metric tensor that is invariant under general coordinate transformations. It increasingly seems apparent that the Einstein–Hilbert Lagrangian √gR is just the least suppressed term in the Lagrangian of an effective field theory containing every possible generally covariant function of the metric and its derivatives..."

This is a bottom-up approach. Weinberg then went on to discuss a top-down approach:

“it is usually assumed that in the quantum theory of gravitation, when Λ reaches some very high energy, of the order of 10^15 to 10^18 GeV, the appropriate degrees of freedom are no longer the metric and the Standard Model fields, but something very different, perhaps strings... But maybe not..."

String theory 

Versions of string theory from the 1980s aimed to unify all four forces. They were formulated in terms of nine spatial dimensions and a large internal symmetry group, such as SO(32), where supersymmetric strings were the fundamental units. In the low-energy limit, vibrations of the strings are identified with elementary particles in four-dimensional space-time. A particle with mass zero and spin two appears as an immediate consequence of the symmetries of the string theory. Hence, this was originally claimed to be a quantum theory of gravity. However, subsequent developments have found that there are many alternative string theories and it is not possible to formulate the theory in terms of a unique vacuum.

AdS-CFT correspondence

In the context of string theory, this correspondence conjectures a connection (a dual relation) between classical gravity in Anti-deSitter space-time (AdS) and quantum conformal field theories (CFTs), including some gauge theories. This connection could be interpreted in two different ways. One is that space-time emerges from the quantum theory. Alternatively, the quantum theory emerges from the classical gravity theory.   This ambiguity of interpretation has been highlighted by Alyssa Ney, a philosopher of physics. In other words, it is ambiguous which of the two sides of the duality is the more fundamental. Witten has argued that AdS-CFT suggests that gauge symmetries are emergent. However, I cannot follow his argument.

Seiberg reviewed different approaches, within the string theory community, that lead to spacetime as emergent. An example of a toy model is a matrix model for quantum mechanics [which can be viewed as a zero-dimensional field theory]. Perturbation expansions can be viewed as discretised two-dimensional surfaces. In a large N limit, two-dimensional space and general covariance (the starting point for general relativity) both emerge. Thus, this shows how both two-dimensional gravity and spacetime can be emergent. However, this type of emergence is distinct from how low-energy theories emerge. Seiberg also notes that there are no examples of toy models where time (which is associated with locality and causality) is emergent.

Loop quantum gravity 

This is a top-down approach where both space-time and gravity emerge together from a granular structure, sometimes referred to as "spin foam" or a “spin network”, and has been reviewed by Rovelli. The starting point is Ashtekar’s demonstration that General Relativity can be described using the phase space of an SU(2) Yang-Mills theory. A boundary in four-dimensional space-time can be decomposed into cells and this can be used to define a dual graph (lattice) Gamma. The gravitational field on this discretised boundary is represented by the Hilbert space of a lattice SU(2) Yang-Mills theory. The quantum numbers used to define a basis for this Hilbert space are the graph Gamma,  the “spin” [SU(2) quantum number] associated with the face of each cell, and the volumes of the cells. The Planck length limits the size of the cells. In the limit of the continuum and then of large spin, or vice versa, one obtains General Relativity.

Quantum thermodynamics of event horizons

A bottom-up approach was taken by Padmanabhan. He emphasises Boltzmann's insight: "matter can only store and transfer heat because of internal degrees of freedom". In other words, if something has a temperature and entropy, then it must have a microstructure. He applies this insight to gravity by considering the connection between event horizons in General Relativity and the temperature of the thermal radiation associated with them. He frames his research as attempting to estimate Avogadro’s number for space-time.

The temperature and entropy associated with event horizons have been calculated for the following specific space-times [the standard expressions are collected after the list]:

a. For accelerating frames of reference (Rindler space-time) there is an event horizon which exhibits Unruh radiation with a temperature that was calculated by Fulling, Davies and Unruh.

b. The black hole horizon in the Schwarzschild metric has the temperature of Hawking radiation.

c. The cosmological horizon in deSitter space is associated with a temperature proportional to the Hubble constant H, as discussed in detail by Gibbons and Hawking.
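For reference, the standard expressions for these three temperatures (with k_B Boltzmann's constant, a the proper acceleration, and M the black hole mass) are

T_{\rm Unruh} = \frac{\hbar a}{2\pi c k_B}, \qquad T_{\rm Hawking} = \frac{\hbar c^3}{8\pi G M k_B}, \qquad T_{\rm deSitter} = \frac{\hbar H}{2\pi k_B}.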

Padmanabhan considers the number of degrees of freedom on the boundary of the event horizon, N_s, and in the bulk, N_b. He argues for the holographic principle that N_s = N_b. On the boundary surface, there is one degree of freedom associated with every Planck area, N_s = A/L_p^2, where L_p is the Planck length and A is the surface area, which is related to the entropy of the horizon, as first discussed by Bekenstein and Hawking. In the bulk, classical equipartition of energy is assumed, so the bulk energy is E = N_b k_B T/2.

Padmanabhan gives an alternative perspective on cosmology through a novel derivation of the dynamic equations for the scale factor R(t) in the Friedmann-Robertson-Walker metric of the universe in General Relativity. His starting point is a simple argument leading to

dV/dt = L_p^2 (N_s - N_b)

where V is the Hubble volume, 4\pi/(3H^3), H is the Hubble constant, and L_p is the Planck length. The right-hand side is zero for the deSitter universe, which is predicted to be the asymptotic state of our current universe.

He presents an argument that the cosmological constant is related to the Planck length, leading to the expression  

where μ is a constant of order unity; this gives a value consistent with observation.

Tuesday, May 20, 2025

The triumphs of lattice gauge theory

When first proposed by Ken Wilson in 1974, lattice gauge theory was arguably a toy model, i.e., an oversimplification. He treated space-time as a discrete lattice purely to make the analysis more tractable. Borrowing insights and techniques from lattice models in statistical mechanics, Wilson could then argue for quark confinement, showing that the confining potential grows linearly with distance.
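The standard diagnostic is the Wilson loop: in the confining phase its expectation value obeys an area law, which for a rectangular R × T loop translates into a linearly rising potential between a static quark and antiquark,

\langle W(C) \rangle \sim e^{-\sigma A(C)} \quad \Rightarrow \quad \langle W \rangle \sim e^{-\sigma R T}, \qquad V(R) = \sigma R,

where \sigma is the string tension. A perimeter law instead corresponds to a deconfined phase.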

Earlier, in 1971, Wegner had proposed a Z2 gauge theory in the context of generalised Ising models in statistical mechanics to show how a phase transition was possible without a local order parameter, i.e., without symmetry breaking. Later, it was shown that the phase transition is similar to the confinement-deconfinement phase transition that occurs in QCD. [A nice review from 2014 by Wegner is here]. This work also provided a toy model to illustrate the possibility of a quantum spin liquid.
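To make the lattice construction concrete, here is a minimal sketch (illustrative code, not from any of the papers above) of a Metropolis Monte Carlo simulation of Wegner's Z2 gauge theory on a small three-dimensional lattice. Link variables sigma = ±1 live on the links, and the action is minus beta times the sum over plaquettes of the product of the four links around each plaquette.

```python
import numpy as np

# Metropolis simulation of Wegner's Z2 lattice gauge theory in 3 dimensions.
# Link variables sigma = +/-1 sit on the links of an L^3 cubic lattice;
# the action is S = -beta * sum over plaquettes of the product of the four links.

rng = np.random.default_rng(0)
L = 6          # linear lattice size (kept small for illustration)
beta = 0.7     # coupling; the 3D model has a transition near beta ~ 0.76
links = np.ones((3, L, L, L), dtype=int)   # cold start: all links sigma = +1

def staples(mu, x, y, z):
    """Sum, over the 4 plaquettes containing link (mu; x,y,z), of the product of the other 3 links."""
    total = 0
    site = [x, y, z]
    for nu in range(3):
        if nu == mu:
            continue
        xp = list(site); xp[mu] = (xp[mu] + 1) % L    # site shifted by +mu
        xn = list(site); xn[nu] = (xn[nu] + 1) % L    # site shifted by +nu
        xm = list(site); xm[nu] = (xm[nu] - 1) % L    # site shifted by -nu
        xpm = list(xp); xpm[nu] = (xpm[nu] - 1) % L   # site shifted by +mu -nu
        # plaquette on the +nu side of the link
        total += links[nu, xp[0], xp[1], xp[2]] * links[mu, xn[0], xn[1], xn[2]] * links[nu, x, y, z]
        # plaquette on the -nu side of the link
        total += links[nu, xpm[0], xpm[1], xpm[2]] * links[mu, xm[0], xm[1], xm[2]] * links[nu, xm[0], xm[1], xm[2]]
    return total

def sweep():
    """One Metropolis sweep: attempt 3*L^3 single-link flips."""
    for _ in range(3 * L ** 3):
        mu = int(rng.integers(3))
        x, y, z = (int(v) for v in rng.integers(L, size=3))
        s = links[mu, x, y, z]
        dS = 2 * beta * s * staples(mu, x, y, z)   # change in action if this link is flipped
        if dS <= 0 or rng.random() < np.exp(-dS):
            links[mu, x, y, z] = -s

def average_plaquette():
    total = 0
    for x in range(L):
        for y in range(L):
            for z in range(L):
                site = [x, y, z]
                for mu in range(3):
                    for nu in range(mu + 1, 3):
                        xp = list(site); xp[mu] = (xp[mu] + 1) % L
                        xn = list(site); xn[nu] = (xn[nu] + 1) % L
                        total += (links[mu, x, y, z] * links[nu, xp[0], xp[1], xp[2]]
                                  * links[mu, xn[0], xn[1], xn[2]] * links[nu, x, y, z])
    return total / (3 * L ** 3)

for _ in range(200):
    sweep()
print("average plaquette at beta =", beta, ":", average_plaquette())
```

In three dimensions this model has a confinement-deconfinement transition at beta ≈ 0.76 (related by duality to the critical point of the 3D Ising model), which can be located by monitoring the average plaquette and Wilson loops as beta is varied.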

Perhaps what was not anticipated was that lattice QCD could be used to calculate the properties of elementary particles accurately.

The discrete nature of lattice gauge theory means it is amenable to numerical simulation. It is not necessary to have the continuum limit of real spacetime because of universality. Due to increases in computational power over the past 50 years and innovations in algorithms, lattice QCD can be used to calculate properties of nucleons and mesons, such as mass and decay rates, with impressive accuracy. The figure below is taken from a 2008 article in Science. 

The masses of three mesons are typically used to fix the masses of the light and strange quarks and the length scale. The masses of nine other particles, including the nucleon, are calculated with an uncertainty of less than one per cent, in agreement with experimental values.

An indication that this is a strong coupling problem is that about 95 per cent of the mass of nucleons comes from the interactions. Only about 5 per cent is from the rest mass of the constituent quarks.

For more background on computational lattice QCD, there is a helpful 2004 Physics Today article, which drew a critical response from Herbert Neuberger. A recent (somewhat) pedagogical review by Sasa Prelovsek just appeared on the arXiv.


Tuesday, May 6, 2025

Characteristics of static disorder can emerge from electron-phonon interactions

Electronic systems with large amounts of static disorder can exhibit distinct properties, including localisation of electronic states and sub-gap band tails in the density of states and electronic absorption. 

Eric Heller and collaborators have recently published a nice series of papers that show how these properties can also appear, at least on sufficiently long time scales, in the absence of disorder, due to the electron-phonon interaction. On a technical level, a coherent state representation for phonons is used. This provides a natural way of taking a classical limit, similar to what is done in quantum optics for photons. Details are set out in the following paper 

Coherent charge carrier dynamics in the presence of thermal lattice vibrations, Donghwan Kim, Alhun Aydin, Alvar Daza, Kobra N. Avanaki, Joonas Keski-Rahkonen, and Eric J. Heller

This work brought back memories from long ago when I was a postdoc with John Wilkins. I was puzzled by several related things about quasi-one-dimensional electronic systems, such as polyacetylene, that undergo a Peierls instability. First, the zero-point motion of the lattice was comparable to the lattice dimerisation that produces an energy gap at the Fermi energy. Second, even in clean systems, there was a large subgap optical absorption. Third, there was no sign of the square-root singularity expected in the density of states, as predicted by theories that treated the lattice classically, i.e., calculated electronic properties in the Born-Oppenheimer approximation.

I found that, on the energy scales relevant to the sub-gap absorption, the phonons could be treated like static disorder, allowing me to make use of known exact results for one-dimensional Dirac equations with random disorder. This explained the puzzles.
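Schematically, and in notation I am using here for illustration (not necessarily that of the original papers), the resulting model is a one-dimensional Dirac equation with a gap parameter containing a random component:

H = -i \hbar v_F \sigma_3 \partial_x + \Delta(x) \sigma_1, \qquad \Delta(x) = \Delta_0 + \eta(x), \qquad \langle \eta(x)\eta(x') \rangle = A \, \delta(x - x'),

where v_F is the Fermi velocity, \Delta_0 is the mean Peierls gap, and the Gaussian white-noise term \eta(x) mimics the lattice fluctuations on the relevant energy scales.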

Effect of Lattice Zero-Point Motion on Electronic Properties of the Peierls-Fröhlich State

The disorder model can also be motivated by considering the Feynman diagrams for the electronic Green's function perturbation expansion in powers of the electron-phonon interaction. In the limit that the phonon frequency is small, all the diagrams become like those for a disordered system, where the strength of the static disorder is given by 

I then teamed up with another postdoc, Kihong Kim, who calculated the optical conductivity for this disorder model.

Universal subgap optical conductivity in quasi-one-dimensional Peierls systems

Two things were surprising about our results. First, the theory agreed well with experimental results for a range of materials, including the temperature dependence. Second, the frequency dependence had a universal form. Wilkins was clever and persistent at extracting such forms, probably from his experience working on the Kondo problem.

Friday, May 2, 2025

Could quantum mechanics be emergent?

One of the biggest challenges in the foundations of physics is the quantum measurement problem. It is associated with a few key (distinct but related) questions.

i. How does a measurement convert a coherent state undergoing unitary dynamics to a "classical" mixed state for which we can talk about probabilities of outcomes?

ii. Why is the outcome of an individual measurement always definite for the "pointer states" of the measuring apparatus?

iii. Can one derive the Born rule, which gives the probability of a particular outcome?
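For reference, the Born rule states that if the measured observable has eigenstates |a_n⟩, then for a system in state |ψ⟩ the probability of obtaining the outcome a_n is

P(a_n) = |\langle a_n | \psi \rangle|^2.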

Emergence of the classical world from the quantum world via decoherence

A quantum system always interacts to some extent with its environment. This interaction leads to decoherence, whereby quantum interference effects are washed out. Consequently, superposition states of the system decay into mixed states described by a diagonal density matrix. A major research goal of the past three decades has been understanding decoherence and the extent to which it provides answers to the quantum measurement problem. One achievement is that decoherence theory seems to give a mechanism and time scale for the “collapse of the wavefunction” within the framework of unitary dynamics. However, decoherence is not the same as a projection (which is what a single quantum measurement is): it produces statistical mixtures rather than definite outcomes. Decoherence only resolves the issue if one identifies the ensemble of measured states with the ensemble described by the decohered density matrix (the statistical interpretation of quantum mechanics). Thus, decoherence seems to answer the first question above but not the last two; it does not solve the quantum measurement problem, since individual measurements always produce definite outcomes. On the other hand, Zurek has pushed the decoherence picture further and given a “derivation” of the Born rule within its framework.
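The distinction between a decohered mixture and a definite outcome can be seen in the simplest example: a single qubit prepared in the superposition |ψ⟩ = α|0⟩ + β|1⟩ and subject to pure dephasing by its environment. The reduced density matrix evolves as

\rho(t) = \begin{pmatrix} |\alpha|^2 & \alpha\beta^* e^{-t/\tau_D} \\ \alpha^*\beta e^{-t/\tau_D} & |\beta|^2 \end{pmatrix},

so the off-diagonal coherences decay on the decoherence time scale \tau_D, leaving a statistical mixture with probabilities |\alpha|^2 and |\beta|^2. Nothing in this evolution singles out one definite outcome.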

One approach to solving the problem is to view quantum theory as only an approximate theory. In particular, it could be an effective theory for some underlying theory valid at time and length scales much smaller than those for which quantum theory has been precisely tested by experiments. 

Emergence of quantum field theory from a “classical” statistical theory

Einstein did not accept the statistical nature of quantum theory and considered that it should be derivable from a more “realistic” theory. In particular, he suggested “a complete physical description, the statistical quantum theory would … take an approximately analogous position to the statistical mechanics within the framework of classical mechanics.”

Einstein's challenge was taken up in a concrete and impressive fashion by Stephen Adler in a book, “Quantum Theory as an Emergent Phenomenon: The Statistical Mechanics of Matrix Models as the Precursor of Quantum Field Theory”, published in 2004.  A helpful summary is given in a review by Pearle.

The starting point is "classical" dynamical variables q_r and p_r, which are N×N matrices, where N is even. Half of these variables are bosonic, and the others are fermionic. They all obey Hamilton's equations of motion for an unspecified Hamiltonian H. Three quantities are conserved: H, the fermion number N, and (very importantly) the traceless anti-self-adjoint matrix

C = \sum_{r \in B} [q_r, p_r] - \sum_{r \in F} \{q_r, p_r\}

where the first term is the sum over all the bosonic variables of their commutators, and the second is the sum over the anti-commutators of the fermionic variables.

Quantum theory is obtained by tracing over all the classical variables with respect to a canonical ensemble with three (matrix) Lagrange multipliers [analogues of temperature and chemical potential in conventional statistical mechanics] corresponding to the conserved quantities H, N, and C. The expectation values of the diagonal elements of C are assumed to all have the same value, hbar!

An analogue of the equipartition theorem in classical statistical mechanics (which looks like a Ward identity in quantum field theory) leads to dynamical equations (trace dynamics) for effective fields. To make these equations look like regular quantum field theory, an assumption is made about a hierarchy of length, energy, and "temperature" [Lagrange multiplier] scales, which causes the trace dynamics to be dominated by C rather than by the trace Hamiltonian H. Adler suggests these scales may be Planck scales. Then, the usual quantum dynamical equations and the Dirac correspondence between Poisson brackets and commutators emerge. Most of the actual details of the trace Hamiltonian H do not matter; this is another case of universality, a common characteristic of emergent phenomena.

The “classical” field C fluctuates about its average value. These fluctuations can be identified with corrections to locality in quantum field theory and with the noise terms which appear in the modified Schrödinger equation of "physical collapse" models of quantum theory.

More recently, theorists including Gerard ’t Hooft and John Preskill have investigated how quantum mechanics can emerge from other deterministic systems. This is sometimes known as the Emergent Quantum Mechanics (EmQM) hypothesis.

Underlying deterministic systems considered include:

Hamilton-Randers systems defined in co-tangent spaces of large-dimensional configuration spaces,

neural networks,

cellular automata,

fast-moving classical variables, and

the boundary of a local classical model with a length that is exponentially large in the number of qubits in the quantum system.

In most of these versions of EmQM the length scale at which the underlying theory becomes relevant is conjectured to be of the order of the Planck length.

The fact that quantum theory can emerge from such a diverse range of underlying theories again illustrates universality.

The question of quantum physics emerging from an underlying classical theory is not just a question in the foundations of physics or in philosophy. Slagle points out that Emergent Quantum Mechanics may mean that the computational power of quantum computers is severely limited. He has proposed a specific experimental protocol to test for EmQM. A large number d of entangling gates (the circuit depth d) is applied to n qubits initialised in the computational basis, followed by the inverse gates. This is followed by measurement in the computational basis. The fidelity should decay exponentially with d, whereas for EmQM it will decay much faster above some critical d, for sufficiently large n.
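Here is a toy numerical sketch of the idea (a schematic illustration written for this post, not Slagle's protocol in detail): random two-qubit gates are applied and then undone, with a small amount of noise added to each gate to mimic imperfect hardware. Standard quantum mechanics then gives a roughly exponential decay of the return fidelity with depth, whereas EmQM is argued to produce a much sharper collapse beyond a critical depth (not modelled here).

```python
import numpy as np
from scipy.linalg import expm

# Toy illustration of a "forward then inverse" circuit benchmark: apply d random
# two-qubit gates to n qubits, then their inverses in reverse order, with a small
# random over-rotation on every applied gate to mimic hardware noise, and record
# how the return fidelity decays with the circuit depth d.

rng = np.random.default_rng(1)
n = 4                      # number of qubits (tiny, so the state vector fits in memory)
dim = 2 ** n
eps = 0.02                 # strength of the per-gate noise

def haar_unitary(d):
    """Approximately Haar-random d x d unitary via QR decomposition."""
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def embed_two_qubit(u4, i, j):
    """Embed a 4x4 unitary acting on qubits i and j into the full 2^n-dimensional space."""
    full = np.zeros((dim, dim), dtype=complex)
    mask = ~((1 << i) | (1 << j))
    for a in range(dim):
        for b in range(dim):
            if (a & mask) != (b & mask):
                continue                      # spectator qubits must be unchanged
            ai, aj = (a >> i) & 1, (a >> j) & 1
            bi, bj = (b >> i) & 1, (b >> j) & 1
            full[a, b] = u4[2 * ai + aj, 2 * bi + bj]
    return full

def noisy(u):
    """Multiply a gate by a small random unitary to mimic imperfect hardware."""
    h = rng.normal(size=u.shape) + 1j * rng.normal(size=u.shape)
    h = (h + h.conj().T) / 2                  # random Hermitian generator
    return u @ expm(-1j * eps * h)

def fidelity_after_depth(d):
    psi0 = np.zeros(dim, dtype=complex)
    psi0[0] = 1.0                             # all qubits start in |0>
    gates = []
    for _ in range(d):
        i, j = sorted(int(q) for q in rng.choice(n, size=2, replace=False))
        gates.append(embed_two_qubit(haar_unitary(4), i, j))
    psi = psi0.copy()
    for g in gates:                           # forward circuit (noisy)
        psi = noisy(g) @ psi
    for g in reversed(gates):                 # inverse circuit (also noisy)
        psi = noisy(g.conj().T) @ psi
    return abs(np.vdot(psi0, psi)) ** 2

for d in (1, 2, 4, 8, 16, 32):
    print(d, round(fidelity_after_depth(d), 4))
```

With noiseless gates the forward and inverse circuits cancel exactly and the fidelity stays at one; the per-gate noise produces the gradual decay with d that the protocol would compare against the sharper EmQM prediction.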

Independent of experimental evidence, EmQM provides an alternative interpretation of quantum theory that avoids thorny alternatives such as the many-worlds interpretation.
