
Wednesday, August 13, 2025

My review article on emergence

I just posted on the arXiv a long review article on emergence

Emergence: from physics to biology, sociology, and computer science

The abstract is below.

I welcome feedback. 

------

Many systems of interest to scientists involve a large number of interacting parts and the whole system can have properties that the individual parts do not. The system is qualitatively different to its parts. More is different. I take this novelty as the defining characteristic of an emergent property. Many other characteristics that have been associated with emergence are reviewed, including universality, order, complexity, unpredictability, irreducibility, diversity, self-organisation, discontinuities, and singularities. However, it has not been established whether these characteristics are necessary or sufficient for novelty. A wide range of examples are given to show how emergent phenomena are ubiquitous across most sub-fields of physics and many areas of biology and social sciences. Emergence is central to many of the biggest scientific and societal challenges today. Emergence can be understood in terms of scales (energy, time, length, complexity) and the associated stratification of reality. At each stratum (level) there is a distinct ontology (properties, phenomena, processes, entities, and effective interactions) and epistemology (theories, concepts, models, and methods). This stratification of reality leads to semi-autonomous scientific disciplines and sub-disciplines. A common challenge is understanding the relationship between emergent properties observed at the macroscopic scale (the whole system) and what is known about the microscopic scale: the components and their interactions. A key and profound insight is to identify a relevant emergent mesoscopic scale (i.e., a scale intermediate between the macro- and micro-scales) at which new entities emerge and interact with one another weakly. In different words, modular structures may emerge at the mesoscale. Key theoretical methods are the development and study of effective theories and toy models. Effective theories describe phenomena at a particular scale and sometimes can be derived from more microscopic descriptions. Toy models involve minimal degrees of freedom, interactions, and parameters. Toy models are amenable to analytical and computational analysis and may reveal the minimal requirements for an emergent property to occur. The Ising model is an emblematic toy model that elucidates not just critical phenomena but also key characteristics of emergence. Many examples are given from condensed matter physics to illustrate the characteristics of emergence. A wide range of areas of physics are discussed, including chaotic dynamical systems, fluid dynamics, nuclear physics, and quantum gravity. The ubiquity of emergence in other fields is illustrated by neural networks, protein folding, and social segregation. An emergent perspective matters for scientific strategy, as it shapes questions, choice of research methodologies, priorities, and allocation of resources. Finally, the elusive goal of the design and control of emergent properties is considered.

Friday, January 3, 2025

Self-organised criticality and emergence in economics

A nice preprint illustrates how emergence is central to some of the biggest questions in economics and finance. Emergent phenomena occur as many economic agents interact, resulting in a system with properties that the individual agents do not have.

The Self-Organized Criticality Paradigm in Economics & Finance

Jean-Philippe Bouchaud

The paper illustrates several key characteristics of emergence (novel properties, universality, unpredictability, ...) and the value of toy models in elucidating it. Furthermore, it illustrates the elusive nature of the "holy grail" of controlling emergent properties. 

The basic idea of self-organised criticality

"The seminal idea of Per Bak is to think of model parameters themselves as dynamical variables, in such a way that the system spontaneously evolves towards the critical point, or at least visits its neighbourhood frequently enough"

A key property of systems exhibiting criticality is power laws in the probability distribution of a property. This means that there are "fat tails" in the probability distribution and extreme events are much more likely than in a system with a Gaussian probability distribution.
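To make the contrast concrete, here is a minimal numerical illustration (my own, not from the paper) of how much more likely extreme events are under a fat-tailed distribution than under a Gaussian. A Student-t distribution is used here as a simple stand-in for a power-law tail.

```python
# Compare the probability of a 5-sigma event under a Gaussian and under a
# fat-tailed (Student-t, 3 degrees of freedom) distribution.
from scipy import stats

gaussian_tail = stats.norm.sf(5)    # P(X > 5) for a standard normal
fat_tail = stats.t.sf(5, df=3)      # P(X > 5) for a simple fat-tailed proxy

print(f"Gaussian:   P(X > 5) = {gaussian_tail:.1e}")   # ~3e-07
print(f"Fat-tailed: P(X > 5) = {fat_tail:.1e}")        # ~8e-03
```

The fat-tailed probability is over four orders of magnitude larger: "extreme" events are not extreme at all.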

Big questions

The two questions below are similar in that they concern the puzzle of how markets produce fluctuations that are much larger than expected when one tries to explain their behaviour in terms of the choices of individual agents.

A big question in economics

"A longstanding puzzle in business cycle analysis is that large fluctuations in aggregate economic activity sometimes arise from what appear to be relatively small impulses. For example, large swings in investment spending and output have been attributed to changes in monetary policy that had very modest effects on long-term real interest rates."

This is the "small shocks, large business cycle puzzle", a term coined by Ben Bernanke, Mark Gertler and Simon Gilchrist in a 1996 paper. It begins with the paragraph above. [Bernanke shared the 2022 Nobel Prize in Economics for his work on business cycles].

A big question in finance

The excess volatility puzzle in financial markets was identified by Robert Shiller: the volatility "is at least five times larger than it 'should' be in the absence of feedback". In the view of some, this puzzle highlights the failings of the efficient market hypothesis and the rationality of investors, two foundations of neoclassical economics. [Shiller shared the 2013 Nobel Prize in Economics for this work.]

"Asset prices frequently undergo large jumps for no particular reason, when financial economics asserts that only unexpected news can move prices. Volatility is an intermittent, scale-invariant process that resembles the velocity field in turbulent flows..." (page 2)

Emergent properties

Close to a critical point, the system is characterised by fat-tailed fluctuations and long memory correlations.

Avalanches. They allow very small perturbations to generate large disruptions.

Dragon Kings

Minsky moment

The holy grail: control of emergent properties

It would be nice to understand superconductivity well enough to design a room-temperature superconductor. But this pales in significance compared to the "holy grail" of being able to manage economic markets to prevent bubbles, crashes, and recessions.

Bouchaud argues that the quest for efficiency and the necessity of resilience may be mutually incompatible. This is because markets may tend towards self-organised criticality, which is characterised by fragility and unpredictability (black swans).

The paper has the following conclusion:

"the main policy consequence of fragility in socio-economic systems is that any welfare function that system operators, policy makers of regulators seek to optimize should contain a measure of the robustness of the solution to small perturbations, or to the uncertainty about parameters value.

Adding such a resilience penalty will for sure increase costs and degrade strict economic performance, but will keep the solution at a safe distance away from the cliff edge. As argued by Taleb [159], and also using a different language in Ref. [160], good policies should ideally lead to “anti-fragile” systems, i.e., systems that spontaneously improve when buffeted by large shocks."

Toy models

Toy models are key to understanding emergent phenomena. They ignore almost all details to the point that critics claim that the models are oversimplified. The modest goal of their proponents is simply to identify what ingredients may be essential for a phenomenon to occur. Bouchaud reviews several such models. All provide significant insight.

A trivial example (Section 2.1)

He considers an Ornstein-Uhlenbeck process for a system relaxing to equilibrium. As the damping rate tends to zero [κ⋆ → 0], the relaxation time and the variance of fluctuations diverge at the same rate. In other words, "in the limit of marginal stability κ⋆ →0, the system both amplifies exogenous shocks [i.e., those originating outside the system] and becomes auto-correlated over very long time scales."
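A rough simulation sketch (my own, not code from the paper) makes this concrete: for the Ornstein-Uhlenbeck process dx = −κx dt + σ dW, the stationary variance is σ²/(2κ) and the relaxation time is 1/κ, and both diverge as κ → 0.

```python
# Simulate an Ornstein-Uhlenbeck process for decreasing damping rate kappa and
# check that the stationary variance grows like sigma^2/(2*kappa).
import numpy as np

rng = np.random.default_rng(0)
sigma, dt, n_steps = 1.0, 0.01, 200_000

for kappa in [1.0, 0.1, 0.01]:
    x = 0.0
    xs = np.empty(n_steps)
    for i in range(n_steps):
        x += -kappa * x * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        xs[i] = x
    var = xs[n_steps // 2:].var()            # discard the transient
    print(f"kappa = {kappa:5.2f}: simulated variance {var:7.2f}, "
          f"theory {sigma**2 / (2 * kappa):7.2f}, relaxation time {1 / kappa:g}")
```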

The critical branching transition (Section 2.2)

The model describes diverse systems: "sand pile avalanches, brain activity, epidemic propagation, default/bankruptcy waves, word of mouth,..."

The model involves the parameter R0 which became famous during the COVID-19 pandemic. R0 is the average number of uninfected people who become infected due to contact with an infected individual. For sand piles R0 is the average number of grains that start rolling in response to a single rolling grain.

When R0 = 1, the distribution of avalanche sizes is a scale-free, power-law distribution, P(S) ~ S^(-3/2), with infinite mean.

"most avalanches are of small size, although some can be very large. In other words, the system looks stable, but occasionally goes haywire with no apparent cause."

A generalised Lotka-Volterra model (Sections 3.3 and 4.2) 

This provides an analogy between economic production networks and ecosystems. Last year I reviewed recent work on this model, concerning how to understand the interplay of evolution and ecology.

A key result is how in the large N limit (i.e., a large number of interacting species/agents) qualitatively different behaviour occurs. Ecosystems and economies can collapse. 

 "any small change in the fitness of one species can have dramatic consequences on the whole system – in the present case, mass extinctions...

"most complex optimisation systems are, in a sense, fragile, as the solution to the optimisation problem is highly sensitive to the precise value of the parameters of the specific instance one wants to solve, like the Aij entries in the Lotka-Volterra model. Small changes of these parameters can completely upend the structure of the optimal state, and trigger large-scale rearrangements,..." 

Balancing stick problem (Section 3.4)

 The better one is able to stabilize the system, the more difficult it becomes to predict its future evolution! 

Propagation of production delays along the supply chain (Section 4.1)


An agent-based firm network model (Section 4.3)

This has the phase diagram shown below. The horizontal axis is the strength of forces counteracting supply/demand and profit imbalances. The vertical axis is the perishability of goods.

There are four distinct phases.

Leftmost region (a, violet): the economy collapses; 

Middle region (b, blue): the economy reaches equilibrium relatively quickly;

Right region (c, yellow): the economy is in perpetual disequilibrium, with purely endogenous fluctuations. 

The green vertical sliver (d) corresponds to a deflationary equilibrium.

Phase diagrams illustrate how quantitative changes can produce qualitative differences.

Universality

The toy models considered describe emergent phenomena in diverse systems, including in fields other than economics and finance. 

Here are a few other recent papers by Bouchaud that are relevant to this discussion.

Navigating through Economic Complexity: Phase Diagrams & Parameter Sloppiness

From statistical physics to social sciences: the pitfalls of multi-disciplinarity

This includes the opening address from a workshop on "More is Different" at the College de France in 2022.

Thursday, September 26, 2024

The multi-faceted character of emergence (part 2)

In the previous post, I considered five different characteristics that are often associated with emergence and classified them as being associated with ontology (what is real and observable) rather than epistemology (what we believe to be true). 

Below I consider five more characteristics: self-organisation, unpredictability, irreducibility, contextuality and downward causation, and intra-stratum closure.

6. Self-organisation

Self-organisation is not a property of the system but a mechanism that a theorist says causes an emergent property to come into being. Self-organisation is also referred to as spontaneous order. 

In the social sciences self-organisation is sometimes referred to as an endogenous cause, in contrast to an exogenous cause. There is no external force or agent causing the order, in contrast to order that is imposed externally. For example, suppose that in a city there is no government policy about the price of a loaf of sliced wholemeal bread or about how many loaves bakers should produce. It is observed that prices are almost always in the range of $4 to $5 per loaf, and that rarely are there bread shortages. This outcome is a result of the self-organisation of the free market, and economists would say the price range and its stability have an endogenous cause. In contrast, if the government legislated the price range and the production levels, that would be an exogenous cause. Friedrich Hayek emphasised the role of spontaneous order in economics. In biology, Stuart Kauffman equates emergence with spontaneous order and self-organisation.

In physics, the periodicity of the arrangement of atoms in a crystal is a result of self-organisation and has an endogenous cause. In contrast, the periodicity of atoms in an optical lattice is determined by the laser physicist who creates the lattice and so has an exogenous cause.

Self-organisation shows how local interactions can produce global properties. In different words, short-range interactions can lead to long-range order. After decades of debate and study, the Ising model showed that this was possible. Other examples of self-organisation include the flocking of birds and teamwork in ant colonies. There is no director or leader, but the system acts “as if” there is.

7. Unpredictability

Ernst Mayr (This is Biology, p.19) defines emergence as “in a structured system, new properties emerge at higher levels of integration that could not have been predicted from a knowledge of the lower-level components.” Philip Ball also defines emergence in terms of unpredictability (Quanta, 2024).

More broadly, in discussions of emergence, “prediction” is used in three different senses: logical prediction, historical prediction, and dynamical prediction.

Logical prediction (deduction) concerns whether one can predict (calculate) the emergent (novel) property of the whole system solely from a knowledge of all the properties of the parts of the system and their interactions. Logical predictability is one of the most contested characteristics of emergence. Sometimes “predict” is replaced with “difficult to predict”, “extremely difficult to predict”, “impossible to predict”, “almost impossible to predict”, or “possible in principle, but impossible in practice, to predict.” 

As an aside, I note that philosophers distinguish between epistemological emergence and ontological emergence. They are associated with prediction that is "possible in principle, but difficult in practice" and "impossible in principle" respectively.

After an emergent property has been discovered experimentally sometimes it can be understood in terms of the properties of the system parts. In a sense “pre-diction” then becomes “post-diction.” An example is the BCS theory of superconductivity, which provided a posteriori, rather than a priori, understanding. In different words, development of the theory was guided by a knowledge of the phenomena that had already been observed and characterised experimentally. Thus, a keyword in the statement above about logical prediction is “solely”. 

Historical prediction. Most new states of matter discovered by experimentalists were not predicted even though theorists knew the laws that the microscopic components of the system obeyed. Examples include superconductivity (elemental metals, cuprates, iron pnictides, organic charge transfer salts, …), superfluidity in liquid 4He, antiferromagnetism, quasicrystals, and the integer and fractional quantum Hall states.

There are a few exceptions where theorists did predict new states of matter. These include Bose-Einstein condensates (BECs) in dilute atomic gases, topological insulators, the Anderson insulator in disordered metals, the Haldane phase in integer-spin quantum antiferromagnetic chains, and the hexatic phase in two dimensions. It should be noted that the prediction of BECs and topological insulators was significantly helped by the fact that theorists could start from Hamiltonians of non-interacting particles. Furthermore, all of these predictions involved working with effective Hamiltonians. None started with microscopic Hamiltonians for specific materials.

Dynamical unpredictability concerns chaotic dynamical systems, where it relates to sensitivity to initial conditions. I do not see this as an example of emergence, as it can occur in systems with only a few degrees of freedom. However, some authors do associate dynamical unpredictability with complexity and emergence.
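A standard illustration of this sensitivity (my example, not one from the texts cited above) is the logistic map at r = 4: two trajectories that start 10^(−10) apart diverge to an order-one separation within a few dozen iterations.

```python
# Sensitivity to initial conditions in the chaotic logistic map x -> r*x*(1 - x).
r = 4.0
x, y = 0.3, 0.3 + 1e-10          # two almost identical initial conditions
for step in range(1, 61):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: separation = {abs(x - y):.3e}")
```

The separation roughly doubles each step (the Lyapunov exponent is ln 2), so after about 35 steps the initial difference has been amplified to the size of the attractor itself.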

8. Irreducibility and singularities

An emergent property cannot be reduced to properties of the parts, because if emergence is defined in terms of novelty, the parts do not have the property. 

Emergence is also associated with the problem of theory reduction. Formally, this is the process where a more general theory reduces in a particular mathematical limit to a less general theory. For example, quantum mechanics reduces to classical mechanics in the limit where Planck’s constant goes to zero. Einstein’s theory of special relativity reduces to Newtonian mechanics in the limit where the speeds of massive objects become much less than the speed of light. Theory reduction is a subtle philosophical problem that is arguably poorly understood both by scientists [who oversimplify or trivialise it] and philosophers [who arguably overstate the problems it presents for science producing reliable knowledge]. Subtleties arise because the two different theories usually involve language and concepts that are "incommensurate" with one another. 

Irreducibility is also related to the discontinuities and singularities associated with emergent phenomena. As emphasised independently by Hans Primas and Michael Berry, singularities occur because the mathematics of theory reduction involves singular asymptotic expansions. Primas illustrates this by considering a light wave incident on an object and producing a shadow. The shadow is an emergent property, well described by geometrical optics, but not by the more fundamental theory of Maxwell’s electromagnetism. The two theories are related in the asymptotic limit in which the wavelength of light in Maxwell’s theory tends to zero. This example illustrates that theory reduction is compatible with the emergence of novelty. Primas also considers how the Born-Oppenheimer approximation, which is central to solid state theory and quantum chemistry, is associated with a singular asymptotic expansion (in the ratio of the mass of an electron to the mass of an atomic nucleus in the system).

Berry considers several other examples of theory reduction, including going from general to special relativity, from statistical mechanics to thermodynamics, and from viscous (Navier-Stokes) fluid dynamics to inviscid (Euler) fluid dynamics. He has discussed in detail how the caustics that occur in ray optics are an emergent phenomenon associated with singular asymptotic expansions in the wave theory.

The philosopher of science Jeremy Butterfield showed rigorously that theory reduction occurred for four specific systems that exhibited emergence, defined by him as a novel and robust property. Thus, novelty is not sufficient for irreducibility.

9. Contextuality and downward causation

Any real system has a context. For example, it has a boundary and an environment, both in time and space. In many cases the properties of the system are completely determined by the parts of the system and their interactions. Previous history and boundaries do not matter. However, in some cases the context may have a significant influence on the state of the system. An example is Rayleigh-Bénard convection cells and turbulent flow, whose existence and nature are determined by the interaction of the fluid with the container boundaries. A biological example concerns what factors determine the structure, properties, and function of a particular protein (a linear chain of amino acids). It is now known that the DNA sequence that encodes the amino acid sequence is not the only factor, in contradiction to some versions of the Central Dogma of molecular biology. Other factors may be the type of cell that contains the protein and the network of other proteins in which the particular protein is embedded. Context sometimes matters.

Supervenience is the idea that once the micro level is fixed, macro levels are fixed too. The examples above might be interpreted as evidence against supervenience. Supervenience is used to argue against “the possibility for mental causation above and beyond physical causation.” 

Downward causation is sometimes equated with emergence, particularly in debates about the nature of consciousness. In the context of biology, Denis Noble defines downward causation as when higher level processes can cause changes in lower level properties and processes. He gives examples where physiological effects can switch on and off individual genes or signalling processes in cells, including maternal effects and epigenetics.

10. Intra-stratum closure: informational, causal, and computational

The ideas described below were recently developed by Rosas et al. from a computer science perspective. They defined emergence in terms of universality and discussed its relationship to informational closure, causal closure, and computational closure. Each of these is given a precise technical definition in their paper. Here I give the sense of their definitions. In considering a general system they do not pre-define the micro- and macro-levels of a system but consider how they might be defined so that universality holds, i.e., so that properties at the macro-level are independent of the details of the micro-level (i.e., are universal).

Informational closure means that to predict the dynamics of the system at the macroscale an observer does not need any additional information about the details of the system at the microscale. Equilibrium thermodynamics and fluid dynamics are examples. 

Causal closure means that the system can be controlled at the macroscale without any knowledge of lower-level information. For example, changing the software code that is running on a computer allows one to reliably control the microstate of the hardware of the computer regardless of what is happening with the trajectories of electrons in the computer.

Computational closure is a more technical concept, being defined in terms of “a conceptual device called the ε-(epsilon) machine. This device can exist in some finite set of states and can predict its own future state on the basis of its current one... for an emergent system that is computationally closed, the machines at each level can be constructed by coarse-graining the components on just the level below: They are “strongly lumpable.”

Rosas et al. show that informational closure and causal closure are equivalent and that they are more restrictive than computational closure. It is not clear to me how these closures relate to novelty as a definition of emergence.

In summary, emergence means different things to different people. I have listed ten different characteristics that have been associated with emergent properties. They are not all equivalent and so when discussing emergence it is important to be clear about which characteristic one is using to define emergence.

Tuesday, September 24, 2024

The multi-faceted character of emergence (part 1)

There is more to emergence than novel properties, i.e., where a whole system has a property that the individual components of the system do not have. Here I focus on emergent properties, but in most cases “property” might be replaced with state, phenomenon, or entity. I now discuss ten characteristics often associated with emergence, beyond novelty. Some people include one or more of these characteristics in their definitions of emergence. However, I do not include them in my definition because, as I explain, some of the characteristics are contentious. Some may not be necessary or sufficient for novel system properties.

The first five characteristics discussed below might be classified as objective (i.e., observable properties of the system) and the second five as subjective (i.e., associated with how an investigator thinks about the system). In different words, the first five are mostly concerned with ontology (what is real) and the second five with epistemology (what we know). The first five characteristics concern discontinuities, universality, diversity, mesoscales, and modification of parts. The second five concern self-organisation, unpredictability, irreducibility, downward causation, and closure. 

1. Discontinuities 

Quantitative changes in the system can become qualitative changes in the system. For example, in condensed matter physics spontaneous symmetry breaking only occurs in the thermodynamic limit (i.e., when the number of particles of the system becomes infinite). More is different. Thus, as a quantitative change in the system size occurs, the order parameter becomes non-zero. In a system that undergoes a phase transition at a non-zero temperature, a small change in temperature can lead to the appearance of order and to a new state of matter. For a first-order phase transition, there is a discontinuity in properties such as the entropy and density. These discontinuities define a phase boundary in the pressure-temperature diagram. For continuous phase transitions the order parameter is a continuous function of temperature, becoming non-zero at the critical temperature. However, the derivative with respect to temperature may be discontinuous, and/or thermodynamic properties such as the specific heat and the susceptibility associated with the order parameter may diverge as the critical temperature is approached.

Two different states of a system are said to be adiabatically connected if one can smoothly deform one state into the other and all the properties of the system also change smoothly. The case of the liquid-gas transition illustrates subtle issues about defining emergence. A discontinuity does not imply a qualitative difference (novelty). On the one hand, there is a discontinuity in the density and entropy of the system as the liquid-gas phase boundary is crossed in the pressure-temperature diagram. On the other hand, there is no qualitative difference between a gas and a liquid. There is only a quantitative difference: the density of the gas is less than that of the liquid, albeit sometimes by orders of magnitude. The liquid and gas states can be adiabatically connected. There is a path in the pressure-temperature phase diagram that can be followed to connect the liquid and gas states without any discontinuities in properties.

The ferromagnetic state also raises questions, as illustrated by a debate between Rudolf Peierls and Phil Anderson about whether ferromagnetism exhibits spontaneous symmetry breaking. Anderson argued that it did not as, in contrast to the antiferromagnetic state, a non-zero magnetisation (order parameter) occurs for finite systems and the magnetic order does not change the excitation spectrum, i.e., produce a Goldstone boson. On the other hand, singularities in properties at the Curie temperature (critical temperature for ferromagnetism) only exist in the thermodynamic limit. Also, a small change in the temperature, from just above the Curie temperature to below, can produce a qualitative change, a non-zero magnetisation.

2. Universality

Properties often referred to as emergent are universal in the sense that they are independent of many of the details of the parts of the system. There may be many different systems that can have a particular emergent property. For example, superconductivity is present in metals with a diverse range of crystal structures and chemical compositions.

Robustness is related to universality. If small changes are made to the composition of the system (for example, replacing some of the atoms in the system with atoms of a different chemical element), the novel property of the system is still present. In elemental superconductors, introducing non-magnetic impurity atoms has no effect on the superconductivity.

Universality is both a blessing and a curse for theory. Universality can make it easier to develop successful theories because it means that many details need not be included in a theory in order for it to successfully describe an emergent phenomenon. This is why effective theories and toy models can work even better than might be expected. Universality can make theories more powerful because they can describe a wider range of systems. For example, properties of elemental superconductors can be described by BCS theory and by Ginzburg-Landau theory, even though the materials are chemically and structurally diverse. The curse of universality is that it illustrates the problems of “under-determination of theory”, “over-fitting of data”, and “sloppy theories” [Sethna et al.]. A theory can agree with experiment even when the parameters used in the theory are quite different from the actual ones. For example, the observed phase diagram of water can be reproduced, sometimes in impressive quantitative detail, by combining classical statistical mechanics with empirical force fields that treat water molecules as being composed purely of point charges.

Suppose we start with a specific microscopic theory and calculate the macroscopic properties of the system, and they agree with experiment. It would then be tempting to think that we have the correct microscopic theory. However, universality suggests this may not be the case.

For example, consider the case of a gas of weakly interacting atoms or molecules. We can treat the gas particles as classical or quantum. Statistical mechanics gives exactly the same equation of state and specific heat capacity for both microscopic descriptions. The only difference may be the Gibbs paradox [the calculated entropy is not an extensive quantity], which is sensitive to whether or not the particles are treated as identical. Unlike the zeroth, first, and second laws of thermodynamics, the third law does require that the microscopic theory be quantum. Laughlin discusses these issues in terms of “protectorates” that hide “ultimate causes”.

In some physical systems, universality can be defined in a rigorous technical sense, making use of the concepts and techniques of the renormalisation group and scaling. These techniques provide a method to perform coarse graining, to derive effective theories and effective interactions, and to define universality classes of systems. There are also questions of how universality is related to the robustness of strata, and the independence of effective theories from the coarse-graining procedure.

3. Diversity

Even when a system is composed of a small number of different components and interactions, the large number of possible stable states with qualitatively different properties that the system can have is amazing. Every snowflake is different. Water is found in 18 distinct solid states. All proteins are composed of linear chains of 20 different amino acids. Yet in the human body there are more than 100,000 different proteins, and all perform specific biochemical functions. We encounter an incredible diversity of human personalities, cultures, and languages. A stunning case of diversity is life on earth. Billions of different plant and animal species are all an expression of different sequences of the four DNA bases: A, G, T, and C.

This diversity is related to the idea that "simple models can describe complex behaviour". One example is Conway’s Game of Life. Another example is how simple Ising models with a few competing interactions can describe a devil's staircase of ground states or the multitude of different atomic orderings found in binary alloys.

Goldenfeld and Kadanoff defined complexity [emergence] as “structure with variations”. Holland (VSI) discusses “perpetual novelty”, giving the example of the game of chess, where a typical game may involve of the order of 10^50 move sequences. “Motifs” are recurring patterns (sequences of moves) in games.

Condensed matter physics illustrates diversity with the many different states of matter that have been discovered. The underlying microscopics is “just” electrons and atomic nuclei interacting according to Coulomb’s law.

The significance of this diversity might be downplayed by saying that it is just a result of combinatorics. But such a claim overlooks the issue of the stability of the diverse states that are observed. In a system composed of many components, each of which can take on a few states, the number of possible states of the whole system grows exponentially with the number of components. For example, for a chain of ten amino acids there are 20^10 ≈ 10^13 different possible linear sequences. But this does not mean that all these sequences will produce a functional protein, i.e., a molecule that will fold rapidly (on the timescale of milliseconds) into a stable tertiary structure and perform a useful biochemical function such as catalysis of a specific chemical reaction or signal transduction.

4. Simple entities at the mesoscale 

A key idea in condensed matter physics is that of quasi-particles. A system of strongly interacting particles may have excitations, seen in experiments such as inelastic neutron scattering and Angle-Resolved Photoemission Spectroscopy (ARPES), that can be described as weakly interacting quasi-particles. These entities are composite particles and have properties that are quantitatively different, and sometimes qualitatively different, from those of the microscopic particles. Sometimes this means that the scale (size) associated with the quasi-particles is intermediate between the micro- and the macro-scales, i.e., it is a mesoscale. The existence of quasi-particles leads naturally to the technique of constructing an effective Hamiltonian [effective theory] for the system where effective interactions describe the interactions between the quasi-particles.

The economist Herbert Simon argued that a characteristic of a complex system is that the system can be understood in terms of nearly decomposable units. Rosas et al. argue that emergence is associated with there being a scale at which the system is “strongly lumpable”. Denis Noble has highlighted how biological systems are modular, i.e., composed of simple interchangeable components.

5. Modification of parts and their relationships

Emergent properties are often associated with the state of the system exhibiting patterns, order, or structure, terms that may be used interchangeably. This reflects that there is a particular relationship (correlation) between the parts which is different to the relationships in a state without the emergent property. This relationship may also be reflected in a generalised rigidity. For example, in a solid applying a force on one surface results in all the atoms in the solid experiencing a force and moving together. The rigidity of the solid defines a particular relationship between the parts of the system.

Properties of the individual parts may also be different. For example, in a crystal single-atom properties such as electronic energy levels change quantitatively compared to their values for isolated atoms. Properties of finite subsystems are also modified, reflecting a change in interactions between the parts. For example, in a molecular crystal the frequencies associated with intramolecular atomic vibrations are different to their values for isolated molecules. However, emergence is a sufficient but not a necessary condition for these modifications. In gas and liquid states, novelty is not present but there are still such changes in the properties of the individual parts.

As stated at the beginning of this section, the five characteristics above might be associated with ontology (what is real) and with objective properties of the system, ones that an investigator observes and that depend less on what an observer thinks about the system. The next five characteristics might be considered more subjective, being concerned with epistemology (how we determine what is true). In making this dichotomy I do not want to gloss over the fuzziness of the distinction or two thousand years of philosophical debates about the relationship between ontology and epistemology, or between reality and theory.

In the next post, I will discuss the remaining five characteristics: self-organisation, unpredictability, irreducibility, contextuality and downward causation, and intra-stratum closure.

Thanks for reading this far!

Friday, June 21, 2024

10 key ideas about emergence

Consider a system comprised of many interacting components. 

1. Many different definitions of emergence have been given. I take the defining characteristic of an emergent property of the system to be novelty, i.e., the individual components of the system do not have this property.

2. Many other characteristics have been associated with emergence, such as universality, unpredictability, irreducibility, diversity, self-organisation, discontinuities, and singularities. However, it has not been established whether these characteristics are necessary or sufficient for novelty.

3. Emergent properties are ubiquitous across scientific disciplines from physics to biology to sociology to computer science. Emergence is central to many of the biggest scientific challenges today and some of the greatest societal problems.

4. Reality is stratified. A key concept is that of strata or hierarchies. At each level or stratum,  there is a distinct ontology (properties, phenomena, processes, entities, and effective interactions) and epistemology (theories, concepts, models, and methods). This stratification of reality leads to semi-autonomous scientific disciplines and sub-disciplines.

5. A common challenge is understanding the relationship between emergent properties observed at the macroscopic scale (whether in societies or in solids) and what is known about the microscopic scale: the components (whether individual humans or atoms) and their interactions. Often a key (but profound) insight is identifying an emergent mesoscopic scale (i.e., a scale intermediate between the macro- and micro- scales) at which new entities emerge and interact with one another weakly.

6. A key theoretical method is the development and study of effective theories and toy models. Effective theories can describe phenomena at the mesoscopic scale and be used to bridge the microscopic and macroscopic scales. Toy models involve just a few degrees of freedom, interactions, and parameters. Toy models are amenable to analytical and computational analysis and may reveal the minimal requirements for an emergent property to occur. The Ising model is a toy model that elucidates critical phenomena and key characteristics of emergence.

7. Condensed matter physics elucidates many of the key features and challenges of emergence. Unlike brains and economies, condensed states of matter are simple enough to be amenable to detailed and definitive analysis but complex enough to exhibit rich and diverse emergent phenomena.

8. The ideas above about emergence matter for scientific strategy in terms of choosing methodologies, setting priorities, and allocating resources.

9. An emergent perspective that does not privilege the parts or the whole can address contentious issues and fashions in the humanities and social sciences, particularly around structuralism.

10. Emergence is also at the heart of issues in philosophy including the nature of consciousness, truth, reality, and the sciences.

Monday, May 13, 2024

The whole is qualitatively different from the parts: beer, birds, and brains

Pint of Science is an annual event in cities all around Australia. Local scientists give short talks about their research to general audiences. I am speaking tonight, along with my colleague Ben Powell. 

I found the tips for speakers very helpful. They led me to try to make the talk more of a personal story, reduce the amount of text on slides, and aim for engagement rather than focusing on scientific details or on the technical details of my own research.

Here is the current version of my slides.

The introduction is based on this video and poem about emergence in economics.

This provides an example of how "free" economic markets can sometimes work well. But I will also point out that they can fail spectacularly, another emergent phenomenon!

Tuesday, March 5, 2024

An illusion of purpose in emergent phenomena?

 A characteristic of emergent phenomena in a system of many interacting parts is that they exhibit collective behaviour where it looks like the many parts are "dancing to the same tune". But who is playing the music, who chose it, and who conducts the orchestra?

Consider the following examples.

1. A large group of starlings move together in what appears to be a coherent fashion. Yet, no lead starling is telling all the starlings how and where to move, according to some clever flight plan to avoid a predator. Studies of flocking [murmuration] have shown that each of the starlings just moves according to the motion of a few of their nearest neighbours. Nevertheless, the flock does move in a coherent fashion "as if" there is a lead starling or air traffic controller making sure all the planes stick to their flight plan.

2. You can buy a freshly baked loaf of bread at a local bakery every day. Why? Thousands of economic agents, from farmers to truck drivers to accountants to the baker, make choices and act based on limited local information. Their interactions are largely determined by the mechanism of prices and commercial contracts. In a market economy, there is no director of national bread supplies who co-ordinates the actions of all of these agents. Nevertheless, you can be confident that each morning you will be able to buy the loaf you want. The whole system acts in a co-ordinated manner "as if" it has a purpose: to reliably supply affordable high-quality bread.

3. A slime mould spreads over a surface containing food supplies with spatial locations and sizes similar to those of the cities surrounding Tokyo. After a few hours, the spread of the mould has reorganised so that it is focussed on paths that are similar to the routes of the Tokyo rail network. Moulds have no brain or computer chip, but they can solve optimisation problems, such as finding the shortest path through a complex maze. In nature, this problem-solving ability has the advantage that it allows them to efficiently locate sources of food and nutrients. Slime moulds act "as if" they have a brain.

The biologist Michael Levin discusses the issue of intelligence in very small and primitive biological systems in a recent article, Collective Intelligence of Morphogenesis as a Teleonomic Process.

[I first became aware of Levin's work through a podcast episode brought to my attention by Gerard Milburn. The relevant discussion starts around 36 minutes].

The emphasis on "as if" I have taken from Thomas Schelling in the opening chapter of his beautiful book, Micromotives and Macrobehaviour.

He also mentions the example of Fermat's principle in optics: the path light takes as it travels between two spatially separated points is the path for which the travel time is an extremum [usually a minimum]. The light travels "as if" it has the purpose of finding this extremum. 

[Aside: according to Wikipedia, 

"Fermat's principle was initially controversial because it seemed to ascribe knowledge and intent to nature. Not until the 19th century was it understood that nature's ability to test alternative paths is merely a fundamental property of waves."

Similar issues of knowledge/intent/purpose arise when considering the motion of a classical particle moving between two spatial points. It takes the path for which the value of the action [time integral of the Lagrangian along a path] has an extremal value relative to all possible paths. I suspect that the path integral formulation of quantum theory is required to solve the "as if" problem. Any alternative suggestions?

Tuesday, November 14, 2023

An emergentist perspective on public policy issues that divide

How is the whole related to the parts?

Which type of economy will produce the best outcomes: laissez-faire or regulated?

Can a government end an economic recession by "stimulus" spending?  

What is the relative importance of individual agency and social structures in causing social problems such as poverty and racism?

These questions are all related to the first one. Let's look at it from an emergentist perspective, with reference to physics. 

Consider the Ising model in two or more dimensions. The presence of nearest-neighbour interactions between spins leads to emergent properties: long-range ordering of the spins, spontaneous symmetry breaking below the critical temperature, and singularities in the temperature dependence of thermodynamic properties such as the specific heat and magnetic susceptibility. Individual uncoupled spins have none of these properties. Even a finite number of spins does not. (Although a large number of spins does exhibit suggestive properties, such as an enhancement of the magnetic susceptibility near the critical temperature.) Thus, the whole system has properties that are qualitatively different from the parts.

On the other hand, the properties of the parts, such as how strongly the spins couple to an external field and interact with their neighbours, influence the properties of the whole. Some details of the parts matter. Other details don't matter. Adding interactions between spins beyond nearest neighbours does not change any of the qualitative properties, provided those longer-range interactions are not too large. On the other hand, changing from a two-dimensional rectangular lattice to a linear chain removes the ordered state. Changing to a triangular lattice with an antiferromagnetic nearest-neighbour interaction removes the ordering, and there are multiple ground states. Thus, some microscopic details do matter.

For illustrative purposes, below I show a sketch of the temperature dependence of the magnetic susceptibility of the Ising model for three cases: non-interacting spins (J=0), two dimensions (d=2), and one dimension (d=1). This shows how interactions can significantly enhance/diminish the susceptibility depending on the parameter regime.
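Since the figure is only described here, a short script (my own sketch, using standard exact results in units where J = k_B = 1) reproduces the comparison: free spins obey the Curie law χ = 1/T, the one-dimensional chain has the exact result χ = exp(2/T)/T, and in two dimensions χ diverges at the Onsager critical temperature T_c = 2/ln(1 + √2) ≈ 2.27.

```python
# Sketch of the zero-field susceptibility of the Ising model (units J = k_B = 1).
import numpy as np
import matplotlib.pyplot as plt

T = np.linspace(0.5, 5, 200)
plt.plot(T, 1 / T, label="J = 0 (Curie law)")
plt.plot(T, np.exp(2 / T) / T, label="d = 1 chain (exact)")
plt.axvline(2 / np.log(1 + np.sqrt(2)), linestyle="--",
            label="d = 2: chi diverges at T_c")
plt.xlabel("temperature T")
plt.ylabel("susceptibility chi")
plt.legend()
plt.show()
```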

The main point of this example is to show that to understand a large complex system we have to keep both the parts and the whole in mind. In other words, we need both microscopic and macroscopic pictures. There are two horizons, the parts and the whole, the near and the far. There is a dialectic tension between these two horizons. It is not either/or but both/and.

I now illustrate how this type of tension matters in economics and sociology, and the implications for public policy. If you are (understandably) concerned about whether Ising models have anything to do with sociology and economics, see my earlier posts about these issues. The first post introduced discrete-choice models that are essentially Ising models. A second post discussed how these models show that equilibrium may never be reached, leading to the insight that local initiatives can "nucleate" desired outcomes. A third post considered how heterogeneity can lead to qualitative changes, including hysteresis, so that the effectiveness of "nudges" can vary significantly.

A fundamental (and much debated) question in sociology is the relationship between individual agency and social structures. Which determines which? Do individuals make choices that then lead to particular social structures? Or do social structures constrain what choices individuals make? In sociology, this is referred to as the debate between voluntarism and determinism. A middle way, which does not privilege agency or structure, is structuration, proposed by Anthony Giddens.

Social theorists who give primacy to social structures will naturally advocate solving social problems with large government schemes and policies that seek to change the structures. On the other side, those who give primacy to individual agency are sceptical of such approaches and consider that progress can only occur through individuals, and small units such as families and communities, making better choices. The structure/agency divide naturally maps onto political divisions of left versus right, liberal versus conservative, and the extremes of communist and libertarian. An emergentist perspective is balanced, affirming the importance of both structure and agency.

Key concepts in economics are equilibrium, division of labour, price, and demand. These are the outcomes of many interacting agents (individuals, companies, institutions, and government). Economies tend to self-organise. This is the "invisible hand" of Adam Smith. Thus, emergence is one of the most important concepts in economics. 

A big question is how the equilibrium state and the values of the associated state variables (e.g., prices, demand, division of labour, and wealth distribution) emerge from the interactions of the agents. In other words, what is the relationship between microeconomics and macroeconomics?

What are the implications for public policy? What will lead to the best outcomes (usually assumed to be economic growth and prosperity for "all")? Central planning (or at least some government regulation) is pitted against laissez-faire. For reasons similar to those in the Ising and sociology cases, an emergentist perspective is that the whole and the parts are inseparable. This is why there is no consensus on the answers to specific questions such as: can government stimulus spending move an economy out of a recession? Keynes claimed it could, but the debate rages on.

An emergentist perspective tempers expectations about the impact of agency, both individuals and government. It is hard to predict how a complex system with emergent properties will respond to perturbations such as changes in government policy. This is the "law" of unintended consequences.

“The curious task of economics is to demonstrate to men how little they really know about what they imagine they can design.”

Friedrich A. Hayek, The Fatal Conceit: The Errors of Socialism

I think this cuts both ways. This is also reason to be skeptical about those (such as Hayek's disciples) who think they can "design" a better society by just letting the market run free.

Thursday, February 16, 2023

The challenge of useful data in the social sciences

A major challenge for the social sciences is obtaining data that is reliable, gives significant insight, and could be used to test theories. Each week I read The Economist. Many of their articles feature graphs of social or economic data. To me, some of the graphs are just random noise or show marginal trends that I am not convinced are that significant. But other graphs are quite dramatic or insightful. Previously, I posted a famous one about smoking.

This week I saw the graph below in The New York Times, as part of a long article, Childbirth Is Deadlier for Black Families Even When They’re Rich, Expansive Study Finds, based on this preprint.


The data clearly shows the distressing fact that "The richest Black women have infant mortality rates at about the same level as the poorest white women."

Thursday, December 1, 2022

How can funders promote significant breakthroughs?

 Is real scientific progress slowing? Are funders of research, whether governments, corporations, or philanthropies, getting a good return on their investment? Along with many others (based largely on intuition and anecdote) I believe that the system is broken, and at many different levels. What are possible ways forward? How might current systems of funding be reformed?

The Economist recently published a fascinating column (in the Finance and Economics section!), How to escape scientific stagnation. It reviews a number of recent papers by economists that wrestle with questions such as those above.

Philanthropists... funding of basic research has nearly doubled in the past decade. All these efforts aim to help science get back its risk-loving mojo.

In a working paper published last year, Chiara Franzoni and Paula Stephan look at a number of measures of risk, based on analyses of text and the variability of citations. These suggest science’s reward structure discourages academics from taking chances.

Another approach in vogue is to fund “people not projects”. A study in 2011 compared researchers at the Howard Hughes Medical Institute, where they are granted considerable flexibility over their research agendas and lots of time to carry out investigations, with similarly accomplished ones funded by a standard NIH programme. The study found that researchers at the institute took more risks. As a result, they produced nearly twice as much highly cited work, as well as a third more “flops” (articles with fewer citations than their previously least-cited work). 

Despite the uncertainty about exactly how best to fund scientific research, economists are confident of two things. The first is that a one-size-fits-all approach is not the right answer,... DARPA models, the Howard Hughes Medical Institute’s curiosity-driven method, and even handing out grants by lottery, as the New Zealand Health Research Council has tried, all have their uses.

The second is that this burst of experimentation must continue. The boss of the NSF, Sethuraman Panchanathan, agrees. He is looking to reassess projects whose reviews are highly variable—a possible indication of unorthodoxy. He is also interested in a Willy Wonka-style funding mechanism called the “Golden Ticket”, which would allow a single reviewer to champion a project even if his or her peers do not agree.  ...many venture-capital partnerships employ similar policies, because they prioritise the upside of long-shot projects rather than seeking to minimise failure. 

The study that I would like to see done is along the following lines. Identify at what age, at what type of institution, and in what type of funding environment the biggest breakthroughs happen. I suggest that you will find, in the U.S.A., that they were made by young faculty at the top 20 institutions in an era when they did not have to worry much about getting grants. If so, then I think most of the money should be given to them!

 

Friday, August 12, 2022

Sociological insights from statistical physics

Condensed matter physics and sociology are both about emergence. Phenomena in sociology that are intellectually fascinating and important for public policy often involve qualitative change, tipping points, and collective effects. One example is how social networks influence individual choices, such as whether or not to get vaccinated. In my previous post, I briefly introduced some Ising-type models that allow the investigation of fundamental questions in sociology. The main idea is to include heterogeneities and interactions in models of decision-making.

What follows is drawn from Sections 2 and 3 of the following paper from the Journal of Statistical Physics. 

Crises and Collective Socio-Economic Phenomena: Simple Models and Challenges by Jean-Philippe Bouchaud

Bouchaud first considers a homogeneous population which reaches an equilibrium state. This is then described by an Ising model with an interaction J (between agents), in an external field F that describes the incentive for the agents to make one of the choices. The state of the model (in the mean-field approximation) is then found by solving the Curie-Weiss equation. In the sociological context, this was first derived by Weidlich, and in the economic context re-derived by Brock and Durlauf. (Aside: The latter paper is in one of the "top-five" economics journals, was published five years after submission, and has been cited more than 2000 times.)

As first noted by Weidlich, a spontaneous “polarization” of the population occurs in the low-noise regime β > β_c, i.e., [the average equilibrium value of S_z] ϕ* ≠ 1/2, even in the absence of any individually preferred choice (i.e., F = 0). When F ≠ 0, one of the two equilibria is exponentially more probable than the other, and in principle the population should be locked into the most likely one: ϕ* > 1/2 whenever F > 0 and ϕ* < 1/2 whenever F < 0.

Unfortunately, the equilibrium analysis is not sufficient to draw such an optimistic conclusion. A more detailed analysis of the dynamics is needed, which reveals that the time needed to reach equilibrium is exponentially large in the number of agents, and as noted by Keynes, "in the long run, we are all dead." This situation is well-known to physicists, but is perhaps not so well appreciated in other circles—for example, it is not discussed by Brock and Durlauf.

Bouchaud then discusses the meta-stability associated with the two possible polarisations, as occurs in a first-order phase transition. From a non-equilibrium dynamical analysis, based on a Langevin equation, 

one finds that the time τ needed for the system, starting around ϕ = 0, to reach ϕ* ≈ 1 is given by: τ ∝ exp[A N (1 − F/J)], where A is a numerical factor. This means that whenever 0 < F < J, the system should really be in the socially good minimum ϕ* ≈ 1, but the time to reach it is exponentially large in the population size. The important point about this formula is the presence of the factor N(1 − F/J) in the exponential.

In other words, it has no chance of ever getting there on its own for large populations. Only when F reaches J, i.e. when the adoption cost C becomes zero will the population be convinced to shift to the socially optimal equilibrium...
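The exponential dependence on N is easy to see in a simulation. Below is a minimal sketch (again mine, with purely illustrative parameters, not code from the paper): N agents evolve under a parallel logit update rule with mean-field imitation J and incentive F, starting in the "bad" minimum, and we count how many sweeps the population needs to tip into the good one.

```python
import numpy as np

rng = np.random.default_rng(1)

def escape_sweeps(N, beta=2.0, J=1.0, F=0.2, max_sweeps=200_000):
    """Parallel logit dynamics for N agents with mean-field imitation.
    Start everyone at S_i = -1 (the metastable 'bad' minimum) and count
    sweeps until the population tips past the barrier."""
    S = -np.ones(N)
    for sweep in range(max_sweeps):
        h = J * S.mean() + F                          # imitation + incentive
        p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))  # logit choice probability
        S = np.where(rng.random(N) < p_up, 1.0, -1.0)
        if S.mean() > 0.5:                            # safely past the barrier
            return sweep
    return max_sweeps                                 # never escaped

for N in [20, 40, 60]:
    times = [escape_sweeps(N) for _ in range(5)]
    print(f"N = {N}: mean escape time ~ {np.mean(times):.0f} sweeps")
```

The mean escape time grows roughly exponentially with N, consistent with τ ∝ exp[A N(1 − F/J)]; already for a few dozen agents the socially good state is effectively unreachable on its own.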

This is very different from the standard model of innovation diffusion, based on a simple differential equation proposed by Bass in 1969 [cited more than 10,000 times].

In physics, the existence of mutually inaccessible minima with different potentials is a pathology of mean-field models that disappears when the interaction is short-ranged. In this case, the transition proceeds through “nucleation”, i.e. droplets of the good minimum appear in space and then grow by flipping spins at the boundaries. 

This suggests an interesting policy solution when social pressure resists the adoption of a beneficial practice or product: subsidize the cost locally, or make the change compulsory there, so that adoption takes place in localized spots from which it will invade the whole population. The very same social pressure that was preventing the change will make it happen as soon as it is initiated somewhere.

This analysis provides concepts to understand wicked problems. Societies get "trapped" in situations that are not for the common good, and outside interventions, such as providing incentives for individuals to make better choices, have little impact.

In the next post, I hope to discuss the role of heterogeneity (i.e. the role of a random field in the Ising model). A seminal paper published in the American Journal of Sociology in 1978 is Threshold models of collective behavior  by Mark Granovetter. It has been cited more than 6000 times. The central idea is how changes in heterogeneity can induce a transition between two different collective states.
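Granovetter's model is simple enough to state in a few lines of code. The sketch below reproduces his famous example: 100 agents whose thresholds (the fraction of others who must already be participating before they join in) are spread uniformly as 0/100, 1/100, ..., 99/100. A single instigator then triggers a complete cascade; but shift one agent's threshold slightly and the cascade dies immediately, so two almost identical populations produce radically different collective outcomes.

```python
import numpy as np

def cascade_size(thresholds):
    """Granovetter threshold dynamics: an agent participates once the
    fraction already participating reaches their threshold. Iterate to
    a fixed point and return the final participating fraction."""
    thresholds = np.sort(np.asarray(thresholds))
    frac = 0.0
    while True:
        new_frac = np.mean(thresholds <= frac)
        if new_frac == frac:
            return frac
        frac = new_frac

N = 100
uniform = np.arange(N) / N        # thresholds 0, 1/N, 2/N, ..., (N-1)/N
perturbed = uniform.copy()
perturbed[1] = 2 / N              # one agent becomes slightly more cautious

print(cascade_size(uniform))      # 1.00: the whole population joins in
print(cascade_size(perturbed))    # 0.01: only the instigator acts
```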

Aside: The famous Keynes quote was in his 1923 publication, A Tract on Monetary Reform. The fuller quote is “But this long run is a misleading guide to current affairs. In the long run we are all dead. Economists set themselves too easy, too useless a task, if in tempestuous seasons they can only tell us, that when the storm is long past, the ocean is flat again.”

Wednesday, August 3, 2022

Models for collective social phenomena

World news is full of dramatic and unexpected events in politics and economics, from stock market crashes to the rapid rise of extreme political parties. Trust in an institution can evaporate overnight.

The world is plagued by "wicked problems" (corruption, belief in conspiracy theories, poverty, ...) that resist a solution even when considerable resources (money, personnel, expertise, government policy, incentives, social activism) are devoted to addressing the problem. 

Here I introduce some ideas and models that are helpful for efforts to understand these emergent phenomena. Besides rapid change and discontinuities, other relevant properties include herding, trending, tipping points, and resilient equilibria. Some cultural traits or habits are incredibly persistent, even when they are damaging to a community. 

I now consider some key elements for minimal models of these phenomena: discrete choices, utility, incentives, noise, social interactions, and heterogeneity.

Discrete choices

The system consists of N agents {i} who make individual choices. Examples of binary choices are whether or not to buy a particular product, vote for a political candidate, believe a conspiracy theory, accept bribes, get vaccinated, or join a riot. For binary choices, the state of each agent is modelled by an "Ising spin", S_i = +1 or -1. 

Utility

This is the function each agent wants to maximise; what they think they will gain or lose by their decision. This could be happiness, health, ease of life, money, or pleasure.  The utility U_i will depend on the incentives provided to make a particular choice, the personal inclination of the agent, and possibly the state of other agents.

Personal inclination

Let f_i be a number representing the tendency for agent i to choose S_i = +1.

Incentives

All individuals make their decision based on the incentives offered. Knowledge of incentives is informed by public information.  This incentive F(t) may change with time. For example, the price of a product may decrease due to an advance in technology or a government may run an advertising program for a public health initiative.

Noise

No agent has access to perfect information in order to make their decision. This uncertainty can be modelled by a parameter beta, which increases with decreasing noise. According to the logit rule, the probability of a particular decision is

P(S_i = +1) = 1/[1 + exp(-beta DeltaU_i)]

where DeltaU_i = U_i(+1) - U_i(-1) is the difference in utility between the two choices.

1/beta is the analogue of temperature in statistical mechanics and this probability function is the Fermi-Dirac probability distribution!

Social interactions

No human is an island. Social pressure and imitation play a role in making choices. Even the most "independent-minded" individual makes decisions that are influenced somewhat by the decisions of others they interact with. These "neighbours" may be friends, newspaper columnists, relatives, advertisers, or participants in an internet forum. The utility for an individual may depend on the choices of others. The interaction parameter J_ij is the strength of the influence of agent j on agent i.

Heterogeneity

Everyone is different. People have different sensitivities to different incentives. This diversity reflects different personalities, values, and life circumstances. This heterogeneity can be modelled by assigning a probability distribution rho(f_i).

Putting all the ideas above together, the utility function for agent i is (schematically) of the form

U_i(S_i) = [f_i + F(t) + sum_j J_ij S_j] S_i

combining the personal inclination, the public incentive, and the social pressure from the agent's neighbours.
This means that the minimal model to investigate is a Random Field Ising model. It exhibits rich phenomena, many of which are similar to the social phenomena that were mentioned at the beginning of the post. Later posts will explore this.
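As a sketch of what such an investigation might look like (my own illustration, not code from Bouchaud's paper), the following combines the ingredients above: heterogeneous inclinations f_i drawn from a Gaussian rho, a common incentive F, a uniform mean-field interaction J, and the logit choice rule. Slowly ramping F up and then back down reveals hysteresis: the population tips at different values of F in the two directions and so retains a memory of its history.

```python
import numpy as np

rng = np.random.default_rng(42)

def sweep(S, f, F, J, beta):
    """One sweep of logit dynamics. The utility difference between the
    two choices is 2*(f_i + F + J*mean(S)), so each agent picks S_i = +1
    with the logit (Fermi-Dirac) probability."""
    for i in rng.permutation(len(S)):
        h = f[i] + F + J * S.mean()    # inclination + incentive + imitation
        p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))
        S[i] = 1.0 if rng.random() < p_up else -1.0
    return S

N, J, beta = 500, 1.0, 4.0
f = rng.normal(0.0, 0.3, N)            # heterogeneity: Gaussian rho(f_i)
S = -np.ones(N)                        # everyone starts with S_i = -1

# Slowly ramp the incentive F up, then back down again.
for F in list(np.linspace(-0.5, 0.5, 11)) + list(np.linspace(0.5, -0.5, 11)):
    for _ in range(50):                # let the population settle at each F
        S = sweep(S, f, F, J, beta)
    print(f"F = {F:+.2f}  ->  phi = {(S.mean() + 1) / 2:.2f}")
```

The fraction phi choosing +1 jumps discontinuously, and at different values of F on the way up and the way down, a hallmark of the trapping and abrupt collective change discussed above.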

The discussion above is drawn from a nice paper published in the Journal of Statistical Physics in 2013.

Crises and Collective Socio-Economic Phenomena: Simple Models and Challenges by Jean-Philippe Bouchaud.

Saturday, May 14, 2022

Emergence matters (in a nutshell)

Emergence is one of the most important concepts in the sciences: from physics to biology to sociology. Most of the big questions in science involve emergence. Yet there is no consensus about what emergence is, how to define it, or why it matters. This is my attempt to clarify some of the important issues and questions. For reasons of brevity, I give no references and only a few examples. They can come later. Here I am trying to take a path that is intermediate between the precision of philosophers and the looseness of condensed matter physicists' discussion of emergence. My goals are clarity and brevity.

Characteristics of emergent phenomena

Consider a system that is composed of many interacting parts. If the properties of the system are compared with the properties of the individual parts, a property of the whole system is an emergent property if it has the following characteristics.

1. Novelty 

An emergent property of the system is a property that is not present in the individual parts of the system.

2. Modification of parts 

An emergent property of the system is associated with a modification of the properties of and the relationships between the parts of the system. 

3. Universality

An emergent property is universal in the sense that it is independent of many of the details of the parts. As a consequence, there are many systems that can have the emergent property.

4. Irreducibility

An emergent property cannot be reduced to properties of the parts.

5. Limited predictability

An emergent property is difficult to predict solely from knowledge of the properties of the parts and how they interact with one another.

Here are a few issues to consider about the five characteristics above. 

First, “emergent property” could possibly be replaced with emergent phenomenon, object, or state.

Second, for each of the five characteristics, is it necessary and/or sufficient for the system property to be emergent?

Third, one of the most contested characteristics concerns predictability. “Difficult to predict” is sometimes replaced with “impossible”, “almost impossible”, “extremely difficult”, or “possible in principle, but impossible in practice.” After an emergent property has been observed, it can sometimes be understood in terms of the properties of the parts. An example is the BCS theory of superconductivity, which provided a posteriori, rather than a priori, understanding. A key word in the statement above is “solely”.

Examples of properties of a system that are not emergent are volume, mass, charge, and number of atoms. These are additive properties. The property of the system is simply the sum of the properties of the parts.

Scales and hierarchies

Central to emergence is the idea of different scales. Emergent properties typically appear only when these scales become sufficiently large. Scales that are simply defined, and might be called extrinsic, are the number of parts, length scale, and time scale. A more subtle scale, which might be called intrinsic, is a scale associated with the emergent property. This emergent scale is intermediate between that of the parts and that of the whole system.

Emergent scales lead naturally to hierarchies, such as the hierarchy of scientific disciplines (physics, chemistry, biology, psychology, sociology, ...). Hierarchies also occur within individual disciplines.

At each level there are distinct phenomena, concepts, theories, and scientific methods.

Another important scale is that of complexity. Generally, as one goes up the hierarchy one says that the level of complexity increases. Giving a precise version of such statements is not simple.

Complexity

Simple rules can lead to complex behaviour. This is nicely illustrated by cellular automata. It is also seen in other systems with emergent properties. For example, the laws describing the properties of electrons and ions in a crystal or a large molecule are quite simple (Schrodinger’s equation plus Coulomb’s law). Yet from these simple rules, complex phenomena emerge: all of chemistry and condensed matter physics!
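As a concrete illustration, here is a minimal sketch of an elementary cellular automaton. Rule 110 (known to be computationally universal) is specified by a lookup table with just eight entries, yet the pattern it generates from a single seed cell is remarkably intricate.

```python
import numpy as np

def run_eca(rule=110, width=80, steps=40):
    """Evolve an elementary cellular automaton: each cell's next state
    depends only on itself and its two neighbours, via an 8-entry table
    given by the binary digits of the rule number (Wolfram's convention)."""
    table = [(rule >> i) & 1 for i in range(8)]
    row = np.zeros(width, dtype=int)
    row[-1] = 1                                   # single seed cell
    for _ in range(steps):
        print("".join("#" if c else "." for c in row))
        left, right = np.roll(row, 1), np.roll(row, -1)
        row = np.array([table[4 * l + 2 * c + r]  # neighbourhood -> new state
                        for l, c, r in zip(left, row, right)])

run_eca()
```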

There is no agreed universal measure for the complexity of a system with many components. One possibility is Kolmogorov complexity. Using such measures to elucidate emergence, such as how complexity changes with other scales, is an important challenge.
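Kolmogorov complexity, roughly the length of the shortest program that outputs a given description of the system, is uncomputable, but the length of a losslessly compressed description gives a crude, computable upper bound. The sketch below illustrates the idea, and also one of its shortcomings: random noise scores as maximally complex even though it contains no interesting structure, which is one reason that making such statements precise is not simple.

```python
import random
import zlib

def compressed_size(s: str) -> int:
    """Length of the zlib-compressed string: a crude, computable
    stand-in for Kolmogorov complexity (an upper bound, up to constants)."""
    return len(zlib.compress(s.encode(), 9))

random.seed(0)
ordered = "01" * 500                                       # periodic: a short rule
noisy = "".join(random.choice("01") for _ in range(1000))  # incompressible noise

print(compressed_size(ordered))   # small: a few dozen bytes suffice
print(compressed_size(noisy))     # large: nearly as long as the string itself
```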

Other issues

There are a host of other issues and topics that enter discussions about emergence. Some of these are of a more philosophical nature. Here I just list them: robustness, quality vs. quantity, objective vs. subjective, universality vs. particularity, ontology vs. epistemology, discontinuities, incommensurability, theory reduction, asymptotic singularities, top-down causation, supervenience, differentiation and integration (not calculus) of system parts, reductionism, foundationalism, fundamentalism, strong versus weak emergence, and criteria for theory acceptance.

Discussion of some of these issues can be quite abstract, but they may need to be considered to make the discussion above more precise.

Emergence is relevant to practical matters such as scientific strategy, priorities, allocation of resources, and our dispositions as scientists. Too often views on these issues are implicit and not reflected upon. 

The practical matter of scientific strategy

When studying a system, the first choice that must be made is what scale or scales to focus on. For example, in materials science, the options range from the atomic scale to the macroscopic. This choice determines the tools and methods, both experimental and theoretical, that can be used to study the system. In different words, the scientist is making a choice of ontology: the object they choose to study. This then determines epistemology: the concepts, theories, and organising principles a scientist may use or hope to discover. Effective theories and toy models enter here. 

When systems have been studied by a range of methods and at a range of scales, a challenge is the synthesis of the results of these studies. Value-laden judgements are made about the priority, importance, and validity of such attempts at synthesis. Often synthesis is relegated to a few sentences in the introductions and conclusions of papers.

For known systems and emergent properties, there is the possibility of creating new methods and probes to investigate them at appropriate scales.

New systems can be created and investigated in the hope of discovering new emergent properties (e.g., new states of matter) or, more modestly, systems that manifest a known emergent property in a form more amenable to scientific study or technological application.

As emergent properties involve multiple scales, they are often of interest to, and amenable to study by, more than one scientific discipline. This creates opportunities and challenges for interdisciplinary collaboration.

Individual scientists must and do make decisions about the relative priority of the different strategies outlined above. Research groups, departments, institutions, professional societies, and funding agencies must and do also make decisions about such priorities. The decision outcomes are also emergent properties of a system with multiple scales from that of the individual scientist to global politics. I claim that too often these weighty decisions are made implicitly, rather than explicitly following debate and deliberation.

The disposition of the scientist

All scientists are human. In our professional life, we have hopes, aspirations, values, fears, attitudes, expectations, and prejudices. These are shaped by multiple influences, from the personal to the cultural to the institutional. We should reflect on the past century of our study of emergent systems, from physics to biology to sociology. If we honestly evaluate our successes and failures, I think this may lead us to certain interrelated dispositions.

Humility. There is so much we do not understand. Furthermore, we fail abjectly at predicting emergent properties. This is not surprising. Unpredictability is one of the characteristics of emergent properties. There is a hubris associated with grand initiatives such as “the theory of everything”, the Human Genome Project, “materials by design”, and macroeconomic modelling. 

Expect surprises. There are many exciting discoveries waiting. They will be found by curiosity and serendipity.

Wonder. Emergent phenomena are incredibly rich and beautiful to behold, from physics to biology to sociology. Furthermore, the past century has seen amazing levels of understanding. But this is a “big picture” and “coarse-grained” understanding, not the description that the reductionists lust for and claim possible. 

Realistic expectations. Given the considerations above I think we should have modest expectations of the levels of understanding possible, and what research programs, from that of individual scientists to billion-dollar initiatives, can achieve. We need to stop the hype. Modest expectations are particularly appropriate with respect to our ability to control emergent properties.

The holy grail

“The philosophers have only interpreted the world, in various ways. The point, however, is to change it.”

Karl Marx

Understanding complex systems with emergent properties is an ambitious scientific challenge. This enterprise has intrinsic intellectual merits. But a whole other dimension and challenge is to use this understanding to modify, manipulate, and control the properties of systems with emergent properties. This enticing prospect appeals to technologists, activists, and governments. Such promises feature prominently in grant applications, press releases, and reports from funding agencies. Diverse examples of this control goal include chemical modification of known superconductors to produce room-temperature superconductivity, drug design, social activism, the leadership of business corporations, and governments attempting to manage the economy. 

However, we should honestly reflect on decades of “scientifically informed” and “evidence-based” initiatives in materials science, medicine, poverty alleviation, government economic policy, business management, and political activism. Unfortunately, the fruit from these initiatives is disappointing, particularly compared to what has often been promised.

My goal is not to promote despair but rather to prevent it. With more realistic expectations, based on reality rather than fantasy, we are more likely to make modest but worthwhile progress in learning how to manipulate these complex systems.

This post contains many claims that require discussion, refinement or abandonment. I welcome suggestions on how to improve these ideas.

Thursday, April 14, 2022

Elite imitation and flailing universities

The mission of universities is thinking: teaching students to think and enabling scholars to think about the world we live in. Yet, it is debatable whether most universities in the world achieve these goals. Arguably, things are getting worse. Universities are flailing. Why?

Most universities desperately want to be elite. They want to be like Harvard, Caltech, Oxford, Princeton, Berkeley, Stanford, ...
But non-elite universities do not have the necessary resources to be elite. Yet they are controlled by elites (management on high salaries, faculty educated at elite universities) who want to be elite and so settle for elite imitation.

"A flailing university is what happens when the principal cannot control its agents. The flailing university cannot implement its own plans and may have its plans actively subverted when its agents work at cross-purposes. The non-elite university flails because it is simultaneously too large and too small: too large because the non-elite university attempts to legislate and regulate every aspect of the work of faculty and students and too small because it lacks the resources and personnel to achieve its ambitions. 

To explain the mismatch between the non-elite universities' ambitions and their abilities, consider the premature demands by elites in non-elite universities for goals, policies, curricula, infrastructure, and outcomes more appropriate to an elite university. 

In order to satisfy external actors (government, business, parents, ...) non-elite universities often take on tasks that overwhelm institutional capacity, leading to premature load bearing. As these authors put it, “By starting off with unrealistic expectations of the range, complexity, scale, and speed with which organizational capability can be built, external actors set both themselves and (more importantly) the students and researchers that they are attempting to assist to fail”. 

The expectations of external actors are only one source of imitation, however. Who people read, listen to, admire, learn from, and wish to emulate is also key. Another factor driving inappropriate imitation is that the elites in non-elite universities—senior management and high-profile faculty—are closely connected with business elites and elite universities, usually more closely than they are to the students and faculty at their own university. As a result, this elite initiates and supports policies that appear to it to be normal even though such policies may have little relevance to the students and faculty as a whole and may be wildly at odds with the university's capacity. This kind of mimicry of what appear to be the best elite university policies and practices is not necessarily ill-intentioned. It is simply one by-product of the background within which the elites operate. University managers engage with business elites and managers at other non-elite universities."

I actually did not write most of the text above; I took it from the first two pages of the article below and replaced some words (e.g., Indian state with non-elite university, Indian citizens with students and faculty).

Premature Imitation and India’s Flailing State 
Shruti Rajagopalan, Alexander T. Tabarrok

I came across the article after listening to a podcast episode that interviews the two authors, recommended by my son.

I also recommend Shruti's own podcast, Ideas of India, including a recent episode, Where did development economics go wrong?


What do you think? Are universities like a flailing state? Is the problem elite imitation?

Saturday, November 13, 2021

If organisations are emergent can they be managed?

 Any organisation is composed of many interacting parts. For example, a university is not just composed of staff and students, but also includes collaborators, donors, employers, suppliers, parents, graduates, and trustees. Their interactions with one another are influenced by structures, such as buildings, committees, and government policy. Furthermore, a university exists in a context: political, economic, historical, and cultural. What emerges from the interactions of all these components may be new states, for good or for ill. Like all emergent phenomena these states are hard to predict. For example, what will lead to high-quality education or a diverse student body? Can desirable outcomes be managed? What is the role of leadership in large organisations? Are there some universal principles of management that are useful for a wide range of organisations, whether corporations, NGOs, universities, or government departments?

Researching, teaching, and writing about "Organisational Development" and "management" is a massive industry, from business schools in universities to a multitude of popular books for sale in airports. A fascinating paper is

The Dialogic Mindset: Leading Emergent Change in a Complex World by Gervase Bushe and Robert Marshak.

It questions the paradigm of the "visionary leader", "command and control", and the "performance mindset" that focuses on instrumental and measurable goal setting and achievement.

To understand the limitations of this management paradigm I find it helpful to reflect on the history and context of how it emerged (!) in the USA after World War II. After the war, veterans who returned to civilian life had experienced a particular leadership and organisational culture of the military: hierarchy, authority, process, discipline, solidarity, male, mono-cultural, ...  And, it worked in the context of war!

Many war veterans, both junior and senior, took this approach and mentality into industry, and it worked well in the American post-war economic boom of assembly-line-based large-scale manufacturing. The automotive industry, centred around Detroit, was representative. Arguably, the success was based on efficiency not innovation, limited competition in a simple market, and a homogeneous workforce. Two important figures who emerged from this Detroit era were Peter Drucker and Robert McNamara. Drucker did a seminal two-year study of General Motors, during WWII, that started his trajectory towards becoming the doyen of management studies. McNamara took his strategic planning experience in the war, and applied it successfully at Ford for 15 years, rising to become President of Ford in 1960. He then became Secretary of Defense for JFK and used the same management approach for the USA's involvement in the Vietnam war. This was an unmitigated disaster, but that did not stop him from using a similar approach when President of the World Bank.

Back to Bushe and Marshak and today's world. They claim that

The “visionary leader” narrative and performance mindset that predominate in theories and practices of “Change Leadership” are no longer effective in an environment of multi-dimensional diversity marked by volatility, uncertainty, complexity, and ambiguity.

The prevailing narrative of leadership is based on the assumption that great leaders must [be strategic thinkers], have a vision, and the ability to lead followers to that vision. Leaders, followers, and commentators alike assume that being a visionary is indispensable to organizational leadership.

... a leading voice supporting an alternative paradigm is Heifetz’s (1998) leadership model that indirectly challenges the heroic, visionary orthodoxy. He divides the decision situations leaders face into technical problems, which can be defined and solved through a top-down imposition of technical rationality; and adaptive challenges, which can only be “solved” through the voluntary engagement of the people who will have to change what they do and how they think. 
In Heifetz’s alternative narrative of leadership, adaptive leaders identify challenges but instead of providing solutions, they encourage employees and other stakeholders to propose and act on their own solutions.

 A nice example is how employees shaped strategy at the New York Public Library. 

The problem with the standard narrative is that it overlooks that organisations are emergent entities where cause-effect relations are not understood and outcomes are hard to predict. This challenge is exacerbated today by the fact that any organisation is not an isolated entity but is immersed in a complex and rapidly changing environment. This puts a premium on innovation and adaptability. 

Future posts will explore what this might mean in practice. Can self-organising processes and emergence achieve desired outcomes by "changing the conversation"?

The role of superconductivity in development of the Standard Model

In 1986, Steven Weinberg published an article, Superconductivity for Particular Theorists, in which he stated "No one did more than N...