Two approaches to the theoretical description of systems with emergent properties have proven fruitful: effective theories and toy models. Both work around our limited knowledge of the many details of a system with many interacting components.

**Effective theories**

An effective theory is valid over a particular range of scales. This exploits the fact that in complex systems there is often a hierarchy of scales (length, energy, time, or number). In physics, examples of effective theories include classical mechanics, general relativity, classical electromagnetism, and thermodynamics. The equations of an effective theory can be written down almost solely from considerations of symmetry and conservation laws. Examples include the Navier-Stokes equations for fluid dynamics and non-linear sigma models in elementary particle physics. Some effective theories can be derived by the “coarse-graining” of theories that are valid at a finer scale. For example, the equations of classical mechanics result from taking the limit of Planck’s constant going to zero in the equations of quantum mechanics. The Ginzburg-Landau theory for superconductivity can be derived from the BCS theory. The parameters in effective theories may be determined from more microscopic theories or from fitting experimental data to the predictions of the theory. For example, transport coefficients such as conductivities can be calculated from a microscopic theory using a Kubo formula.

Effective theories are useful and powerful because of the minimal assumptions and parameters used in their construction. For the theory to be useful it is *not* necessary to be able to derive the effective theory from a smaller-scale theory, or even to have such a smaller-scale theory. For example, even though there is no accepted quantum theory of gravity, general relativity can be used to describe phenomena in astrophysics and cosmology and is accepted to be valid on the macroscopic scale. Some physicists and philosophers may consider smaller-scale theories to be more fundamental, but that is contested and so I will not use that language. There are also debates about how effective field theories fit into the philosophy of science.

**Toy models**

In his 2016 Nobel Lecture, Duncan Haldane said, “Looking back, … I am struck by how important the use of stripped down ‘toy models’ has been in discovering new physics.”

Here I am concerned with a class of theoretical models that includes the Ising, Hubbard, NK, Schelling, and Sherrington-Kirkpatrick models, as well as agent-based models. I refer to them as “toy” models because they aim to be as simple as possible while still capturing the essential details of a particular emergent phenomenon. At the scale of interest, such a model is an approximation, neglecting certain degrees of freedom and interactions. In contrast, effective theories are often considered to be exact at the relevant scale because they are based on general principles.

Historical experience has shown that there is a strong justification for the proposal and study of toy models. They are concerned with a qualitative, rather than a quantitative, description of experimental data. A toy model is usually introduced to answer basic questions about **what is possible.** What are the essential ingredients that are sufficient for an emergent phenomenon to occur? What details do matter? For example, the Ising model was introduced in 1920 to see whether it was possible for statistical mechanics to describe the sharp phase transition associated with ferromagnetism.
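To make this concrete, here is a minimal sketch (my illustration, not part of the discussion above) of the two-dimensional Ising model simulated with the standard Metropolis Monte Carlo algorithm; the lattice size, temperatures, and sweep counts are arbitrary choices for demonstration. Below the critical temperature (about 2.27 in units where the coupling J and Boltzmann’s constant are one), the magnetisation stays close to one; well above it, the magnetisation decays towards zero, illustrating the sharp qualitative change the model was introduced to capture.

```python
import math
import random

def ising_metropolis(L=16, T=2.0, sweeps=200, seed=0):
    """Metropolis Monte Carlo for the 2D Ising model (J = 1, zero field).

    Returns the absolute magnetisation per spin after `sweeps` lattice sweeps.
    """
    rng = random.Random(seed)
    # Start from an ordered configuration (all spins up).
    spins = [[1] * L for _ in range(L)]
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            # Sum of the four nearest neighbours (periodic boundaries).
            nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2 * spins[i][j] * nn  # energy cost of flipping spin (i, j)
            # Accept the flip with the Metropolis probability min(1, e^{-dE/T}).
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spins[i][j] = -spins[i][j]
    m = sum(sum(row) for row in spins) / (L * L)
    return abs(m)
```

Comparing `ising_metropolis(T=1.0)` with `ising_metropolis(T=5.0)` should show the ordered and disordered regimes on either side of the transition, even on this small lattice.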

In his book *The Model Thinker* and online course Model Thinking, Scott Page has enumerated the value of simple models in the social sciences. An earlier argument for their value in biology was made by J.B.S. Haldane in his seminal article about “bean bag” genetics. Simplicity makes toy models more tractable for mathematical analysis and/or computer simulation. The assumptions made in defining the model can be clearly stated. If the model is tractable, then the pure logic of mathematical analysis leads to reliable conclusions. This contrasts with the qualitative arguments often used in the biological and social sciences to propose explanations. Such arguments can miss the counter-intuitive conclusions about emergent phenomena that the rigorous analysis of toy models can reveal. Such models can show what is possible, what simple ingredients are *sufficient* for a system to exhibit an emergent property, and how a quantitative change can lead to a qualitative change. In other words, what details do matter?
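As an illustration of this tractability (a sketch of my own, with arbitrary grid size, vacancy fraction, and tolerance threshold), Schelling’s segregation model can be simulated in a few dozen lines. Agents with only a mild preference, wanting at least 30% of their neighbours to be of their own type, nonetheless end up in strongly segregated neighbourhoods: a counter-intuitive conclusion that the model makes rigorous.

```python
import random

def schelling(n=20, frac_empty=0.1, threshold=0.3, steps=10000, seed=0):
    """One run of a minimal Schelling segregation model on an n x n grid.

    Agents of two types (+1 and -1) move to a random empty cell whenever
    fewer than `threshold` of their occupied neighbours share their type.
    Returns the mean fraction of like-type neighbours at the end of the run.
    """
    rng = random.Random(seed)
    # Equal numbers of each type; 0 marks an empty cell.
    cells = [1, -1] * (int(n * n * (1 - frac_empty)) // 2)
    cells += [0] * (n * n - len(cells))
    rng.shuffle(cells)
    grid = [cells[i * n:(i + 1) * n] for i in range(n)]

    def like_fraction(i, j):
        # Fraction of occupied Moore neighbours (periodic boundaries)
        # that have the same type as the agent at (i, j).
        same = occupied = 0
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if di == dj == 0:
                    continue
                s = grid[(i + di) % n][(j + dj) % n]
                if s != 0:
                    occupied += 1
                    same += (s == grid[i][j])
        return same / occupied if occupied else 1.0

    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if grid[i][j] != 0 and like_fraction(i, j) < threshold:
            empties = [(a, b) for a in range(n) for b in range(n)
                       if grid[a][b] == 0]
            a, b = rng.choice(empties)
            grid[a][b], grid[i][j] = grid[i][j], 0

    occupied_sites = [(i, j) for i in range(n) for j in range(n)
                      if grid[i][j] != 0]
    return sum(like_fraction(i, j) for i, j in occupied_sites) / len(occupied_sites)
```

A well-mixed random grid gives a like-neighbour fraction near 0.5; after the dynamics run, the value is typically well above that, even though no individual agent wants a majority of like neighbours.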

Toy models can guide what experimental data to gather and how to analyse it. Insight can be gained by considering multiple models as that approach can be used to rule out alternative hypotheses. Finally, there is value in the adage, “all models are wrong, but some are useful.”

Due to universality, toy models sometimes work better than expected and can even give a quantitative description of experimental data. An example is the three-dimensional Ising model, which was eventually found to be consistent with data on the liquid-gas transition near the critical point. Although the liquid-gas transition is not a magnetic phenomenon, the analogy was bolstered by the mapping of the Ising model onto the lattice gas model. This success led to a shift in the attitude of physicists towards the Ising model. According to Martin Niss, from 1920 to 1950 it was viewed as irrelevant to magnetism because it did not describe magnetic interactions quantum mechanically. This was replaced by the view that it was a model that could give insights into collective phenomena. From 1950 to 1965, the view that the Ising model was too great an oversimplification of the microscopic interactions to describe critical phenomena faded.

Physicists are particularly experienced and skilled at the proposal and analysis of toy models. I think this expertise is a niche that they could exploit more in contributing to other fields, from biology to the social sciences. They just need the humility to listen to non-physicists about what the important questions and essential details are.