Monday, June 26, 2023

What is really fundamental in science?

What do we mean when we say something in science is fundamental? When is an entity or a theory more fundamental or less fundamental than something else? For example, are quarks and leptons more fundamental than atoms? Is statistical mechanics more fundamental than thermodynamics? Is physics more fundamental than chemistry or biology? In a fractional quantum Hall state, are electrons or the fractionally charged quasiparticles more fundamental?

The answers depend on whom you ask. Physicists such as Phil Anderson, Steven Weinberg, Bob Laughlin, Richard Feynman, Frank Wilczek, and Albert Einstein have held quite different views.

In 2017-8, the Foundational Questions Institute (FQXi) held an essay contest to address the question, “What is Fundamental?” Of the 200 entries, 15 prize-winning essays have been published in a single volume. The editors give a nice overview in the Introduction.

This post is mostly about the first-prize-winning essay, "Fundamental?", by Emily Adlam, a philosopher of physics. She contrasts two provocative statements:

Fundamental means we have won. The job is done and we can all go home.

Fundamental means we have lost. Fundamental is an admission of defeat.

This raises the question of whether being fundamental is objective or subjective.

Adlam gives examples from the history of science to argue that what is considered fundamental has changed with time. Reductionism has driven the effort to explain everything in terms of smaller and smaller entities, which are deemed "more fundamental". But we find that smaller does not always mean simpler.

Perhaps we should ask what needs explaining and what constitutes a scientific explanation. For example, Adlam asks whether explaining the fact that the initial state of the universe had low entropy [the "past hypothesis"] is really possible, or whether it should even be an important goal.

She then draws on the distinction between objective and subjective probabilities. Probabilities in statistical mechanics are subjective: they are statements about our ignorance of the details of the motion of individual atoms, not about any underlying randomness in nature. In contrast, probabilities in quantum theory reflect objective chance.
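To illustrate the first point, here is a toy numerical sketch (my own illustration, not from the essay) of how ignorance-based probabilities can arise from perfectly deterministic dynamics: iterating the deterministic doubling map from an imprecisely known initial condition produces statistics that look like a fair coin.

    import numpy as np

    rng = np.random.default_rng(0)

    def doubling_map(x, n):
        # Iterate the deterministic map x -> 2x mod 1, n times.
        for _ in range(n):
            x = (2.0 * x) % 1.0
        return x

    # Our ignorance of the exact initial condition is modelled
    # by a spread of possible starting points.
    x0 = rng.uniform(0.0, 1.0, size=100_000)
    x_final = doubling_map(x0, 30)

    # Call the outcome "heads" if the state ends up in the upper half.
    p_heads = np.mean(x_final > 0.5)
    print(f"fraction of 'heads' = {p_heads:.3f}")   # close to 0.5

    # The dynamics contain no randomness at all; the 50/50 statistics
    # reflect only our ignorance of the initial condition, which is the
    # sense in which statistical-mechanical probabilities are subjective.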

as realists about science we must surely maintain that there is a need for science to explain the existence of the sorts of regularities that allow us to make reliable predictions... but there is no similarly pressing need to explain why these regularities take some particular form rather than another. Yet our paradigmatic mechanical explanations do not seem to be capable of explaining the regularity without also explaining the form, and so increasingly in modern physics we find ourselves unable to explain either. 

It is in this context that we naturally turn to objective chance. The claim that quantum particles just have some sort of fundamental inbuilt tendency to turn out to be spin up on some proportion of measurements and spin down on some proportion of measurements does indeed look like an attempt to explain a regularity (the fact that measurements on quantum particles exhibit predictable statistics) without explaining the specific form (the particular sequence of results obtained in any given set of experiments). But given the problematic status of objective chance, this sort of nonexplanation is not really much better than simply refraining from explanation at all. 

Why is it that objective chances seem to be the only thing we have in our arsenal when it comes to explaining regularities without explaining their specific form? It seems likely that part of the problem is the reductionism that still dominates the thinking of most of those who consider themselves realists about science

In summary, (according to the Editors) Adlam argues that "science should be able to explain the existence of the sorts of regularities that allow us to make reliable predictions. But this does not necessarily mean that it must also explain why these regularities take some particular form." 

we are in dire need of another paradigm shift. And this time, instead of simply changing our attitudes about what sorts of things require explanation, we may have to change our attitudes about what counts as an explanation in the first place. 

Here she is arguing that what counts as fundamental is subjective, a matter of values and taste.

In our standard scientific thinking the fundamental is elided with ultimate truth: getting to grips with the fundamental is the promised land, the endgame of science. 

She then raises questions about the vision and hopes of scientific reductionists. 

In this spirit, the original hope of the reductionists was that things would get simpler as we got further down, and eventually we would be left with an ontology so simple that it would seem reasonable to regard this ontology as truly fundamental and to demand no further explanation. 

But the reductionist vision seems increasingly to have failed. 

When we theorise beyond the standard model [BSM] we usually find it necessary to expand the ontology still more: witness the extra dimensions required to make string theory mathematically consistent.

It is not just strings. Peter Woit has emphasised how BSM theories, such as supersymmetry, introduce many more particles and parameters.

... the messiness deep down is a sign that the universe works not ‘bottom-up’ but rather ‘top-down,’ ... in many cases, things get simpler as we go further up.

Our best current theories are renormalisable, meaning that many different possible variants on the underlying microscopic physics all give rise to the same macroscopic physical theory, known as an infrared fixed point. This is usually glossed as providing an explanation of why it is that we can do sensible macroscopic physics even without having detailed knowledge of the underlying microscopic theories. 

For example, elasticity theory, thermodynamics, and fluid dynamics all work without any knowledge of atoms, statistical mechanics, or quantum theory.
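A minimal sketch of the fixed-point idea (my own toy example, not from the essay): under the exact decimation renormalisation-group step for the zero-field one-dimensional Ising chain, tanh K' = (tanh K)^2, very different microscopic couplings K flow to the same infrared fixed point, so their long-distance physics is indistinguishable.

    import numpy as np

    def decimate(K):
        # One RG decimation step for the zero-field 1D Ising chain:
        # summing over every second spin gives tanh(K') = tanh(K)**2.
        return np.arctanh(np.tanh(K) ** 2)

    for K0 in [0.5, 1.0, 2.0, 5.0]:   # very different microscopic couplings J/kT
        K = K0
        for _ in range(25):
            K = decimate(K)
        print(f"K0 = {K0:3.1f}  ->  K after 25 RG steps = {K:.2e}")

    # All the flows approach the same infrared fixed point K* = 0:
    # the macroscopic (long-distance) behaviour is the same even though
    # the microscopic couplings we started from were very different.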

But one might argue that this is getting things the wrong way round: the laws of nature don’t start with little pieces and build the universe from the bottom up, rather they apply simple macroscopic constraints to the universe as a whole and work out what needs to happen on a more fine-grained level in order to satisfy these constraints.

This is rather reminiscent of Laughlin's views about what is fundamental.

Finally, I mention two other essays that I look forward to reading as I think they make particularly pertinent points.

Marc Séguin (Chap. 6) distinguishes "between epistemological fundamentality (the fundamentality of our scientific theories) and ontological fundamentality (the fundamentality of the world itself, irrespective of our description of it)."

"In Chap. 12, Gregory Derry argues that a fundamental explanatory structure should have four key attributes: irreducibility, generality, commensurability, and fertility."

[Quotes are from the Introduction by the Editors].

Some would argue that the Standard Model is fundamental, at least at some level. But it involves 19 parameters that have to be fixed by experiment. Related questions about the Fundamental Constants have been explored in a 2007 paper by Frank Wilczek.
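For what it is worth, here is one conventional way of tallying those 19 parameters (my own bookkeeping for illustration; other conventions, e.g. including neutrino masses, give a larger count).

    # One conventional counting of the free parameters of the Standard Model
    # (massless neutrinos assumed; other conventions shuffle the bookkeeping).
    parameters = {
        "quark masses": 6,
        "charged lepton masses": 3,
        "CKM mixing angles": 3,
        "CKM CP-violating phase": 1,
        "gauge couplings (SU(3), SU(2), U(1))": 3,
        "Higgs sector (vev and quartic coupling)": 2,
        "QCD theta angle": 1,
    }
    print(sum(parameters.values()))   # 19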

Again, I thank Peter Evans for bringing this volume to my attention.

2 comments:

  1. "But one might argue that this is getting things the wrong way round: ..."

    This would imply that a process going against principle is thwarted by a top-down correction. There is no evidence that such a thing ever happens.

    For example, in the Gunter Nimtz experiment, there was nothing that stopped the music from being transmitted.

  2. I was reading the book online, and an interesting thought struck me. They consider an infinite tower of theories at successively smaller scales. What if such a tower exists, but instead of a series of increasingly novel theories, there's a functor that maps each theory to its successor, which is the exact same functor at every level. Then there should be a way to formally sum this series. If there's a starting scale where the series begins, and the scale keeps getting smaller, then -- considering the way formal sums tend to work -- it's possible that the overall scale of the completed infinity would be larger than what you started with. I'm thinking that such a scale could be the "mesoscopic scale" postulated in collapse theories.

