Monday, June 26, 2023

What is really fundamental in science?

What do we mean when we say something in science is fundamental? When is an entity or a theory more fundamental or less fundamental than something else? For example, are quarks and leptons more fundamental than atoms? Is statistical mechanics more fundamental than thermodynamics? Is physics more fundamental than chemistry or biology? In a fractional quantum Hall state, are electrons or the fractionally charged quasiparticles more fundamental?

Answers depend on who you ask. Physicists such as Phil Anderson, Steven Weinberg, Bob Laughlin, Richard Feynman, Frank Wilczek, and Albert Einstein have different views.

In 2017-8, the Foundational Questions Institute (FQXi) held an essay contest to address the question, “What is Fundamental?” Of the 200 entries, 15 prize-winning essays have been published in a single volume. The editors give a nice overview in the Introduction.

This post is mostly about Fundamental?, the first-prize-winning essay by Emily Adlam, a philosopher of physics. She contrasts two provocative statements.

Fundamental means we have won. The job is done and we can all go home.

Fundamental means we have lost. Fundamental is an admission of defeat.

This raises the question of whether being fundamental is objective or subjective.

Examples are given from scientific history to argue that what is considered fundamental has changed with time. Reductionism has driven the effort to explain everything in terms of smaller and smaller entities, which are deemed "more fundamental". But we find that smaller does not always mean simpler.

Perhaps we should ask what needs explaining and what constitutes a scientific explanation. For example, Adlam asks whether explaining the fact that the initial state of the universe had a low entropy [the "past hypothesis"] is really possible or should be an important goal.

She draws on the distinction between objective and subjective probabilities. Probabilities in statistical mechanics are subjective: they are a statement of our ignorance about the details of the motion of individual atoms, not of any underlying randomness in nature. In contrast, probabilities in quantum theory are taken to reflect objective chance.

as realists about science we must surely maintain that there is a need for science to explain the existence of the sorts of regularities that allow us to make reliable predictions... but there is no similarly pressing need to explain why these regularities take some particular form rather than another. Yet our paradigmatic mechanical explanations do not seem to be capable of explaining the regularity without also explaining the form, and so increasingly in modern physics we find ourselves unable to explain either. 

It is in this context that we naturally turn to objective chance. The claim that quantum particles just have some sort of fundamental inbuilt tendency to turn out to be spin up on some proportion of measurements and spin down on some proportion of measurements does indeed look like an attempt to explain a regularity (the fact that measurements on quantum particles exhibit predictable statistics) without explaining the specific form (the particular sequence of results obtained in any given set of experiments). But given the problematic status of objective chance, this sort of nonexplanation is not really much better than simply refraining from explanation at all. 

Why is it that objective chances seem to be the only thing we have in our arsenal when it comes to explaining regularities without explaining their specific form? It seems likely that part of the problem is the reductionism that still dominates the thinking of most of those who consider themselves realists about science

In summary (according to the Editors), Adlam argues that "science should be able to explain the existence of the sorts of regularities that allow us to make reliable predictions. But this does not necessarily mean that it must also explain why these regularities take some particular form."

we are in dire need of another paradigm shift. And this time, instead of simply changing our attitudes about what sorts of things require explanation, we may have to change our attitudes about what counts as an explanation in the first place. 

Here, she is arguing that what is fundamental is subjective, a matter of values and taste.

In our standard scientific thinking the fundamental is elided with ultimate truth: getting to grips with the fundamental is the promised land, the endgame of science. 

She then raises questions about the vision and hopes of scientific reductionists. 

In this spirit, the original hope of the reductionists was that things would get simpler as we got further down, and eventually we would be left with an ontology so simple that it would seem reasonable to regard this ontology as truly fundamental and to demand no further explanation. 

But the reductionist vision seems increasingly to have failed. 

When we theorise beyond the standard model [BSM] we usually find it necessary to expand the ontology still more: witness the extra dimensions required to make string theory mathematically consistent.

It is not just strings. Peter Woit has emphasised how BSM theories, such as supersymmetry, introduce many more particles and parameters.

... the messiness deep down is a sign that the universe works not ‘bottom-up’ but rather ‘top-down,’ ... in many cases, things get simpler as we go further up.

Our best current theories are renormalisable, meaning that many different possible variants on the underlying microscopic physics all give rise to the same macroscopic physical theory, known as an infrared fixed point. This is usually glossed as providing an explanation of why it is that we can do sensible macroscopic physics even without having detailed knowledge of the underlying microscopic theories. 

For example, elasticity theory, thermodynamics, and fluid dynamics all work without any reference to atoms, statistical mechanics, or quantum theory.
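
As a concrete (if schematic) illustration of an attractive infrared fixed point, consider the kind of one-loop flow equation that appears in the epsilon expansion for critical phenomena; the symbols g, b, c, and epsilon here are generic placeholders for illustration, not anything specific to Adlam's essay:

\[
\frac{dg}{d\ln b} = \epsilon\, g - c\, g^{2}, \qquad c > 0,
\]

where g is a dimensionless coupling and b is the length-rescaling factor, which grows as short-distance degrees of freedom are integrated out. For \(\epsilon > 0\) the flow has a stable fixed point at \(g^{*} = \epsilon/c\): a whole range of different microscopic starting values of g flow to the same \(g^{*}\) at long distances, which is the sense in which many different microscopic theories can share the same macroscopic behaviour.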

But one might argue that this is getting things the wrong way round: the laws of nature don’t start with little pieces and build the universe from the bottom up, rather they apply simple macroscopic constraints to the universe as a whole and work out what needs to happen on a more fine-grained level in order to satisfy these constraints.

This is rather reminiscent of Laughlin's views about what is fundamental.

Finally, I mention two other essays that I look forward to reading as I think they make particularly pertinent points.

Marc Séguin (Chap. 6) distinguishes "between epistemological fundamentality (the fundamentality of our scientific theories) and ontological fundamentality (the fundamentality of the world itself, irrespective of our description of it)."

"In Chap. 12, Gregory Derry argues that a fundamental explanatory structure should have four key attributes: irreducibility, generality, commensurability, and fertility."

[Quotes are from the Introduction by the Editors].

Some would argue that the Standard Model is fundamental, at least on some level. But it involves 19 parameters that have to be fixed from experiment. Related questions about the Fundamental Constants have been explored in a 2007 paper by Frank Wilczek.

Again, I thank Peter Evans for bringing this volume to my attention.

Saturday, June 17, 2023

Why do deep learning algorithms work so well?

I am interested in analogies between cognitive science and artificial intelligence. Emergent phenomena occur in both, there has been some fruitful cross-fertilisation of ideas, and the extent of the analogies is relevant to debates on fundamental questions concerning human consciousness.

Given my general ignorance and confusion on some of the basics of neural networks, AI, and deep learning, I am looking for useful and understandable resources.

Related questions are explored in a nice informative article from 2017 in Quanta magazine, New Theory Cracks Open the Black Box of Deep Learning by Natalie Wolchover.

Like a brain, a deep neural network has layers of neurons — artificial ones that are figments of computer memory. When a neuron fires, it sends signals to connected neurons in the layer above. During deep learning, connections in the network are strengthened or weakened as needed to make the system better at sending signals from input data — the pixels of a photo of a dog, for instance — up through the layers to neurons associated with the right high-level concepts, such as “dog.” 

After a deep neural network has “learned” from thousands of sample dog photos, it can identify dogs in new photos as accurately as people can. The magic leap from special cases to general concepts during learning gives deep neural networks their power, just as it underlies human reasoning, creativity and the other faculties collectively termed “intelligence.” 

Experts wonder what it is about deep learning that enables generalization — and to what extent brains apprehend reality in the same way.
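
For readers (like me) who want to see the bare bones behind the description quoted above, here is a minimal sketch in Python/NumPy of a tiny feedforward network trained by gradient descent. It is a toy (two inputs, one hidden layer, learning XOR), not any of the networks discussed in the article, and the layer sizes, learning rate, and number of steps are arbitrary choices for illustration.

```python
# A toy illustration of the quoted description: layers of artificial
# "neurons" whose connection weights are strengthened or weakened during
# training so that input signals get mapped to the right output label.
# A minimal sketch only, not the networks analysed in the article.
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: the XOR function
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 neurons, one output neuron
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

lr = 0.5  # learning rate
for step in range(20000):
    # Forward pass: signals propagate up through the layers
    h = sigmoid(X @ W1 + b1)   # hidden-layer activations
    p = sigmoid(h @ W2 + b2)   # network's prediction

    # Backward pass: the squared-error gradient strengthens or
    # weakens each connection weight
    grad_out = (p - y) * p * (1 - p)
    grad_hid = (grad_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_hid
    b1 -= lr * grad_hid.sum(axis=0)

print(np.round(p, 2))  # should be close to [[0], [1], [1], [0]]
```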

The article describes work by Naftali Tishby and collaborators that provides some insight into why deep learning methods work so well. This was first described in purely theoretical terms in a 2000 preprint,

The information bottleneck method, Naftali Tishby, Fernando C. Pereira, William Bialek 

The idea is that a network rids noisy input data of extraneous details as if by squeezing the information through a bottleneck, retaining only the features most relevant to general concepts.
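
For reference, the trade-off that defines the information bottleneck can be written compactly. With input variable X, relevant variable (label) Y, and compressed representation T, Tishby, Pereira, and Bialek choose the stochastic map p(t|x) to minimise a functional of the form

\[
\mathcal{L}\big[p(t|x)\big] \;=\; I(X;T) \;-\; \beta\, I(T;Y),
\]

where \(I(\cdot\,;\cdot)\) denotes mutual information and the Lagrange multiplier \(\beta \ge 0\) controls the trade-off between compressing the input (small I(X;T)) and retaining the information relevant to the label (large I(T;Y)).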

Tishby was stimulated in new directions in 2014 after reading a surprising paper by the physicists David Schwab and Pankaj Mehta,

An exact mapping between the Variational Renormalization Group and Deep Learning

[They] discovered that a deep-learning algorithm invented by Geoffrey Hinton called the “deep belief net” works, in a particular case, exactly like renormalization [group methods in statistical physics]... When they applied the deep belief net to a model of a magnet at its “critical point,” where the system is fractal, or self-similar at every scale, they found that the network automatically used the renormalization-like procedure to discover the model’s state.
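
In rough outline (my paraphrase of the Mehta–Schwab construction, in schematic notation), the mapping identifies the coarse-graining step of variational RG with marginalising over a layer of a restricted Boltzmann machine. If v denotes the visible (microscopic) spins governed by a Hamiltonian H[v], and h the hidden (coarse-grained) spins coupled to them through a joint energy E(v,h), then the effective Hamiltonian for the hidden spins,

\[
e^{-H'[h]} \;=\; \sum_{v} e^{-E(v,h)},
\]

is the same object as the unnormalised marginal distribution over the hidden units of the RBM, so stacking hidden layers implements successive coarse-graining steps.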

Although this connection was a valuable new insight, the specific case of a scale-free system is not relevant to many deep learning situations.

Tishby and Ravid Shwartz-Ziv discovered that 

Over the course of training, common patterns in the training data become reflected in the strengths of the connections, and the network becomes expert at correctly labeling the data, such as by recognizing a dog, a word, or a 1.

...layer by layer, the networks converged to the information bottleneck theoretical bound: a theoretical limit derived in Tishby, Pereira and Bialek’s original paper that represents the absolute best the system can do at extracting relevant information. At the bound, the network has compressed the input as much as possible without sacrificing the ability to accurately predict its label...

...deep learning proceeds in two phases: a short “fitting” phase, during which the network learns to label its training data, and a much longer “compression” phase, during which it becomes good at generalization, as measured by its performance at labeling new test data.

What these new discoveries teach us about the relationship between learning in humans and in machines is contentious and is explored briefly in the article. Although neural nets were originally inspired by the structure of the human brain, the connection between the brain and the neural nets used today is tenuous.

The mystery of how brains sift signals from our senses and elevate them to the level of our conscious awareness drove much of the early interest in deep neural networks among AI pioneers, who hoped to reverse-engineer the brain’s learning rules. AI practitioners have since largely abandoned that path in the mad dash for technological progress, instead slapping on bells and whistles that boost performance with little regard for biological plausibility.

Wednesday, June 14, 2023

Demonstrating polymer entanglement

From Steve Spangler I learnt this "party trick" demonstration of how the polymer molecules (polyethylene) in a plastic bag are entangled with one another. 

I was not sure that it would work as easily as it did for him. But it did!

Tuesday, June 6, 2023

Condensed Matter Physics: A Very Short Introduction out now!

Hard copies of my book can now be purchased directly from Oxford University Press. 


After a long wait and a lot of work, it was great to finally see it in print. I am very happy with the quality of the typesetting and the figures.

I look forward to getting feedback from readers.

From Leo Szilard to the Tasmanian wilderness

Richard Flanagan is an esteemed Australian writer. My son recently gave our family a copy of Flanagan's recent book, Question 7. It is...