Showing posts with label big questions. Show all posts

Tuesday, April 28, 2026

A mystery about science is that humans can do it

We are surrounded by scientific knowledge and have become so used to it that we often take science for granted. We may rarely reflect on the amazing revelations of science—and so miss the opportunity to recognize the awesome nature of the universe. Things that we know, learn, and do today in science would have been inconceivable decades, let alone centuries, ago. 

Einstein said, “The most incomprehensible thing about the universe is that it is comprehensible.”  For Einstein, the success of science was a wonderful mystery. As he wrote to his friend Maurice Solovine: 

. . . I consider the comprehensibility of the world (to the extent that we are authorized to speak of such a comprehensibility) as a miracle or as an eternal mystery. Well, a priori, one should expect a chaotic world, which cannot be grasped by the mind in any way . . . the kind of order created by Newton’s theory of gravitation, for example, is wholly different.  

There are several dimensions to the comprehensibility of the universe being mysterious. Einstein highlighted the first mystery, which is that there is order in the world, as reflected in scientific laws, such as Newton’s theory of gravity, and that this order can be succinctly stated in the language of mathematics. To the best of our knowledge, these laws hold for all time and everywhere in the universe. The existence of the orderly behaviour encoded in scientific laws is necessary for science to work, which leads to the second mystery. Why have we been able to discover these laws?

A second dimension that makes science possible is the intellectual abilities of humans. Humans not only have the rational ability to do science—to reason, to understand, to communicate—but also the ability to design instruments, such as telescopes and microscopes. There seems to be a connection between the rationality of the universe and human rationality. The idea that there may be harmony between the structures of the universe and those of the human mind has a long history.  In the Renaissance, it was encapsulated in the metaphor of the “music of the spheres”. In his book, Harmonies of the World (1619), Johannes Kepler connected music and his explanations of planetary orbits. Einstein said that “Mozart’s music is so pure and beautiful that I see it as a reflection of the inner beauty of the universe.” 

Humans might have been different. Suppose that the average human intelligence was lower than it is today, and the variation of human intelligence was smaller. Then, there might have been no Galileo, Isaac Newton, Robert Boyle, Charles Darwin, Albert Einstein, Richard Feynman, Phil Anderson, or Linus Pauling. Without these brilliant figures in scientific history, scientific progress would have been slow. 

The third dimension is that human language enables scientists to formulate, represent, and communicate ideas, theories, and the results of scientific experiments. This language sometimes involves mathematics, graphs, or tables of data. Scientists can understand one another. Even though there can be misunderstandings, these can be resolved. There is a scientific culture that transcends the diversity of cultures associated with different countries, linguistic groups, and ethnicities.

The fourth dimension is the physical dexterity of humans. I am a theoretical physicist not an experimental physicist. I am “all thumbs” and not particularly good in the lab. Consequently, I have done no laboratory work since I was a Ph.D. student. In contrast, some gifted scientists have an ability to do things in a laboratory that most people cannot. Their manual dexterity allows them to fabricate precision instruments, grow pure crystals, blow exquisite glassware, see faint images, and fine-tune electronic instruments in extraordinary ways. If some humans did not have such amazing abilities, scientific progress would have been much slower—or possibly non-existent.

A fifth dimension that makes science possible is the availability and processability of materials that have been central to scientific progress. Making instruments requires specific materials, such as metals, glass, rubber, insulators, plastics, and semiconductors. If we lived in a world where some of these materials were very rare or could not be processed to the purity or malleability required for scientific instruments, we would not have supercomputers, electron microscopes, or the James Webb Space Telescope today. We might be struggling to make even the simple telescopes used by Galileo.

These five dimensions are all required for humans to be able to do science. There are several additional mysteries of science.  These can be divided into two classes: what science can do and what we can learn about the universe from science. Science allows us to know certain things about reality (epistemology) and also to understand the nature of that reality (ontology). In other words, science helps us make maps of physical reality. The terrain represented by those maps is amazing. And the fact that we can make the maps is amazing.

Saturday, June 17, 2023

Why do deep learning algorithms work so well?

I am interested in analogies between cognitive science and artificial intelligence. Emergent phenomena occur in both, there has been some fruitful cross-fertilisation of ideas, and the extent of the analogies is relevant to debates on fundamental questions concerning human consciousness.

Given my general ignorance and confusion on some of the basics of neural networks, AI, and deep learning, I am looking for useful and understandable resources.

Related questions are explored in a nice informative article from 2017 in Quanta magazine, New Theory Cracks Open the Black Box of Deep Learning by Natalie Wolchover.

Like a brain, a deep neural network has layers of neurons — artificial ones that are figments of computer memory. When a neuron fires, it sends signals to connected neurons in the layer above. During deep learning, connections in the network are strengthened or weakened as needed to make the system better at sending signals from input data — the pixels of a photo of a dog, for instance — up through the layers to neurons associated with the right high-level concepts, such as “dog.” 

After a deep neural network has “learned” from thousands of sample dog photos, it can identify dogs in new photos as accurately as people can. The magic leap from special cases to general concepts during learning gives deep neural networks their power, just as it underlies human reasoning, creativity and the other faculties collectively termed “intelligence.” 

Experts wonder what it is about deep learning that enables generalization — and to what extent brains apprehend reality in the same way.
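The layered picture described above can be made concrete in a few lines. Below is a minimal NumPy sketch (my own toy illustration, not code from the article) of a two-layer network whose connections are strengthened or weakened by gradient descent until it labels the XOR problem correctly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, a task a single layer of connections cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights ("connections") initialized at random.
W1 = rng.normal(0, 1, (2, 8))
W2 = rng.normal(0, 1, (8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
losses = []
for step in range(5000):
    # Forward pass: signals flow up through the layers.
    h = sigmoid(X @ W1)        # hidden-layer activations
    out = sigmoid(h @ W2)      # output "concept" neuron
    losses.append(float(np.mean((out - y) ** 2)))

    # Backward pass: strengthen/weaken connections to reduce the error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The "learning rule" here is nothing but gradient descent on a squared error; the magic of generalization only appears at much larger scale.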

The article describes work by Naftali Tishby and collaborators that provides some insight into why deep learning methods work so well. This was first described in purely theoretical terms in a 2000 preprint

The information bottleneck method, Naftali Tishby, Fernando C. Pereira, William Bialek 

The idea is that a network rids noisy input data of extraneous details as if by squeezing the information through a bottleneck, retaining only the features most relevant to general concepts.
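For discrete variables the bottleneck can be computed directly with the self-consistent iterations given in the Tishby-Pereira-Bialek paper. Here is a sketch on a hypothetical tiny joint distribution p(x, y) (the specific numbers are mine); the final check is the data-processing inequality I(T;Y) ≤ I(X;Y), which any valid compression T must satisfy:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical joint distribution p(x, y): x in {0,1,2,3}, y in {0, 1}.
# x in {0,1} mostly predicts y = 0; x in {2,3} mostly predicts y = 1.
p_xy = np.array([[0.20, 0.05],
                 [0.20, 0.05],
                 [0.05, 0.20],
                 [0.05, 0.20]])
p_x = p_xy.sum(axis=1)
p_y_given_x = p_xy / p_x[:, None]

def mutual_info(joint):
    """Mutual information in bits of a joint distribution over (a, b)."""
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float((joint[mask] * np.log2(joint[mask] / (pa @ pb)[mask])).sum())

n_t, beta = 2, 20.0                                # bottleneck size, trade-off
q_t_given_x = rng.dirichlet(np.ones(n_t), size=4)  # random soft clustering

for _ in range(200):
    q_t = p_x @ q_t_given_x
    # Bayes: p(y|t) = sum_x p(t|x) p(x) p(y|x) / p(t)
    q_y_given_t = (q_t_given_x * p_x[:, None]).T @ p_y_given_x / q_t[:, None]
    # Self-consistent update: p(t|x) ~ p(t) exp(-beta * KL(p(y|x) || p(y|t)))
    kl = (p_y_given_x[:, None, :]
          * np.log(p_y_given_x[:, None, :] / q_y_given_t[None, :, :])).sum(axis=2)
    q_t_given_x = q_t * np.exp(-beta * kl)
    q_t_given_x /= q_t_given_x.sum(axis=1, keepdims=True)

joint_ty = (q_t_given_x * p_x[:, None]).T @ p_y_given_x  # p(t, y)
i_xy, i_ty = mutual_info(p_xy), mutual_info(joint_ty)
print(f"I(X;Y) = {i_xy:.3f} bits, I(T;Y) = {i_ty:.3f} bits")
```

With a large trade-off parameter beta, the two-valued bottleneck variable T retains nearly all of the information X carries about Y while discarding the rest.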

Tishby was stimulated in new directions in 2014 after reading a surprising paper by the physicists David Schwab and Pankaj Mehta,

An exact mapping between the Variational Renormalization Group and Deep Learning

[They] discovered that a deep-learning algorithm invented by Geoffrey Hinton called the “deep belief net” works, in a particular case, exactly like renormalization [group methods in statistical physics]... When they applied the deep belief net to a model of a magnet at its “critical point,” where the system is fractal, or self-similar at every scale, they found that the network automatically used the renormalization-like procedure to discover the model’s state. 

Although this connection was a valuable new insight, the specific case of a scale-free system is not relevant to many deep-learning situations.
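For readers unfamiliar with renormalization, its basic move is coarse-graining: each block of spins is replaced by a single block spin, here by majority rule. The following is only a sketch of that one step on a random spin configuration (my illustration, not the Schwab-Mehta mapping itself):

```python
import numpy as np

def block_spin(config, b=2):
    """One real-space RG step: majority rule on b x b blocks of +/-1 spins."""
    n = config.shape[0]
    assert n % b == 0
    # Sum the spins within each b x b block.
    blocks = config.reshape(n // b, b, n // b, b).sum(axis=(1, 3))
    # Majority rule; break ties in favour of +1.
    return np.where(blocks >= 0, 1, -1)

rng = np.random.default_rng(2)
spins = rng.choice([-1, 1], size=(8, 8))  # a random 8x8 Ising configuration
coarse = block_spin(spins)                # the 4x4 coarse-grained configuration
print(coarse.shape)
```

Iterating this step and asking how the effective couplings flow is the essence of real-space renormalization; the analogy is that successive layers of a deep network also discard fine-grained detail.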

Tishby and Ravid Shwartz-Ziv discovered that 

Over the course of training, common patterns in the training data become reflected in the strengths of the connections, and the network becomes expert at correctly labeling the data, such as by recognizing a dog, a word, or a 1.

...layer by layer, the networks converged to the information bottleneck theoretical bound: a theoretical limit derived in Tishby, Pereira and Bialek’s original paper that represents the absolute best the system can do at extracting relevant information. At the bound, the network has compressed the input as much as possible without sacrificing the ability to accurately predict its label...

...deep learning proceeds in two phases: a short “fitting” phase, during which the network learns to label its training data, and a much longer “compression” phase, during which it becomes good at generalization, as measured by its performance at labeling new test data.

What these new discoveries teach us about the relationship between learning in humans and in machines is contentious and explored briefly in the article. Although neural nets were inspired by the structure of the human brain, the connection with the neural nets used today is tenuous.

The mystery of how brains sift signals from our senses and elevate them to the level of our conscious awareness drove much of the early interest in deep neural networks among AI pioneers, who hoped to reverse-engineer the brain’s learning rules. AI practitioners have since largely abandoned that path in the mad dash for technological progress, instead slapping on bells and whistles that boost performance with little regard for biological plausibility.

Friday, February 10, 2023

Different dimensions to emergence for specific scientific disciplines

Emergence is a concept relevant to a wide range of scientific disciplines, from physics to sociology. Emergence is also at the heart of some of the biggest questions and challenges in each discipline. How might I justify that claim? How do we move beyond "emergence" just being a trendy buzzword?

Here I suggest some different facets of a specific discipline that, viewed with an emergent perspective, may help us to understand the discipline and to plan scientific strategy. This post will be primarily descriptive and the next prescriptive. Later I will illustrate both aspects with specific disciplines. Although some of the facets below may be somewhat obvious, others are profound. 

Presence of distinct scales. Scales may involve length, time, or number of components in a system of interest. Different phenomena are observed at different scales.

Stratification and separation of scales. Distinct phenomena are usually seen over some range of scales, and a distinct stratum can be associated with that scale. 




Sub-disciplines (or sub-fields) are associated with each stratum. The discipline can be viewed as stratified. For example, biology has sub-disciplines associated with ecosystems, organisms (animals and plants), organs, cells, genes, and molecules. This is nicely captured in a series of articles in The Economist.

The system can be viewed as interacting components. The system of interest is composed of many parts. Identifying the relevant components and their interactions may be non-trivial or at least was in the past. For example, consider the discovery of atoms in chemistry, quarks in nuclear physics, Cooper pairs in superconductivity, and DNA in genetics.

Emergent properties. Systems of interest have distinct properties that the components of the system do not. These properties may have certain characteristics such as universality, irreducibility, or unpredictability.

Emergent entities. These distinct entities can only be defined at certain scales and emerge from interactions between components that are defined at some smaller scale. In biology, emergent entities include organisms, organs, cells, genes, and proteins. In condensed matter physics emergent entities include quasiparticles and topological defects.

Emergent phenomena. This is closely related to emergent properties and may be redundant. But a property is something that a system has and a phenomenon is something that it does. 

Different experimental probes for different scales. For example, for condensed matter different types of electromagnetic radiation from x-rays to microwaves are used to investigate a material at different length scales. The nature of the instruments used and the type and quality of information gained can be quite different for the different scales.
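The match between probe and length scale follows from the relation λ = hc/E between a photon's wavelength and energy. A quick sketch with the standard (rounded) value of hc:

```python
# Photon wavelength from energy: lambda = h*c / E, with h*c ~ 1239.84 eV nm.
HC_EV_NM = 1239.84

def wavelength_nm(energy_ev):
    """Photon wavelength in nanometres for a given energy in eV."""
    return HC_EV_NM / energy_ev

# A 10 keV x-ray resolves atomic-scale structure (~0.1 nm),
# while a ~10 micro-eV microwave photon probes centimetre scales.
print(f"10 keV x-ray photon:   {wavelength_nm(1.0e4):.3f} nm")
print(f"10 ueV microwave photon: {wavelength_nm(1.0e-5) / 1e7:.1f} cm")
```

Eight orders of magnitude in photon energy translate directly into eight orders of magnitude in the length scale probed.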

Simple theoretical models of interacting components.  From the perspective of the smallest scales most systems with emergent properties are complex in that they involve many degrees of freedom and so large amounts of information and parameters are required to define the state of the system. The system may also be complex in the sense that the emergent properties are non-trivial and hard to describe theoretically. But with insight simple models with just a few parameters and state variables can exhibit and describe the emergent properties. Examples of such models in condensed matter physics include Ising, Hubbard, and non-linear sigma models. Examples from sociology include agent-based models such as the Schelling model for racial segregation. Simple models can be viewed as effective theories, valid at a particular scale, and can illustrate universality.
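As a concrete instance of a few-parameter model with emergent properties, here is a minimal sketch of the Schelling model (my own toy implementation, with a single threshold parameter): agents move to empty cells whenever too few of their neighbours share their type, and mild individual preferences produce strong collective segregation.

```python
import numpy as np

rng = np.random.default_rng(3)
N, THRESHOLD = 20, 0.4        # grid size; minimum fraction of like neighbours

# 0 = empty cell; 1 and 2 = the two types of agents (10% of cells empty).
grid = rng.choice([0, 1, 2], size=(N, N), p=[0.1, 0.45, 0.45])

def like_fraction(g, i, j):
    """Fraction of occupied neighbours sharing the type of cell (i, j)."""
    nbrs = [g[(i + di) % N, (j + dj) % N]
            for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
    occupied = [n for n in nbrs if n != 0]
    if not occupied:
        return 1.0
    return sum(n == g[i, j] for n in occupied) / len(occupied)

def mean_similarity(g):
    vals = [like_fraction(g, i, j)
            for i in range(N) for j in range(N) if g[i, j] != 0]
    return sum(vals) / len(vals)

before = mean_similarity(grid)
for _ in range(30):                       # relaxation sweeps
    unhappy = [(i, j) for i in range(N) for j in range(N)
               if grid[i, j] != 0 and like_fraction(grid, i, j) < THRESHOLD]
    empties = [(i, j) for i in range(N) for j in range(N) if grid[i, j] == 0]
    if not unhappy or not empties:
        break
    rng.shuffle(unhappy)
    for (i, j) in unhappy:                # each unhappy agent moves at random
        k = rng.integers(len(empties))
        (ei, ej) = empties[k]
        grid[ei, ej], grid[i, j] = grid[i, j], 0
        empties[k] = (i, j)               # the vacated cell becomes empty
after = mean_similarity(grid)
print(f"mean like-neighbour fraction: {before:.2f} -> {after:.2f}")
```

Even though each agent tolerates being in a local minority (threshold 0.4), the emergent state is far more segregated than any individual demands.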

Organising principles and concepts at each scale. The principles and concepts are only meaningful and relevant at a particular scale. An example from condensed matter physics and elementary particle physics is spontaneous symmetry breaking.

In another post, I will discuss how an emergentist perspective plays out in scientific strategy.

Wednesday, January 18, 2023

Some amazing things about the universe that make science possible

 This post takes off from the following Einstein quotes.

"The most incomprehensible thing about the universe is that it is comprehensible"

from "Physics and Reality"(1936), in Ideas and Opinions, trans. Sonja Bargmann (New York: Bonanza, 1954), p292.

"...I consider the comprehensibility of the world (to the extent that we are authorized to speak of such a comprehensibility) as a miracle or as an eternal mystery. Well, a priori, one should expect a chaotic world, which cannot be grasped by the mind in any way ... the kind of order created by Newton's theory of gravitation, for example, is wholly different." 

Letters to Solovine, New York, Philosophical Library, 1987, p 131.

There are several dimensions to the comprehensibility of the universe. The dimension highlighted by Einstein is that there is order in the world, reflected in laws that can be succinctly stated and mathematically encoded. These laws seem to hold for all time and everywhere in the universe. Here I suggest there are three other dimensions that make science possible. 

A second amazing dimension is that humans have the rational ability to do science: to reason, to understand, to communicate, and to make instruments such as telescopes and microscopes. There seems to be somewhat of a match between the rationality of the universe and human rationality. This is written in the spirit of arguments about fine-tuning, where one imagines alternative universes.

Humans could have been different. Suppose that the amount and variation of human intelligence (at least that aspect of intelligence relevant to doing science) were different, with a lower mean and standard deviation. Suppose that intelligence was lower so that there were no brilliant humans like Darwin, Einstein, Newton, Pauling, ... In fact, suppose that even the brightest people were as good at science as I am at music and dancing. Scientific progress would be rather limited.

But it is not just human intelligence that matters. A third amazing dimension is that of manual dexterity. I am "all thumbs" and not particularly good in the lab. There are some gifted experimentalists with an outstanding ability to do things most people cannot, even with training. Such abilities allow them to fabricate precision instruments, grow crystals, see faint images, ... If some humans did not have such abilities scientific progress would have been much slower, or possibly non-existent.

A fourth crucial dimension concerns the availability and processability of certain materials that are central to scientific progress. Making instruments requires particular materials such as metals, glass, and semiconductors. Suppose we lived in a world where some of these were very rare or just could not be processed to the purity or malleability required.

Friday, April 8, 2022

Why is there so much symmetry in biological systems?

 One of the biggest questions in biology is, What is the relationship between genotypes and phenotypes? In different words, how does a specific gene (DNA sequence) encode information that allows a very specific biological structure with a unique function to emerge?

Like big questions in many fields, this is a question about emergence.

In biology, this mapping from genotype to phenotype occurs at many levels from protein structure to human personality. An example is how the RNA encodes the structure of a SARS-CoV-2 virion.

A fascinating thing about biological structures is that many have a certain amount of symmetry. The human body has reflection symmetry and many virions have icosahedral symmetry. What is the origin of this tendency to symmetry? Could evolution produce it?

Scientists will sometimes make statements such as the following about evolution.

Symmetric structures preferentially arise not just due to natural selection but also because they require less specific information to encode and are therefore much more likely to appear as phenotypic variation through random mutations.

How do we know this is true? Can such a statement be falsified? Or at least, can we produce concrete models or biological systems that are consistent with this statement?

There is a fascinating paper in PNAS that addresses the questions above.

Symmetry and simplicity spontaneously emerge from the algorithmic nature of evolution 
Iain G. Johnston, Kamaludin Dingle, Sam F. Greenbury, Chico Q. Camargo, Jonathan P. K. Doye, Sebastian E. Ahnert, and Ard A. Louis 

Here are a few highlights from the article. First, how one gets specific about information content and algorithms.
Genetic mutations are random in the sense that they occur independently of the phenotypic variation they produce. This does not, however, mean that the probability P(p) that a Genotype-Phenotype [GP] map produces a phenotype p upon random sampling of genotypes will be anything like a uniformly random distribution. 
Instead, ... arguments based on the coding theorem of algorithmic information theory (AIT) (7) predict that the P(p) of many GP maps should be highly biased toward phenotypes with low Kolmogorov complexity K(p) (8). 
High symmetry can, in turn, be linked to low K(p) (6, 9–11). An intuitive explanation for this algorithmic bias toward symmetry proceeds in two steps: 
1) Symmetric phenotypes typically need less information to encode algorithmically, due to repetition of subunits. This higher compressibility reduces constraints on genotypes, implying that more genotypes will map to simpler, more symmetric phenotypes than to more complex asymmetric ones (2, 3). 
2) Upon random mutations these symmetric phenotypes are much more likely to arise as potential variation (12, 13), so that a strong bias toward symmetry may emerge even without natural selection for symmetry.
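A toy genotype-phenotype map (my own construction, not one from the paper) makes step 1 concrete. When one genotype bit switches on a "tile and repeat" mode, a symmetric phenotype needs fewer genotype bits, the unused bits are free to vary, and far more genotypes therefore map to each symmetric phenotype:

```python
from collections import Counter
from itertools import product

def develop(genotype):
    """Toy GP map: 10 genotype bits -> an 8-bit phenotype.

    If the first bit is 1, the phenotype is a 4-bit tile repeated twice
    (a "symmetric" phenotype; the 5 trailing bits are ignored). Otherwise
    the phenotype copies bits 1-8 directly (1 trailing bit ignored).
    """
    if genotype[0] == 1:
        tile = genotype[1:5]
        return tuple(tile + tile)
    return tuple(genotype[1:9])

# Exhaustively develop all 2^10 = 1024 genotypes.
counts = Counter(develop(g) for g in product([0, 1], repeat=10))

def is_symmetric(p):
    return p[:4] == p[4:]

sym = [n for p, n in counts.items() if is_symmetric(p)]
asym = [n for p, n in counts.items() if not is_symmetric(p)]
print(f"mean genotypes per symmetric phenotype:  {sum(sym) / len(sym):.1f}")
print(f"mean genotypes per asymmetric phenotype: {sum(asym) / len(asym):.1f}")
```

Each symmetric phenotype attracts 34 genotypes versus 2 for each asymmetric one, so random mutations land on symmetric phenotypes far more often, with no selection for symmetry anywhere in the map.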
The authors consider several concrete models and biological systems that illustrate this bias toward symmetry. The first involves the structure of protein complexes, as given in the Protein Data Bank (PDB).


(A) Protein complexes self-assemble from individual units. 

(B) Frequency of 6-mer protein complex topologies found in the PDB versus the number of interface types, a measure of complexity K̃(p). Symmetry groups are in standard Schoenflies notation: C6, D3, C3, C2, and C1. There is a strong preference for low-complexity/high-symmetry structures. 

(C) Histograms of scaled frequencies of symmetries for 6-mer topologies found in the PDB (dark red) versus the frequencies by symmetry of the morphospace of all possible 6-mers illustrate that symmetric structures are hugely overrepresented in the PDB database. 

Note the logarithmic scales for the probabilities (frequencies), meaning that the probabilities span four orders of magnitude. The authors claim that "many genotype–phenotype maps are exponentially biased toward phenotypes with low descriptional complexity."
This intuition that simpler outputs are more likely to appear upon random inputs into a computer programming language can be precisely quantified in the field of AIT (7), where the Kolmogorov complexity K(p) of a string p is formally defined as the length of a shortest program that generates p on a suitably chosen universal Turing machine (UTM). 

From AIT the authors produce a bound (Eq. 1 below) that exhibits the exponential decay of probability with complexity, similar to that seen in their graphs, such as the one described below, for a model gene regulatory network that is described by 60 ordinary differential equations (ODEs). The red dashed line in that graph is the bound.

P(p) ≤ 2^(−a K̃(p) + b).  [1]


Scaled frequency vs. complexity for the budding yeast ODE cell cycle model (30). Phenotypes are grouped by complexity of the time output of the key CLB2/SIC1 complex concentration. Higher frequency means a larger fraction of parameters generate this time curve. The red circle denotes the wild-type phenotype, which is one of the simplest and most likely phenotypes to appear. The dashed line shows a possible upper bound from Eq. 1. There is a clear bias toward low-complexity outputs.

One minor comment is that I was surprised that the authors did not reference the classic 1956 paper by Crick and Watson. They introduced the concept of "genetic economy". Prior to any knowledge of the actual structure of virions, they predicted that virions would have icosahedral symmetry because that reduced the cost of the genome coding for the structure of the virion.

Hence, it would be interesting to explore the relationship between the PNAS paper and this one.
There is a nice New York Times article about the PNAS paper. I thank Sophie van Houtryve for bringing it to my attention, which led me to the PNAS paper.

Thursday, August 19, 2021

Einstein on big questions

The mere formulation of a problem is far more essential than its solution, which may be merely a matter of mathematical or experimental skills.

To raise new questions, new possibilities, to regard old problems from a new angle, requires creative imagination and marks real advance in science.

I am enough of an artist to draw freely upon my imagination. Imagination is more important than knowledge. Knowledge is limited. Imagination encircles the world.

Albert Einstein and Leopold Infeld (1938), The Evolution of Physics

I recently encountered this quotation in The Poetry and Music of Science: Comparing Creativity in Science and Art by Tom McLeish. I have heard many times the "Imagination is more important than knowledge" quote, sometimes as a dubious justification for dubious ideas. However, I did not know the context. 

My postdoctoral advisor, John Wilkins, tried to drill into me the idea in the first paragraph: that just coming up with a well-defined formulation of a problem can be a significant advance. This idea certainly had some impact on me, since I sometimes hear my non-scientist wife quote it!

On reflection, I am afraid that I too easily lose sight of this priority of defining problems, just like the method of multiple alternative hypotheses. Good science is hard.

Why am I reading this article? What question am I trying to answer?

Why am I writing this paper? What question am I trying to answer?

What is the problem I am assigning a student to work on? Is it well-formulated?

Defining good research questions is hard work and requires discipline.


Thursday, September 10, 2020

Emergence, surprises, and the future of condensed matter physics

 Where is condensed matter physics heading? Does it have a bright future? What are the big questions the field aims to (and might actually) address? What might we predict?

I need to address these kinds of questions in the last chapter of Condensed Matter Physics: A Very Short Introduction.

Here are three different perspectives.

1. Incremental advances.

We will continue to make advances on many fronts: chemical synthesis, device fabrication, experimental techniques, theory, computation, intellectual synthesis, connections with other disciplines, and technological applications. The basic intellectual structure of the discipline is in place. In the framework of Thomas Kuhn, it is "normal science" and we don't expect any "paradigm shifts." John Horgan provocatively proclaimed a quarter of a century ago that it is The End of Science.

2. Hype.

All of the forthcoming incremental advances will combine together to produce a revolution: materials by design. Suppose we want a material with specific properties, e.g., room temperature superconductivity with a high critical current density, and processible into durable wires.... We put this information into the computer and it will tell us the chemical composition, synthesis method, crystal structure, and material properties.

3. We don't know. Expect big surprises as we explore the endless frontier.

Condensed matter physics is all about emergent phenomena. By definition, emergent phenomena are hard to predict, even when you know many (or all) of the details of the system components and their interactions. They are often surprising. Sometimes we can explain (or at least rationalise) them a posteriori (after the fact), but rarely a priori (beforehand).

Just consider some of the long list of exotica from the past four decades: quantum Hall effects, many new classes of superconductors (heavy fermion, organic, cuprate, iron-based, buckyballs, cobaltates, ...), non-Fermi liquid metals, topological insulators, graphene, twisted graphene, colossal magnetoresistance, spin ices, macroscopic quantum tunneling in magnets, superconducting qubits, ... Note that almost all of these were experimental discoveries first. Theorists may have had some inklings and broad suggestions of what to look for and where. However, that is quite different from there being consensus and expectation. For example, compare and contrast these discoveries with the experimental discovery of the Higgs boson. It really wasn't that surprising: there was a strong consensus among theorists, both that it would be there and what specific properties it would have.

Perhaps serendipity remains the best method of discovery.

What's next? Who knows?!

All I am game to predict is that CMP will continue to be an exciting discipline with many surprises and intellectual challenges.

What do you think?

Friday, February 1, 2019

My biggest questions about spin crossover compounds

Most of the questions are inter-related. Most have been discussed in earlier posts.

How do we tune physical properties (e.g. hysteresis width) by varying chemical composition?

How do we understand two-step transitions? Are they associated with spatially inhomogeneous arrangements of the spin?

Are spin ice phases possible?

What is the physical origin of the intermolecular interactions that lead to a first-order transition?
Is it electronic (magnetic) and/or elastic?
Are there long-range interactions? Are they crucial?

Is there a simple way to understand the change in vibrational spectra (and thus entropy) associated with the transition?

What is the role of spatial anisotropy?

What is the simplest possible effective model Hamiltonian that captures the physical properties above?
Can the elastic degrees of freedom be "integrated out" to give a "simple" Ising model?
How do the model parameters depend on structural and chemical composition?
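On the last pair of questions, a standard minimal starting point is a Slichter-Drickamer-style mean-field model, which is equivalent to an infinite-range Ising model in a temperature-dependent field. Here is a sketch with hypothetical parameter values (chosen so that T_1/2 = ΔH/ΔS = 250 K):

```python
import numpy as np

R = 8.314        # gas constant, J/(mol K)
DH = 20_000.0    # enthalpy difference HS - LS, J/mol (hypothetical value)
DS = 80.0        # entropy difference HS - LS, J/(mol K) (hypothetical value)
GAMMA = 5_000.0  # intermolecular interaction, J/mol (hypothetical value)

def hs_fraction(T):
    """Equilibrium high-spin fraction x, found by minimizing the free energy
    G(x) = (DH - T*DS)*x + GAMMA*x*(1-x) + R*T*(x ln x + (1-x) ln(1-x))."""
    x = np.linspace(1e-6, 1 - 1e-6, 4001)
    G = ((DH - T * DS) * x + GAMMA * x * (1 - x)
         + R * T * (x * np.log(x) + (1 - x) * np.log(1 - x)))
    return float(x[np.argmin(G)])

for T in (100, 250, 400):
    print(f"T = {T} K: high-spin fraction = {hs_fraction(T):.3f}")
```

With GAMMA larger than 2*R*T_1/2 the free energy develops two minima near the transition, which is the mean-field route to a first-order transition and hysteresis; mapping such a model onto an Ising model after integrating out elastic degrees of freedom is exactly the open question above.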

Saturday, June 23, 2018

The discipline of defining good research questions

I have a friend who works in a small college that offers Masters degrees in the humanities. In one program each student must do a thesis on a research topic over the course of a year. My friend spends a lot of time with the students, both individually and as a group, posing and refining a single question for each of their research projects. Last year while visiting I observed one of these sessions and also had some discussions with individual students about their questions.

The stages are roughly this.

1. The student picks a specific research topic.
2. The student proposes a specific question about the topic that they will aim to answer.
3. The student meets with their advisor to refine the question. Often this involves making it more specific and narrow so that it is manageable.
4. The student presents their question to the class (often about five students) who then discuss it and try and refine it further.
5. With this feedback the student again refines it.
6. The student meets their advisor for a final discussion and agreement about the question.
7. The student starts research.

The questions can start with How, What, When, or Why?
Often, Why is preferred, because it may mean going deeper.

Several things struck me about this practice, particularly seeing it first hand.
First, how valuable it was in terms of ending up with questions that were more interesting, precise, valuable, and manageable.
Second, how valuable this was for the students in terms of learning to think more critically.
Third, how little I think we do this in science.

I don't think the key thing here is that it is a humanities practice. Rather, I think it is that the complete ethos of the college is teaching and training students.

My experience is that we tend to just pick topics for students and suggest they measure or calculate something and see what happens. We may mention a question but we don't refine it or keep coming back to it. Similar concerns apply to many grant applications. It is often not clear whether they are really aiming to provide definitive answers to any questions. I think that there are two big obstacles to us following this procedure: it is hard work and the "publish or perish" culture.

Some of this relates to the challenges of falsifiability and the method of multiple alternative hypotheses.

One (maybe) obvious caveat. Although one starts with this question, as the research proceeds, one may choose to or need to modify the question as one learns more.

What do you think?
Is this something we could be doing better?

Friday, December 8, 2017

Four distinct responses to the cosmological constant problem

One of the biggest problems in theoretical physics is to explain why the cosmological constant has the value that it does.
There are two aspects to the problem.
The first problem is that the value is so small, 120 orders of magnitude smaller than what one estimates based on the quantum vacuum energy!
The second problem is that the value seems to be finely tuned (to 120 significant figures!) to the value of the mass-energy density.
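The mismatch can be checked on the back of an envelope: energy densities scale as E^4 in natural units, the naive vacuum estimate cuts off at the Planck energy (~1.22 × 10^19 GeV), and the observed dark-energy scale is ~2.3 meV. The sketch below reproduces roughly the famous ~120 orders of magnitude:

```python
import math

E_PLANCK_EV = 1.22e28  # Planck energy in eV
E_DARK_EV = 2.3e-3     # observed dark-energy scale in eV (density ~ E^4)

# Ratio of the naive vacuum energy density to the observed value.
ratio = (E_PLANCK_EV / E_DARK_EV) ** 4
print(f"vacuum / observed ~ 10^{math.log10(ratio):.0f}")
```

This is only an order-of-magnitude sketch; the exact exponent depends on the cutoff chosen, but any reasonable choice leaves a discrepancy of a hundred-plus orders of magnitude.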

The problems and proposed (unsuccessful) solutions are nicely reviewed in an article written in 2000 by Steven Weinberg.

There seem to be four distinct responses to this problem.

1. Traditional scientific optimism.
A yet to be discovered theory will explain all this.

2. Fatalism. 
That is just the way things are. We will never understand it.

3. Teleology and Design.
God made it this way.

4. The Multiverse.
This finely tuned value is just an accident. Our universe is one of zillions of possible universes, each with different fundamental constants.

It is amazing how radical 2, 3, and 4 are.

I have benefited from some helpful discussions about this with Robert Mann. There is a YouTube video where we discuss the multiverse. Some people love the video. Others think it is incredibly boring. I think we are both too soft on the multiverse.

Tuesday, November 7, 2017

Social qualities emerge from multiple interactions at multiple scales

Different qualities are used to describe and characterise societies: civil, fair, intolerant, racist, corrupt, free,  ….

Two big questions are:

How does a society make a transition from a bad quality to a good one?

What kind of initiatives can induce changes?

Initiatives can be individual or collective, political or economic, local or national, ...

For example, how does one reduce corruption, which is endemic in many Majority World countries?
Or in the USA, why is public debate losing civility?

I think it is helpful to acknowledge the complexity of these issues. They have some similarity to wicked problems. They are problems that involve multiple interactions at multiple scales. Some of these interactions are competing and frustrated (in the spin glass sense!) and initiatives can lead to unintended consequences.

Whether you look at societies from a sociological, cultural, geographical, political, or economic perspective they involve multiple scales. For example, at the political level, one goes from local to city to state to national governments to the United Nations. In some countries corruption (bribes, extortion, nepotism, tax evasion,…) occurs at all levels. A policeman demands a bribe for a traffic violation. A university administrator changes records so his nephew, a mediocre student, can be admitted to medical school. The president of the country moves millions of dollars in foreign aid money into an off-shore bank account….
These phenomena occur at multiple scales and involve multiple interactions. For example, an individual citizen interacts with many levels of government and with government agencies, and each of these interactions may involve or be affected by corruption.

Civility (respect, graciousness, politeness, listening) or incivility (disrespect, rudeness, contempt, shouting) also occurs at many levels, ranging from everyday conversations and comments on Facebook to debates in parliament and the Twitter feed of the President of the USA.

Michel Foucault is one of the most influential (for better or worse) twentieth-century scholars in the humanities. He is particularly well known for arguing that power operates at many levels and in many different ways in societies.

I find a multi-scale perspective helpful because it undercuts two extreme but common views concerning how we address significant social problems.
One view is the “top-down” perspective that if we just have the right national leader and the right laws, a problem will be solved. This is argued for issues ranging from corruption to sexual harassment to “hate speech”.
The other extreme is the “bottom-up” view that the problem can be solved by individuals just making the right choices. Each individual should be polite to others and not give or take bribes. We need both approaches.

Moreover, I believe we need initiatives at all levels and interactions.
The importance of intermediate scales (and the associated concept of social capital), and the consequences of their erosion, was highlighted in Bowling Alone: The Collapse and Revival of American Community, by the Harvard political scientist Robert D. Putnam.
An example of a multi-scale perspective is in the Oxfam book, From Poverty to Power: How active citizens and effective states can change the world.

A question that is both practically important and intellectually fascinating is:

What are the critical parameters and their values at which a society undergoes a “phase transition”?

Such a question is addressed in
The Epidemics of Corruption 
Ph. Blanchard, A. Krueger, T. Krueger, P. Martin



The figure is from a paper, Small-World Networks of Corruption.
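The idea of a "phase transition" in corruption can be illustrated with a toy model (this is a minimal sketch of my own, not the actual model of Blanchard et al.): run SIS-style contagion dynamics on a Watts–Strogatz-type small-world network. Below a critical spreading rate corruption dies out; above it, corruption becomes endemic. All parameter values here are illustrative assumptions.

```python
import random

def small_world_graph(n, k, p, rng):
    """Ring lattice of n nodes, each linked to its k nearest neighbours
    on each side, with each edge rewired with probability p."""
    neighbours = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            a, b = i, (i + j) % n
            if rng.random() < p:           # rewire to a random node
                b = rng.randrange(n)
                while b == a or b in neighbours[a]:
                    b = rng.randrange(n)
            neighbours[a].add(b)
            neighbours[b].add(a)
    return neighbours

def corruption_spread(graph, beta, gamma, steps, rng):
    """SIS-style dynamics: each step, a corrupt node corrupts each honest
    neighbour with probability beta, and reforms with probability gamma.
    Returns the final fraction of corrupt nodes."""
    n = len(graph)
    corrupt = {rng.randrange(n)}           # a single corrupt seed
    for _ in range(steps):
        new = set(corrupt)
        for node in corrupt:
            for nb in graph[node]:
                if nb not in corrupt and rng.random() < beta:
                    new.add(nb)
            if rng.random() < gamma:
                new.discard(node)          # node reforms
        corrupt = new
    return len(corrupt) / n

rng = random.Random(42)
g = small_world_graph(n=500, k=3, p=0.1, rng=rng)
low = corruption_spread(g, beta=0.02, gamma=0.5, steps=200, rng=rng)
high = corruption_spread(g, beta=0.30, gamma=0.1, steps=200, rng=rng)
print(f"final corrupt fraction: weak spreading {low:.2f}, strong spreading {high:.2f}")
```

The critical parameter is essentially the ratio of the spreading rate to the reform rate times the average number of neighbours; tuning it through its critical value switches the final state from honest to endemically corrupt.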

Monday, April 24, 2017

Have universities lost sight of the big questions and the big picture?

Here are some biting critiques of some of the "best" research at the "best" universities, by several distinguished scholars.
The large numbers of younger faculty competing for a professorship feel forced to specialize in narrow areas of their discipline and to publish as many papers as possible during the five to ten years before a tenure decision is made. Unfortunately, most of the facts in these reports have neither practical utility nor theoretical significance; they are tiny stones looking for a place in a cathedral. The majority of ‘empirical facts’ in the social sciences have a half-life of about ten years.
Jerome Kagan [Harvard psychologist], The Three Cultures Natural Sciences, Social Sciences, and the Humanities in the 21st Century
[I thank Vinoth Ramachandra for bringing this quote to my attention].
[The distinguished philosopher Alasdair] MacIntyre provides a useful tool to test how far a university has moved to this fragmented condition. He asks whether a wonderful and effective undergraduate teacher who is able to communicate how his or her discipline contributes to an integrated account of things – but whose publishing consists of one original but brilliant article on how to teach – would receive tenure. Or would tenure be granted to a professor who is unable or unwilling to teach undergraduates, preferring to teach only advanced graduate students and engaged in ‘‘cutting-edge research.’’ MacIntyre suggests if the answers to these two inquiries are ‘‘No’’ and ‘‘Yes,’’ you can be sure you are at a university, at least if it is a Catholic university, in need of serious reform. I feel quite confident that MacIntyre learned to put the matter this way by serving on the Appointment, Promotion, and Tenure Committee of Duke University. I am confident that this is the source of his understanding of the increasing subdisciplinary character of fields, because I also served on that committee for seven years. During that time I observed people becoming ‘‘leaders’’ in their fields by making their work so narrow that the ‘‘field’’ consisted of no more than five or six people. We would often hear from the chairs of the departments that they could not understand what the person was doing, but they were sure the person to be considered for tenure was the best ‘‘in his or her field."
Stanley Hauerwas, The State of the University, page 49.

Are these reasonable criticisms of the natural sciences?

Tuesday, March 21, 2017

Emergence frames many of the grand challenges and big questions in universities

What are the big questions that people are (or should be) wrestling with in universities?
What are the grand intellectual challenges, particularly those that interact with society?

Here are a few. A common feature of those I have chosen is that they involve emergence: complex systems consisting of many interacting components produce new entities, and multiple scales (whether of length, time, energy, or the number of entities) are involved.

Economics
How does one go from microeconomics to macroeconomics?
What is the interaction between individual agents and the surrounding economic order?
A recent series of papers (see here and references therein) has looked at how the concept of emergence played a role in the thinking of Friedrich Hayek.

Biology
How does one go from genotype to phenotype?
How do the interactions between many proteins produce a biochemical process in a cell?


The figure above shows a protein interaction network and is taken from this review.

Sociology
How do communities and cultures emerge?
What is the relationship between human agency and social structures?

Public health and epidemics
How do diseases spread and what is the best strategy to stop them?

Computer science
Artificial intelligence.
Recently it was shown how deep learning can be understood in terms of the renormalisation group.

Community development, international aid, and poverty alleviation
I discussed some of the issues in this post.

Intellectual history
How and when do new ideas become "popular" and accepted?

Climate change

Philosophy
How do you define consciousness?

Some of the issues are covered in the popular book, Emergence: The Connected Lives of Ants, Brains, Cities, and Software.
Some of these phenomena are related to the physics of networks, including scale-free networks. The most helpful introduction I have read is a Physics Today article by Mark Newman.
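The key mechanism behind scale-free networks is preferential attachment (the Barabási–Albert model, which Newman's article discusses): new nodes attach preferentially to already well-connected nodes, producing a heavy-tailed degree distribution with a few large hubs. A minimal sketch (the parameter values are illustrative assumptions):

```python
import random

def preferential_attachment(n, m, rng):
    """Grow a network of n nodes: each new node attaches m edges,
    choosing targets with probability proportional to current degree.
    Returns the list of node degrees."""
    # Start from a complete core of m+1 nodes, each with degree m.
    degree = [m] * (m + 1)
    # Pool listing each node with multiplicity equal to its degree,
    # so that rng.choice() samples proportionally to degree.
    pool = [i for i in range(m + 1) for j in range(m + 1) if i != j]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:            # m distinct targets
            targets.add(rng.choice(pool))
        degree.append(m)                   # the new node's degree
        for t in targets:
            degree[t] += 1
            pool.extend([t, new])          # update both endpoints
    return degree

rng = random.Random(0)
deg = preferential_attachment(n=2000, m=3, rng=rng)
mean_deg = sum(deg) / len(deg)
hubs = sum(1 for d in deg if d > 5 * mean_deg)
print(f"mean degree: {mean_deg:.1f}, max degree: {max(deg)}, "
      f"hubs with more than 5x the mean degree: {hubs}")
```

In a random (Poisson) network, nodes with five times the mean degree would be vanishingly rare; here the power-law tail guarantees a population of hubs, which is what makes such networks robust to random failure but fragile to targeted attack.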

Given this common issue of emergence, I think there are some lessons (and possibly techniques) these fields might learn from condensed matter physics. It is arguably the field which has been the most successful at understanding and describing emergent phenomena. I stress that this is not hubris. This success is not because condensed matter theorists are smarter or more capable than people working in other fields. It is because the systems are "simple" enough, and (sometimes) have a clear separation of scales, that they are more amenable to analysis and controlled experiments.

Some of these lessons are "obvious" to condensed matter physicists. However, I don't think they are necessarily accepted by researchers in other fields.

Humility.
These are very hard problems, progress is usually slow, and not all questions can be answered.

The limitations of reductionism.
Trying to model everything by computer simulations which include all the degrees of freedom will lead to limited progress and insight.

Find and embrace the separation of scales.
The renormalisation group provides a method to systematically do this. A recent commentary by Ilya Nemenman highlights some recent progress and the associated challenges.
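A textbook illustration of how the renormalisation group exploits a separation of scales (my own standard example, not taken from the commentary): real-space decimation for the 1D Ising chain. Tracing out every second spin gives the exact recursion tanh(K') = tanh(K)², and iterating shows the effective coupling flowing to zero, i.e. no finite-temperature phase transition in one dimension.

```python
import math

def decimate(K):
    """One RG step for the 1D Ising chain: trace out every second spin.
    Exact recursion for the dimensionless coupling K = J/(k_B T):
    tanh(K') = tanh(K)**2."""
    return math.atanh(math.tanh(K) ** 2)

K = 2.0              # strong initial coupling
flow = [K]
for _ in range(8):   # eight successive coarse-grainings
    K = decimate(K)
    flow.append(K)

print("RG flow of the coupling:", ", ".join(f"{k:.4f}" for k in flow))
# The coupling shrinks monotonically: short-distance detail is eliminated
# and the effective long-distance theory is one of free spins.
```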

The centrality of concepts.

The importance of critically engaging with experiment and data.
They must be the starting and end point. Concepts, models, and theories have to be constrained and tested by reality.

The value of simple models.
They can give significant insight into the essentials of a problem.

What other big questions and grand challenges involve emergence?

Do you think condensed matter [without hubris] can contribute something?

Monday, November 14, 2016

Why are the macroscopic and microscopic related?

Through a nice blog post by Anshul Kogar,
I became aware of a beautiful Physics Today Reference Frame (just 2 pages!) from 1998 by Frank Wilczek
Why are there Analogies between Condensed Matter and Particle Theory?

It is worth reading in full and slowly. But here are a few of the profound ideas that I found new and stimulating.

A central result of Newton's Principia was
"to prove the theorem that the gravitational force exerted by a spherically symmetric body is the same as that due to an ideal point of equal total mass at the body's center. This theorem provides quite a rigorous and precise example of how macroscopic bodies can be replaced by microscopic ones, without altering the consequent behavior. " 
More generally, we find that nowhere in the equations of classical mechanics [or electromagnetism] is there any quantity that fixes a definite scale of distance.
Only with quantum mechanics do fundamental length scales appear: the Planck length, Compton wavelength, and Bohr radius.
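A quick numerical check of Wilczek's point (the CODATA constant values are hard-coded here for self-containedness): each of these lengths contains ħ, so none of them survives in the classical limit ħ → 0, which is precisely why classical mechanics and electromagnetism contain no fundamental length scale.

```python
import math

# SI constants (CODATA values)
hbar = 1.054571817e-34   # reduced Planck constant, J s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light, m/s
m_e = 9.1093837015e-31   # electron mass, kg
e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

planck_length = math.sqrt(hbar * G / c**3)        # gravity + quantum
compton = hbar / (m_e * c)                        # reduced Compton wavelength
bohr = 4 * math.pi * eps0 * hbar**2 / (m_e * e**2)  # atomic length scale

print(f"Planck length:               {planck_length:.3e} m")
print(f"(reduced) Compton wavelength: {compton:.3e} m")
print(f"Bohr radius:                  {bohr:.3e} m")
```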

Planck's treatment of blackbody radiation [a macroscopic phenomenon] linked it to microscopic energy levels.

Einstein then made a similar link between the specific heat of a crystal and the existence of phonons: the first example of a quasi-particle.

Aside: I need to think of how these two examples do or do not fit into the arguments and examples I give in my emergent quantum matter talk.

Wilczek says
it is certainly not logically necessary for there to be any deep resemblance between the laws of a macroworld and those of the microworld that produces it  
an important clue is that the laws must be "upwardly heritable"
[This is Wilczek's own phrase which does not seem to have been picked up by anyone later, including himself.]
the most basic conceptual principles governing physics as we know it - the principle of locality and the principle of symmetry  .... - are upwardly inheritable.
He then adds the "quasi material nature of apparently empty space."

Overall, I think my take might be a little different. I think the reason for the analogies in the title is that there are certain organising principles of emergence [renormalisation, quasi-particles, effective Hamiltonians, spontaneous symmetry breaking] that transcend energy and length scales. The latter are just parameters in the theory. Depending on the system, they can vary over twenty orders of magnitude (e.g., from cold atoms to quark-gluon plasmas).

But, perhaps Wilczek would say that once you have symmetry and locality you get quantum field theory and the rest follows....

What do you think?
