Monday, November 11, 2019

Tuning the dimensionality of spin-crossover compounds

An important question concerning spin-crossover compounds is the origin and magnitude of the interactions between the individual molecular units.

There is a nice paper
Evolution of cooperativity in the spin transition of an iron(II) complex on a graphite surface
Lalminthang Kipgen, Matthias Bernien, Sascha Ossinger, Fabian Nickel, Andrew J. Britton, Lucas M. Arruda, Holger Naggert, Chen Luo, Christian Lotze, Hanjo Ryll, Florin Radu, Enrico Schierle, Eugen Weschke, Felix Tuczek, and Wolfgang Kuch

An impressive achievement is the control of the number of monolayers (ML) of SCO molecules deposited on a highly oriented pyrolytic graphite (HOPG) surface. The coverage varies between 0.35 and 10 ML. The shape of the spin-crossover curve changes significantly as the number of monolayers varies, as shown in the upper panel below.

The natural interpretation is that as the number of monolayers increases the interaction between molecules (co-operativity) increases. This can be quantified in terms of the parameter Gamma in the Slichter-Drickamer model [which is equivalent to a mean-field treatment of an Ising model], with Gamma = 4 z J where z=number of nearest-neighbours and J=Ising interaction.
The blue curve in the lower panel shows the variation of Gamma with ML.

The figure above and Table 1 show that Gamma = -0.44 kJ/mol for ML=0.35, is almost zero for ML=0.7, and then increases monotonically to 2.1 kJ/mol for the bulk.

Does that make sense?

The magnitude of the Gamma values is comparable to those found in other compounds.

The negative value of Gamma for ML=0.35 might be explained as follows. Suppose a monolayer consists of SCO molecules arranged in a square lattice. Then a coverage of about a third of a monolayer (ML=0.35) will consist of chains of SCO molecules that interact in the diagonal direction. If the J_nnn for this next-nearest-neighbour interaction is negative, then the Gamma value will be negative.

For a monolayer on a square lattice, Gamma= 16 (J_nn + J_nnn). J_nn will be positive and so if it is comparable in magnitude to J_nnn then Gamma will be small for a monolayer.

For a bilayer, Gamma = 16 (J_nn + J_nnn) + 4 J_perp, where J_perp is the interlayer coupling.
For the bulk, Gamma = 16 (J_nn + J_nnn) + 8 J_perp.

This qualitatively explains the trends, but not quantitatively.
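To make the counting explicit, here is a minimal Python sketch of the Gamma values implied by the formulas above. The coupling strengths are hypothetical numbers chosen only to illustrate the trend, not values extracted from the data.

# Hypothetical Ising couplings in kJ/mol, chosen only to illustrate the counting.
J_nn = 0.10    # nearest-neighbour coupling within a layer (assumed positive)
J_nnn = -0.08  # next-nearest-neighbour (diagonal) coupling (assumed negative)
J_perp = 0.15  # coupling between adjacent layers

gamma_chain = 4 * 2 * J_nnn                      # isolated diagonal chains at ML=0.35 (z = 2): negative if J_nnn < 0
gamma_mono = 16 * (J_nn + J_nnn)                 # monolayer: no adjacent layers
gamma_bi   = 16 * (J_nn + J_nnn) + 4 * J_perp    # bilayer: one adjacent layer
gamma_bulk = 16 * (J_nn + J_nnn) + 8 * J_perp    # bulk: two adjacent layers

print(f"chain {gamma_chain:.2f}, monolayer {gamma_mono:.2f}, "
      f"bilayer {gamma_bi:.2f}, bulk {gamma_bulk:.2f} kJ/mol")
# chain -0.64, monolayer 0.32, bilayer 0.92, bulk 1.52 kJ/mol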

The authors also note that the values of Delta E and Delta S obtained from their data vary little with the coverage, as they should, since these parameters are single-molecule properties. This means that the crossover temperature T_sco also varies little with coverage.

A more rigorous approach is to not use mean-field theory, but rather consider a slab of layers of Ising models. The ratio of the transition temperature T_c to J_nn increases from 2.27 for a single layer to 4.5 as the dimensionality increases from d=2 to d=3.
[In contrast, for mean-field theory the ratio increases from 4 to 6].

If the crossover temperature T_sco is larger than T_c [as it must be if there is no hysteresis], and assuming J_nn does not change with coverage, then as the coverage increases the crossover temperature becomes closer to the critical temperature and the transition curve will become steeper, reflected in a smaller transition width Delta T (and a correspondingly larger effective Gamma in the Slichter-Drickamer fit). This claim can be understood by looking at the last Figure in this post.
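To illustrate the claim about the transition width, here is a minimal Python sketch that solves the mean-field (Slichter-Drickamer) self-consistent equation for the high-spin fraction x(T). I use the common convention in which the interaction term is Gamma x(1-x) [which differs from the -Gamma x^2 form used elsewhere on this blog only by a term linear in x], and the values of Delta H, Delta S, and Gamma below are hypothetical. The 20-80% width shrinks as Gamma approaches 2 R T_1/2, where the mean-field transition becomes discontinuous.

import numpy as np

R = 8.314e-3           # gas constant in kJ/(mol K)
dH, dS = 20.0, 0.10    # hypothetical enthalpy (kJ/mol) and entropy (kJ/(mol K)) differences
T_half = dH / dS       # crossover temperature T_1/2 = 200 K for these numbers

def hs_fraction(T, gamma, n_iter=500):
    """Solve x = 1/(1 + exp[(dH - T*dS + gamma*(1-2x))/(R*T)]) by fixed-point iteration."""
    x = 0.5
    for _ in range(n_iter):
        x = 1.0 / (1.0 + np.exp((dH - T * dS + gamma * (1.0 - 2.0 * x)) / (R * T)))
    return x

T = np.linspace(150.0, 250.0, 201)
for gamma in (0.0, 1.0, 2.0, 3.0):   # kJ/mol; all below 2*R*T_half ~ 3.3, so the crossover stays gradual
    x = np.array([hs_fraction(t, gamma) for t in T])
    width = T[x > 0.8].min() - T[x < 0.2].max()   # crude 20-80% transition width
    print(f"Gamma = {gamma:.1f} kJ/mol: transition width ~ {width:.0f} K")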

Thursday, November 7, 2019

Oral exams need not be like a visit to the dentist

Oral exams (vivas) are quite common for most postgraduate degrees involving research. The basic goal is to provide an efficient mechanism for the examiners to determine a student's level of understanding of what they have done. Most committees comprise both experts and non-experts. Most are actually quite friendly. If the non-experts learn something new they will be happy. Sometimes an examiner may ``grill'' a student simply because they want to understand what is going on. I think the main reason things occasionally get tense is when there is a member of the committee who has a poor relationship with the student's advisor or doesn't think much of their research.

To prepare, take any opportunity to attend another student's oral exam, or ask them what questions they were asked and for any tips.

Some common mistakes that students make are to assume:

Everyone on the committee has read the thesis in detail.

The committee is going to ask highly technical and nuanced questions.

Committee members don't appreciate that I am nervous.

If I can't answer a question it is a disaster.

I should put a positive spin on everything I have done.

Many of the questions asked are along the following lines.

What is the most important result that you obtained?

How is this work original?

What is the biggest weakness of your approach?

What direction would you suggest for a student who wishes to build on your work?

What are your plans to publish this work?

Any other suggested advice?

Saturday, November 2, 2019

Academic publishing in the Majority World

I was asked for an update on this. The challenges are formidable, but not insurmountable.
Here are slides from a talk on the subject.
As always, it is important not to reinvent the wheel.
There are already some excellent resources and organisations. 

A relevant organisation is AuthorAID, which is related to INASP and has online courses on writing. People I know who have taken these courses, or acted as mentors, speak highly of them.

Authors should also make use of software to correct English such as Grammarly.

Publishing Scientific Papers in the Developing World is a helpful book, stemming from a 2010 conference.
Erik Thulstrup has a nice chapter "How should a Young Researcher Write and Publish a Good Research Paper?"

Friday, November 1, 2019

The central role of symmetry in condensed matter

I have now finished my first draft of chapter 3 of Condensed Matter Physics: A Very Short Introduction.


I welcome comments and suggestions. However, bear in mind my target audience is not the typical reader of this blog, but rather your non-physicist friends and family. 
I think it still needs a lot of work. The goal is for it to be interesting, accessible, and bring out the excitement and importance of condensed matter physics.

This is quite hard work, particularly to try and explain things in an accessible manner.
I am also learning a lot.

I have a couple of basic questions.

How do the symmetries of the rectangular lattice and the centred lattice differ?

When was the crystal structure of ice determined by X-ray diffraction?
[Pauling proposed the structure in 1935.]


Thursday, October 24, 2019

Many-worlds cannot explain fine tuning

There are several independent lines of argument that are used to support the idea of a multiverse: the many-worlds interpretation of quantum mechanics, the ``landscape problem'' in string theory, and the fine-tuning of fundamental physical constants. Previously, I wrote about four distinct responses to the fine-tuning of the cosmological constant.

I was recently trying to explain the above to a group of non-physicists. One of them [Joanna] had the following objection that I had not heard before. Schrodinger's cat can only exist in one universe within the multiverse. The multiverse involves zillions of universes. However, because of fine-tuning, carbon-based life is so improbable that it can only exist in one (or maybe a handful?) of the universes within the multiverse. Thus, when one observes whether the cat is dead or alive, and the universe ``branches" into two distinct universes, one with a dead cat and the other with a living cat, there is a problem. It is possible that the many-worlds interpretation is still correct, but it does not seem possible to claim that many-worlds and the multiverse needed to ``explain'' fine-tuning are the same type of multiverse.

One response might be that Schrodinger's cat is just a silly extrapolation of entanglement to the macroscopic scale. However, the problem remains. Just consider the radioactive decay of atoms. Each decay of a single atom should be associated with branching into two distinct universes. Both of those universes are identical, except for whether that single atom has decayed or not. Over history, zillions of radioactive decays have occurred. This means that there are zillions of universes almost identical to the one we live in right now. But all these zillions of universes are fine-tuned to be just like ours.

Is there a problem with this argument?

Addendum. (25 October, 2019).
Fine-tuning got a lot of attention after the 1979 Nature paper, The anthropic principle and the structure of the physical world, by Bernard Carr and Martin Rees.
The end of the article makes explicit the connection with the many-worlds interpretation of quantum theory.
...nature does exhibit remarkable coincidences and these do warrant some explanation.... the anthropic explanation is the only candidate and the discovery of every extra anthropic coincidence increases the post hoc evidence for it. The concept would be more palatable if it could be given a more physical foundation. Such a foundation may already exist in the Everett 'many worlds' interpretation of quantum mechanics, according to which, at each observation, the Universe branches into a number of parallel universes, each corresponding to a possible outcome of the observation. The Everett picture is entirely consistent with conventional quantum mechanics; it merely bestows on it a more philosophically satisfying interpretation. There may already be room for the anthropic principle in this picture. 
Wheeler envisages an infinite ensemble of universes all with different coupling constants and so on. Most are 'still-born', in that the prevailing physical laws do not allow anything interesting to happen in them; only those which start off with the right constants can ever become 'aware of themselves'. One would have achieved something if one could show that any cognisable universe had to possess some features in common with our Universe. Such an ensemble of universes could exist in the same sort of space as the Everett picture invokes. Alternatively, an observer may be required to 'collapse' the wave function. These arguments go a little way towards giving the anthropic principle the status of a physical theory but only a little: it may never aspire to being much more than a philosophical curiosity...
In a review of a book based on a conference about the multiverse, Virginia Trimble states that:
There is also among the authors strong divergence of opinion on whether Hugh Everett's version of many worlds is (just) a quantum multiverse (Tegmark), almost certainly correct and meaningful (Page), or almost certainly wrong or meaningless (Carter). 

Tuesday, October 8, 2019

2019 Nobel Predictions

It is that time of year again. I have not made predictions for a few years.

For physics this year I predict
Experiments for testing Bell inequalities and elucidating the role of entanglement in quantum physics
Alain Aspect, John Clauser, and Anton Zeilinger
They received the Wolf Prize in 2010, a common precursor to the Nobel.

My personal preference for the next Nobel for CMP would be centred around Kondo physics, since that is such a paradigm for many-body physics, maybe even comparable to BCS.

Kondo effect and heavy fermions
Jun Kondo, Frank Steglich, David Goldhaber-Gordon

Arguably the latter two might be replaced with others who worked on heavy fermions and/or Kondo in quantum dots.
Steglich discovered heavy fermion superconductivity.
Goldhaber-Gordon realised tuneable Kondo and Anderson models in quantum dots (single-electron transistors).

Unlike many, I still remain to be convinced that topological insulators are worthy of a Nobel.

For chemistry, my knowledge is more limited. However, I would go for yet another condensed matter physicist to win the chemistry prize: John Goodenough, inventor of the lithium battery.
He also made seminal contributions to magnetism, random access memories, and strongly correlated electron materials.

What do you think?

Postscripts (October 10).

I got confused about the day of the physics prize and I think when I posted my ``prediction'' the prize may have already been announced.

A few years ago I read Goodenough's fascinating autobiography. It was actually in that book that I learned about U. Chicago requiring PhD students to publish a single author paper. This observation featured in my much commented on recent post about PhD theses.

I also have a prediction for the Peace Prize. First, I hope it is not Greta Thunberg, as much as I admire her and agree with the importance of her cause. I worry whether it may ruin her life.
My wife suggested the Prime Minister of Ethiopia, Abiy Ahmed and the President of Eritrea, Isaias Afwerki. I find it truly amazing what Ahmed has achieved.
Another great choice would be some of the leaders of Armenia, which has seen significant increases in human rights, political freedoms, and freedom of the press. It was selected as The Economist's country of the year in 2018.

Postscript (October 30).
I was really happy about the economics prize. Six years ago, I read Poor Economics, by Banerjee and Duflo, with my son (an economics student), and blogged about it. Below I respond to a commenter who was critical of this prize.

Estimating the Ising interaction in spin-crossover compounds

I previously discussed how one of the simplest model effective Hamiltonians that can describe many physical properties of spin-crossover compounds is an Ising model in an external "field". The s_i=+/-1 is a pseudo-spin denoting the low-spin (LS) and high-spin (HS) states of a transition metal molecular complex at site i.
The ``external field" is one half of the Gibbs free energy difference between the LS and HS states. The physical origin of the J interaction is ``believed to be'' elastic, not magnetic interactions. A short and helpful review of the literature is by Pavlik and Boca.

Important questions are:

1. What is a realistic model that can explain how J arises due to elastic interactions?
2. How does one calculate J from quantum chemistry calculations?
3. How does one estimate J for a specific material from experimental data?
4. What are typical values of J?

I will focus on the last two questions.
One can do a mean-field treatment of the Ising model, leading to a model free energy for the whole system that has the same form as that of an ideal binary mixture of two fluids where
x = (1 + <s_i>)/2, where <s_i> is the thermal average of the pseudo-spin, is the relative fraction of low spins. 
This model free energy was proposed in 1972 by Slichter and Drickamer.
The free energy of interaction between the two "fluids" is of the form -Gamma x^2.
Gamma is often referred to as the ``co-operativity" parameter.
Minimising the free energy versus x gives a self-consistent equation for x(T).
This can be compared to experimental data for x vs T, e.g. from the magnetic susceptibility, and a Gamma value extracted for a specific material.

Values for Gamma obtained in this way for a wide range of quasi-one-dimensional materials [with covalent bonding (i.e. strong elastic interactions) between spin centres] are given in Tables 1 and 2 of Roubeau et al. The values of Gamma are in the range 2-10 kJ/mol. In temperature units this corresponds to 240-1200 K.

My calculations [which may be wrong] give that Gamma = 4 J z, where z is the number of nearest neighbours in the Ising model. This means that (for a 1d chain with z=2) J is in the range of 0.25-1.25 kJ/mol, or 30-150 K.
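As a sanity check on this arithmetic, here it is in a few lines of Python (using Gamma = 4 z J, with R used to convert kJ/mol to kelvin):

R = 8.314e-3                  # kJ/(mol K); 1 kJ/mol corresponds to about 120 K
z = 2                         # number of nearest neighbours for a 1d chain
for gamma in (2.0, 10.0):     # kJ/mol, the range quoted from Roubeau et al.
    J = gamma / (4 * z)       # Ising interaction in kJ/mol
    print(f"Gamma = {gamma:4.1f} kJ/mol  ->  J = {J:.2f} kJ/mol = {J / R:.0f} K")
# Gamma =  2.0 kJ/mol  ->  J = 0.25 kJ/mol = 30 K
# Gamma = 10.0 kJ/mol  ->  J = 1.25 kJ/mol = 150 K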

In many spin-crossover materials, the elastic interactions are via van der Waals, hydrogen bonding, or pi-stacking interactions. In that case, we would expect smaller values of J.
This is consistent with the following.
An analysis of a family of alloys by Jakobi et al. leads to a value of Gamma of 2 kJ/mol.
[See equation 9b. Note B = Gamma = 150 cm^-1. Also, what I call x is denoted gamma in that paper, and their x denotes the fraction of Zn in the material.]

I thank members of the UQ SCO group for all they are teaching me and the questions they keep asking.

Tuesday, October 1, 2019

Marks of an excellent PhD thesis

As years go by the PhD thesis in science and engineering is less and less of a ``thesis'' and more just a box to tick. There was a time when the thesis was largely the work of the student and tackled one serious problem. Decades ago at the University of Chicago, students were meant to write a single author paper that was based on their thesis.
At some universities, including my own, students can now staple several papers together, write an introductory chapter, and submit that as a thesis. One obvious problem with that system is the question of how large the student's contribution to the multi-author papers was, both in terms of the writing and of doing the experiments or calculations.

Previously I have argued that A PhD is more than a thesis, a PhD should involve scholarship, and a thesis should suggest future directions and be self-critical. In some sense these posts were negative, focusing on what may be missing. Here I just want to highlight several positive things I recently saw in a thesis.

A coherent story
The thesis should be largely about one thing looked at from several angles. It should not be ``several random topics that my advisor got excited about in the past 3 years.''

Meticulous detail
This should cover existing literature. More importantly, there should be enough detail that the next student can use the thesis as a reference to learn all the background to take the topic further.

Significant contributions from the student
A colleague once said that a student is ready to submit the thesis when they know more about the thesis topic than their advisor.

The situation in the humanities is quite different. Students largely work on their own and write a thesis that they hope will eventually become a book.

I think the decline of the thesis reflects a significant shift in the values of the university as a result of neoliberalism. The purpose of PhDs is no longer the education of the student, but rather to have low-paid research assistants for faculty to produce papers in luxury journals that will attract research income and boost university rankings.

What do you think are the marks of an excellent PhD thesis?

Thursday, September 26, 2019

Symmetry is the origin of all interactions

In Phil Anderson's review of Lucifer's Legacy: The Meaning of Asymmetry by Frank Close, Anderson makes the following profound and cryptic comment.
In a book focusing, as this does, on symmetry, it seems misleading not to explain the fundamental principle that all interaction follows from symmetry: the gauge principle of London and Weyl, modelled on and foreshadowed by Einstein's derivation of gravity from general relativity (Einstein seems to be at the root of everything). The beautiful idea that every continuous symmetry implies a conservation law, and an accompanying interaction between the conserved charges, determines the structure of all of the interactions of physics. It is not appropriate to try to approach advanced topics such as electroweak unification and supersymmetry without this foundation block.
To see how this plays out in electrodynamics see here.

Tuesday, September 24, 2019

A pioneering condensed matter physicist

In terms of institutional structures, Condensed Matter Physics did not really exist until the 1970s. A landmark was when the Division of Solid State Physics of the American Physical Society changed its name. On the other hand, long before that people were clearly doing CMP! If we think of CMP as a unified approach to studying different states of matter, that enterprise began in earnest during the twentieth century.

Kamerlingh Onnes (1853-1924) was a pioneer in low-temperature physics but is best known for the discovery of superconductivity in 1911. In many ways, Onnes embodied the beginning of an integrated and multi-faceted approach to CMP: development of experimental techniques, the interaction of theory and experiment, and addressing fundamental questions.

1. Onnes played the long game, spending years developing and improving experimental methods and techniques, whether glass blowing, sample purification, or building vacuum pumps. He realized that this approach required a large team of technicians, each with particular expertise and that teamwork was important. The motto of Onnes’ laboratory was Door meten tot weten (Through measurement to knowledge). Techniques were a means to a greater end.

2. In Leiden, Onnes sought out theoretical advice from his colleague Johannes van der Waals (1837-1923).  [Almost 10 years ago I gave a talk about van der Waals legacy].

3. Onnes’ experiments were driven by a desire to answer fundamental questions. Questions he helped answer included the following.
Can any gas become liquid?
For gases is there a universal relationship between their density, pressure, and temperature?
How are gas-liquid transitions related to interactions between the constituent molecules in a material?
At very low temperatures is the electrical conductivity of a pure metal zero, finite, or infinite?

The first of these questions motivated Onnes to pursue being the first to cool helium gas to low enough temperatures that it would become liquid. At the time all other known gases had been liquified. In 1908 his group observed that helium became liquid at a temperature of 4.2 K. This discovery was of both fundamental importance and great practical significance. Liquid helium became extremely useful in experimental physics and chemistry as a means to cool materials and scientific instruments. Indeed liquid helium enabled the discovery of superconductivity, which resulted from addressing the last question.


The figure shows Onnes (left) in his lab with van der Waals.

The discussion above closely follows Steve Blundell's Superconductivity: A Very Short Introduction.

Friday, September 20, 2019

Common examples of symmetry breaking

In his beautiful book, Lucifer's Legacy: The Meaning of Asymmetry, Frank Close gives several nice examples of symmetry breaking that make the concept more accessible to a popular audience.

One is shown in the video below. Consider a spherical drop of liquid that hits the flat surface of a liquid. Prior to impact, the system has continuous rotational symmetry about an axis normal to the plane of the liquid and through the centre of the drop. However, after impact, a structure emerges which does not have this continuous rotational symmetry, but rather a discrete rotational symmetry.



Another example that Close gives is illustrated below. Which napkin should a diner take? The one on their left or right? Before anyone makes a choice there is no chirality in the system. However, if one diner chooses the left napkin, others will follow; symmetry is broken and a spontaneous order emerges.


Thursday, August 29, 2019

My tentative answers to some big questions about CMP

In my last post, I asked a number of questions about Condensed Matter Physics (CMP) that my son asked me. On reflection, my title ``basic questions" was a misnomer, because these are actually rather profound questions. Also, it should be acknowledged that the answers are quite personal and subjective. Here are my current answers.

1. What do you think is the coolest or most exciting thing that CMP has discovered? 

Superconductivity.

explained?

BCS theory of superconductivity.
Renormalisation group (RG) theory of critical exponents.

2. Scientific knowledge changes with time. Sometimes long-accepted ``facts''  and ``theories'' become overturned.  What ideas and results are you presenting that you are almost absolutely certain of? 

Phase diagrams of pure substances.
Crystallography.
Landau theory and symmetry breaking as a means to understand almost all phase transitions.
RG theory.
Bloch's theorem and band theory as a framework to understand the electronic properties of crystals.
Quantisation of vortices.
Quantum Hall effects.
Emergence.

What might be overturned?

I am almost certain of everything I will write about in the Very Short Introduction. This is because it centres on concepts and theories that have been able to explain a very wide swathe of experiments on diverse materials and that have been independently reproduced by many different groups.
I am deliberately avoiding describing speculative theories and the following.
Ideas, results, and theories based on experiments that did not involve the actual material claimed, that involved significant curve fitting, or that relied on large computer simulations.
Many things published in luxury journals during the last twenty years.

3. What are the most interesting historical anecdotes? 

These are so interesting and relevant to major discoveries that they are worth including in the VSI.
Graphene and sellotape.
Quasi-crystals.
Bardeen's conflict with Josephson.
Abrikosov leaving his vortex lattice theory in his desk drawer because Landau did not like it.

What are the most significant historical events? 

Discovery of x-ray crystallography
Discovery of superconductivity.
Landau's 1937 paper.
BCS paper.
Wilson and Fisher.

Who were the major players?

They are so important that they are worthy of a short bio in the text.
Onnes.
Landau.
Bardeen.
Anderson.
Wilson.

4. What are the sexy questions that CMP might answer in the foreseeable future?

Is room-temperature superconductivity possible?

Friday, August 23, 2019

Basic questions about condensed matter

I am trying out draft chapters of Condensed matter physics: A very short introduction, on a few people who I see as representative of my target audience. My son is an economist but has not studied science beyond high school. He enjoys reading widely. He kindly agreed to give me feedback on each draft chapter. Last week he read the first two chapters and his feedback was extremely helpful. He asked me several excellent questions that he thought I should answer.

1. What do you think is the coolest or most exciting thing that CMP has discovered? explained?

2. Scientific knowledge changes with time. Sometimes long-accepted ``facts'' and ``theories'' become overturned. What ideas and results are you presenting that you are almost absolutely certain of? What might be overturned?

3. What are the most interesting historical anecdotes? What are the most significant historical events? Who were the major players?

4. What are the sexy questions that CMP might answer in the foreseeable future?

I have some preliminary answers. But, to avoid prejudicing some brainstorming, I will post later.
What answers would you give?

Tuesday, August 20, 2019

The global massification of universities

A recent issue of The Economist has an interesting article about the massive expansion in higher education, both private and public, in Africa.
The thing I found most surprising and interesting is the graphic below.


It compares the percentage of the population within 5 years of secondary-school graduation that is enrolled in higher education, in 2000 and 2017. In almost all parts of the world the percentage enrollment has doubled in just 17 years!
I knew there was rapid expansion in China and Africa, but did not realise it is such a global phenomenon.

Is this expansion good, bad, or neutral?
It is helpful to consider the iron triangle of access, cost, and quality. You cannot change one without changing at least one of the others.

I think that this expansion is based on parents, students, governments, and philanthropies uncritically holding the following implicit beliefs, which are based on the history of universities until about the 1970s. Prior to that, universities were fewer, smaller, more selective, and had greater autonomy (in governance, curriculum, and research).

1. Most students who graduated from elite institutions went on to successful/prosperous careers in business, government, education, ...

2. Research universities produced research that formed the foundation for amazing advances in technology and medicine, and gave profound new insights into the cosmos, from DNA to the Big Bang.

Caution: the first point does not imply that a university education was crucial to the graduates' success. Correlation and causality are not the same thing. The success of graduates may be just a matter of signaling.  Elite institutions carefully selected highly gifted and motivated individuals who were destined for success. The university just certified that the graduates were ``hard-working, smart, and conformist.''

But the key point is that these two observations (beliefs) concern the past and not the present. Massification and the stranglehold of neoliberalism (money, marketing, management, and metrics) mean that universities are now fundamentally different, from the student experience to the nature of research.

According to Wikipedia,
Massification is a strategy that some luxury companies use in order to attain growth in the sales of product. Some luxury brands have taken and used the concept of massification to allow their brands to grow to accommodate a broader market.
What do you think?
Are these the key assumptions?
Will massification and neoliberalism undermine them?

Tuesday, August 13, 2019

J.R. Schrieffer (1931-2019): quantum many-body theorist

Bob Schrieffer died last month, as reported in a New York Times obituary.

Obviously, Schrieffer's biggest scientific contribution was coming up with the variational wave-function for the BCS theory of superconductivity.
BCS theory was an incredible intellectual achievement on many levels. Many great theoretical physicists had failed to crack the problem. The elegance of the theory was manifest in the fact that it was analytically tractable, yet could give a quantitative description of diverse physical properties in a wide range of materials. BCS also showed the power of using quantum-field-theory techniques in solid state theory. This was a very new thing in the late 1950s. Then there was the subsequent cross-fertilisation with nuclear physics and particle physics (e.g. Nambu).

Another significant contribution was the two-page paper from 1966 that used a unitary transformation to connect the Kondo model Hamiltonian to that of the Anderson single impurity model. In particular, it gave a physical foundation for the Kondo model, which at the time was considered somewhat ad hoc.
John Wilkins wrote a nice commentary on the background history and significance of the Schrieffer-Wolff transformation.
The SW transformation is an example of a general strategy of finding an effective Hamiltonian for a reduced Hilbert space. This can also be done via quasi-degenerate perturbation theory. In different words, when one ``integrates out'' the charge degrees of freedom in the Anderson model one ends up with the Kondo model.

There is also the Su-Schrieffer-Heeger model, which is related to Heeger's Nobel Prize in Chemistry. However, although this spawned a whole industry (that I worked in as a postdoc with Wilkins), its originality and significance are arguably not comparable to BCS and SW.

Because of when he was born, Schrieffer, like many of the pioneers of quantum many-body theory, may have been born for success.

I am somewhat (scientifically) descended from Schrieffer because I did a postdoc with John Wilkins, who was one of Schrieffer's first PhD students. My main interaction with Schrieffer was during 1995-2000. Each year I would visit my collaborator, Jim Brooks, at the National High Magnetic Field Laboratory, and would have some helpful discussions with Schrieffer. During one of those visits, I stumbled across a compendium of reprints from a Japanese lab. [This was back in the days when some people snail-mailed out such things to colleagues]. It had been sent to Schrieffer and contained a copy of a paper by Kino and Fukuyama on a Hubbard model for organic charge transfer salts. That was the starting point for my work on that topic.

Tuesday, August 6, 2019

What is the mass of a molecular vibration?

This is a basic question that I have been puzzling about. I welcome solutions.

Consider a diatomic molecule containing atoms with mass m1 and m2. It has a stretch vibration that can be described by a harmonic oscillator with a reduced mass mu given by 1/mu = 1/m1 + 1/m2, i.e. mu = m1 m2 / (m1 + m2).
Now consider a polyatomic molecule containing N atoms.
It will have 3N-6 normal modes of vibration.
[The 6 is due to the fact that there are 6 zero-frequency modes: 3 rigid translations and 3 rotations of the whole molecule].
In the harmonic limit, the normal mode problem is solved below.
[I follow the classic text Wilson et al., Molecular Vibrations].
[The problem is also solved in matrix form in Chapter 6 of Goldstein, Classical Mechanics.]



One now has a collection of non-interacting harmonic oscillators. All have mass = 1. This is because the normal mode co-ordinates have units of length * sqrt(mass).

The quantum chemistry package Gaussian does more. It calculates a reduced mass mu_i for each normal mode i, using a formula discussed in these notes on the Gaussian web site. From mu_i and the normal mode frequency omega_i it then calculates a spring constant for each normal mode.
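For concreteness, here is a minimal Python sketch for a one-dimensional diatomic "molecule" (two masses joined by a harmonic spring). It diagonalises the mass-weighted Hessian and then computes a reduced mass in the way I read the Gaussian notes: normalise the mass-weighted eigenvector, un-mass-weight it to obtain Cartesian displacements, and take mu_i = 1/(sum of the squared Cartesian displacements). The masses and force constant are hypothetical, and my reading of the Gaussian convention is an assumption, so treat this as a sketch rather than an answer to the questions below.

import numpy as np

m1, m2, k = 1.0, 16.0, 5.0            # hypothetical masses (amu) and force constant

# Cartesian Hessian for V = (k/2)(x1 - x2)^2 and its mass-weighted form H_ij/sqrt(m_i m_j)
H = k * np.array([[1.0, -1.0], [-1.0, 1.0]])
m = np.array([m1, m2])
F = H / np.sqrt(np.outer(m, m))

w2, L = np.linalg.eigh(F)             # eigenvalues are omega^2; columns of L are normalised eigenvectors
i = int(np.argmax(w2))                # the stretch mode (the other eigenvalue is ~0, a rigid translation)

mu_textbook = m1 * m2 / (m1 + m2)     # textbook reduced mass; k/omega^2 for the stretch reproduces it
print("k/omega^2            :", k / w2[i])
print("textbook reduced mass:", mu_textbook)

# "Gaussian-style" reduced mass (my assumed reading of the Gaussian notes):
l_cart = L[:, i] / np.sqrt(m)         # un-mass-weight the normalised mass-weighted eigenvector
mu_gaussian = 1.0 / np.sum(l_cart ** 2)
print("Gaussian-style mass  :", mu_gaussian)   # generally differs from the textbook value

With these (hypothetical) numbers the textbook and Gaussian-style values come out as roughly 0.94 and 1.06; for equal masses the two definitions differ by a factor of two, which suggests the reported value depends on how the normal coordinates are normalised.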

I have searched endlessly, and tried myself, but have not been able to answer the following basic questions:

1. How do you derive this expression for the reduced mass?
2. Is this reduced mass physical, i.e. a measurable quantity?

Similar issues must also arise with phonons in crystals.

Any recommendations?

Tuesday, July 23, 2019

Different approaches to popular science writing

Since I am working on a Very Short Introduction (VSI) to condensed matter physics I am looking at a lot of writing about science for popular audiences. I have noticed several distinct approaches that different authors take. They all have strengths and weaknesses.

Historical
The story of discoveries and the associated scientists is told. A beautiful example is A Short History of Nearly Everything by Bill Bryson.
When done well this approach has many positives. Stories can be fun and easy to read, particularly when they involve quirky personalities, serendipity, and fascinating anecdotes. Furthermore, this shows how hard and messy real science is, and that science is a verb, not just a noun. On the other hand, it can be a bit challenging for readers as they have to understand not just the successes but also why certain theories, experiments, and interpretations were wrong along the way.  Many writers also seem eager to burden readers will all sorts of historical background details about scientists, their families, and their local context. Sometimes these details are interesting. Other times they seem just boring fluff. Generally, most agree that one does not learn and understand a scientific subject best by learning its history. So why take this approach in popular writing?

Literary pleasure
People read novels and watch movies for pleasure. The goal is not necessarily to learn something (or a lot). I would put Brian Cox's writing and documentaries in this category. That is not a criticism. Rather than provide a lot of information I think the goal is more to induce awe, wonder, curiosity, and enjoyment.

Condensed textbook
Take an introductory text and cover all the same topics in the same order. Just cut out technical details and jargon. Lots of analogies are used to explain concepts. The obvious strength is that the reader gets a good overview of the subject. The weakness is that this can be boring, involves defining a lot of terminology, and is often too hard for the reader. One scary consequence is that some readers then think they actually understand the subject.

Hype
This comes in several forms. One is that the theory or topic of interest (whether complexity, quantum information, self-organised criticality, sociobiology, ....) is THE answer. It explains everything. The second form of hype is technological: this science is going to lead to a new technology that will change the world. Generally, this fits the genre of ``science as salvation''.

Conceptual
An example is Laughlin's A Different Universe. A challenge is that this requires readers to like learning new concepts and have an ability to think abstractly.

Except for hype, I think all of these approaches have their merits. Ideally, one would like to incorporate elements of all of them.

What do you think? Are there other approaches?

Friday, June 28, 2019

The bloody delusions of Silicon Valley medicine

On a recent flight, I watched the HBO documentary The Inventor: Out for Blood in Silicon Valley. It chronicles the dramatic rise and fall of Elizabeth Holmes, founder of a start-up, Theranos, that claimed to have revolutionised blood testing.



There is a good article in the New Republic
What the Theranos Documentary Misses
Instead of examining Elizabeth Holmes’s personality, look at the people and systems that aided the company’s rise.

In spite of the weaknesses described in that article, the documentary made me think about a range of issues at the interface of science, technology, philosophy, and social justice.

The story underscores Kauzmann's maxim, ``people will often believe what they want to believe rather than what the evidence before them suggests they should believe.''

Truth matters. Eventually, we all bounce up against reality: scientific, technological, economic, legal, ...  It does not matter how much hype and BS one can get away with; eventually, it will all come crashing down. It is just amazing that some people seem to get away with it for so long...
This is why transparency is so important. A bane of modern life is the proliferation of Non-Disclosure Agreements. Although I concede they have a limited role in certain commercial situations, they seem to be now used to avoid transparency and accountability for all sorts of dubious practices in diverse social contexts.

The transition from scientific knowledge to a new technology is far from simple. A new commercial device needs to be scalable, reliable, affordable, and safe. For medicine, the bar is a lot higher than a phone app! 

Theranos had a board featuring ``big'' names in politics, business, and the military, such as Henry Kissinger, George Shultz, and James Mattis. All these old men were besotted with Holmes and more than happy to take large commissions for sitting on the board. Chemistry, engineering, and medical expertise were sorely lacking. However, even the old man with relevant knowledge, Channing Robertson, was a true believer until the very end.

Holmes styled herself on Steve Jobs and many wanted to believe that she would revolutionise blood testing. However, the analogy is flawed. Jobs basically took existing robust technology and repackaged and marketed it in clever ways. Holmes claimed to have invented a totally new technology. What she was trying to do was a bit like trying to build a Macintosh computer in the 1960s.

Wednesday, June 12, 2019

Macroscopic manifestations of crystal symmetry

In my view, the central question that Condensed Matter Physics (CMP) seeks to answer is:
How do the properties of a distinct phase in a material emerge from the interactions between the atoms of which the material is composed? 
CMP aims to find a connection between the microscopic properties and macroscopic properties of a material. This requires determining three things: what the microscopic properties are, what the macroscopic properties are, and how the two are related. None of the three is particularly straightforward. Historically, the order of discovery is usually: macroscopic, microscopic, connection. Making the connection between microscopic and macroscopic can take decades, as exemplified in the BCS theory of superconductivity.

Arguably, the central concept to describe the macroscopic properties is broken symmetry, which can be quantified in terms of an order parameter. Connecting this to the microscopics is not obvious. For example, with superconductivity, the sequence of discovery was experiment, Ginzburg-Landau theory, BCS theory, and then Gorkov connected BCS and Ginzburg-Landau.

When we discuss (and teach about) crystals and their symmetry we tend to start with the microscopic, particularly with the mathematics of translational symmetry, Bravais lattices, crystal point groups, ...
Perhaps this is the best strategy from a pedagogical point of view in a physics course.
However, historically this is not the way our understanding developed.
Perhaps if I want to write a coherent introduction to CMP for a popular audience I should follow the historical trajectory. This can illustrate some of the key ideas and challenges of CMP.

So let's start with macroscopic crystals. One can find beautiful specimens that have very clean faces (facets).


Based on studies of quartz, Nicolas Steno in 1669 proposed that ``the angles between corresponding faces on crystals are the same for all specimens of the same mineral".  This is nicely illustrated in the figure below which looks at different cross-sections of a quartz crystal. The 120-degree angle suggests an underlying six-fold symmetry. This constancy of angles was formulated as a law by Romé de l'Isle in 1772.


René Just Haüy then observed that when he smashed crystals of calcite, the fragments always had the same form (types of facets) as the original crystal. This suggested some type of translational symmetry, i.e. that crystals were composed of some type of polyhedral unit. In other words, crystals involve a repeating pattern.

The mathematics of repeating units was then worked out by Bravais, Schoenflies, and others in the second half of the nineteenth century. In particular, they showed that if you combined translational symmetries and point group symmetries (rotations, reflections, inversion) that there were only a discrete number of possible repeat structures.

Given that at the beginning of the twentieth century, the atomic hypothesis was largely accepted, particularly by chemists, it was also considered reasonable that crystals were periodic arrays of atoms and molecules. However, we often forget that there was no definitive evidence for the actual existence of atoms. Some scientists such as Mach considered them a convenient fiction. This changed with Einstein's theory of Brownian motion (1905) and the associated experiments of Jean Perrin (1908). X-ray crystallography started in 1912 with Laue's experiment. Then there was no doubt that crystals were periodic arrays of atoms or molecules.

Finally, I want to mention two other macroscopic manifestations of crystal symmetry (or broken symmetry): chirality and distinct sound modes (elastic constants).

Louis Pasteur made two important related observations in 1848. All the crystals of sodium ammonium tartrate that he made could be divided into two classes: one class was the mirror image of the other class. Furthermore, when polarised light traveled through these two classes, the polarisation was rotated in opposite directions. This is chirality (left-handed versus right-handed) and means that reflection symmetry is broken in the crystals. The mirror image of one crystal cannot be superimposed on the original crystal image. The corresponding (trigonal) crystals for quartz are illustrated below.


Aside. Molecular chirality is very important in the pharmaceutical industry because most drugs are chiral and usually only one of the chiralities (enantiomers) is active.

Sound modes (and elasticity theory) for a crystal are also macroscopic manifestations of the breaking of translational and rotational symmetries. In an isotropic solid, there are two distinct elastic constants and, as a result, two distinct sound modes: longitudinal and transverse sound have different speeds. In a cubic crystal, there are three distinct elastic constants and three distinct sound modes. In a triclinic crystal (which has no point group symmetry) there are 21 distinct elastic constants. Hence, if one measures all of the distinct sound modes in a crystal, one can gain significant information about which of the 32 crystal classes that crystal belongs to. (See Table A.8 here).
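To make the link between elastic constants and sound speeds concrete, here is a small Python sketch that builds the Christoffel matrix for a cubic crystal and extracts the three sound speeds along a few propagation directions. The elastic constants and density are round, roughly silicon-like numbers, used purely for illustration. Along [100] two of the speeds are degenerate, while along [110] all three are distinct.

import numpy as np

C11, C12, C44 = 165.0, 64.0, 79.0   # hypothetical cubic elastic constants in GPa (roughly silicon-like)
rho = 2330.0                        # density in kg/m^3

def christoffel_cubic(n):
    """Christoffel matrix Gamma_ik = C_ijkl n_j n_l for a cubic crystal, unit propagation direction n."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    G = np.empty((3, 3))
    for i in range(3):
        for k in range(3):
            if i == k:
                G[i, k] = C11 * n[i] ** 2 + C44 * (1.0 - n[i] ** 2)
            else:
                G[i, k] = (C12 + C44) * n[i] * n[k]
    return G

for direction in ([1, 0, 0], [1, 1, 0], [1, 1, 1]):
    # rho * v^2 are the eigenvalues of the Christoffel matrix (GPa converted to Pa).
    v = np.sqrt(np.linalg.eigvalsh(christoffel_cubic(direction)) * 1e9 / rho)
    print(direction, np.round(v).astype(int), "m/s")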

Aside: the acoustic modes in a crystal are the Goldstone bosons that result from the breaking of the continuous translational and rotational symmetries of the liquid.

This post draws on material from the first chapter of Crystallography: A Very Short Introduction, by A.M. Glazer.

Friday, May 31, 2019

Max Weber on the evolution of institutions

Max Weber is one of the founders of sociology. This post is about two separate and interesting things I recently learned about him.

A while ago I discussed Different phases of growth and change in human organisations, based on a classic article from Harvard Business Review. [Which had no references or data!]
My friend Charles Ringma recently brought to my attention somewhat related ideas from Max Weber.
According to Wikipedia

Weber distinguished three ideal types of political leadership (alternatively referred to as three types of domination, legitimisation or authority):
  1. charismatic domination (familial and religious),
  2. traditional domination (patriarchs, patrimonialism, feudalism) and
  3. legal domination (modern law and state, bureaucracy).
In his view, every historical relation between rulers and ruled contained such elements and they can be analysed on the basis of this tripartite distinction. He notes that the instability of charismatic authority forces it to "routinise" into a more structured form of authority.

I also learnt that Weber had a long history of mental health problems. According to Wikipedia

In 1897 Max Weber Sr. died two months after a severe quarrel with his son that was never resolved. After this, Weber became increasingly prone to depression, nervousness and insomnia, making it difficult for him to fulfill his duties as a professor. His condition forced him to reduce his teaching and eventually leave his course unfinished in the autumn of 1899. After spending months in a sanatorium during the summer and autumn of 1900, Weber and his wife travelled to Italy at the end of the year and did not return to Heidelberg until April 1902. He would again withdraw from teaching in 1903 and not return to it till 1919. Weber's ordeal with mental illness was carefully described in a personal chronology that was destroyed by his wife. This chronicle was supposedly destroyed because Marianne Weber feared that Max Weber's work would be discredited by the Nazis if his experience with mental illness were widely known.

This puts Weber in a similar class to many other distinguished scholars who had significant mental health problems: Boltzmann, John Nash, Drude, Michel Foucault, ...

Tuesday, May 28, 2019

Spin-crossover in geophysics

Most of my posts on spin-crossover materials have been concerned with organometallic compounds. However, this phenomenon can also occur in inorganic materials. Furthermore, it may be particularly relevant in geophysics. A previous post discussed how strong electron correlations may play a role in geomagnetism and how DMFT calculations have given some insight.

A nice short overview and introduction is
Electronic spin transition of iron in the Earth's deep mantle 
Jung-Fu Lin, Steven D. Jacobsen, and Renata M. Wentzcovitch

[It contains the figure below]
The main material of interest is magnesiowüstite, an alloy of magnesium and iron oxide,
(Mg_{1-x}Fe_x)O




Experimental studies and DFT calculations suggest that as the pressure increases the iron ions undergo a transition from high spin to low spin. The basic physics is that the pressure reduces the Fe-O bond lengths which increases the crystal field splitting.
In geophysics, the pressure increases as one goes further underground.

DFT+U calculations are reported in
Spin Transition in Magnesiowüstite in Earth’s Lower Mantle 
Taku Tsuchiya, Renata M. Wentzcovitch, Cesar R. S. da Silva, and Stefano de Gironcoli

The main result is summarised in the figure below.
There is a smooth crossover from high spin to low spin, as is observed experimentally. However, it should be pointed out that this smoothness (versus a first-order phase transition with hysteresis) is built into the calculation (i.e. assumed), since the low-spin fraction n is calculated using a single-site model. On the other hand, the interaction between spins may be weak because this is a relatively dilute alloy of iron (x=0.1875).
Also, the vibrational entropy change associated with the transition is not included. In organometallics, this can have a significant quantitative effect on the transition.

The elastic constants undergo a significant change with the transition. This is important for geophysics because these changes affect phenomena such as the transmission of earthquakes.

Abnormal Elasticity of Single-Crystal Magnesiosiderite across the Spin Transition in Earth’s Lower Mantle 
Suyu Fu, Jing Yang, and Jung-Fu Lin


A previous post considered changes in the elasticity and phonons in organometallic spin-crossover. Unfortunately, that work did not have the ability to resolve different elastic constants.

Friday, May 24, 2019

Is this an enlightened use of metrics?

Alternative title: An exciting alternative career for Ph.Ds in condensed matter theory!

There is a fascinating long article in The New York Times Magazine
How Data (and Some Breathtaking Soccer) Brought Liverpool to the Cusp of Glory 
The club is finishing a phenomenal season — thanks in part to an unrivaled reliance on analytics.

This is in the tradition of Moneyball. Most of the data analytics team at Liverpool have physics Ph.Ds. It is led by Ian Graham who completed a Ph.D. on polymer theory at Cambridge.

On the one hand, I loved the article because my son and I are big Liverpool fans. We watch all the games, some in the middle of the night. On the other hand, I was a bit surprised that I liked the article since I am a strong critic of the use of metrics in most contexts, especially in the evaluation of scientists and institutions. However, I came to realise that, in many ways, what Liverpool is doing is not the blind use of metrics but rather using data as just one factor in making decisions.
Here are some of the reasons why this is so different from what now happens in universities.

1. The football manager (Jurgen Klopp, who has played and managed) is making the decisions, not someone who has never played or has had limited success with playing and managing (a board member or owner).

2. The data is just one factor in hiring decisions. For example, Klopp often spends a whole day with a possible new player to see what their personal chemistry is. Furthermore, he has watched them play (the equivalent of actually reading the papers of a scientist?).

3. A single metric (cf. goals scored, h-index, impact factor) is not being used to make a decision on who to recruit. Rather, many metrics are being used, to develop a complete picture. Furthermore, a major emphasis of the Moneyball approach is finding ``diamonds in the rough'', i.e. players who have unseen potential, either because their unique gifts are being overlooked (they are undervalued since they score poorly on conventional metrics) or because they would be a potent combination with other current players. The latter was a factor in the decision to recruit Salah; the data suggested he would be a particularly powerful partner for Firmino. On the former, the article discusses in detail the analysis that led to Liverpool recruiting the Guinean midfielder Naby Keita.
Keita’s pass completion rate tends to be lower than that of some other elite midfielders. Graham’s figures, however, showed that Keita often tried passes that, if completed, would get the ball to a teammate in a position where he had a better than average chance of scoring. What scouts saw when they watched Keita was a versatile midfielder. What Graham saw on his laptop was a phenomenon. Here was someone continually working to move the ball into more advantageous positions, something even an attentive spectator probably wouldn’t notice unless told to look for it. Beginning in 2016, Graham recommended that Liverpool try to get him.


What might be an analogue of this approach in science?
A person who does not attract a lot of attention but has a record of writing papers that stimulate or are foundational to significant papers of better-known scientists?
A person who does very good science even though they have few resources?
A person who is particularly good at putting together collaborations?

Other suggestions?

Tuesday, May 21, 2019

Public talk on emergence

Every year in Australia there is a week of science outreach events in pubs, Pint of Science. I am giving a talk  tomorrow night, Emergence: from physics to sociology.
Here are the slides.

In the past, when explaining emergence I have liked to use the example of geometry. However, one can argue that a limitation of that case is that there are not necessarily many interacting components to the system. Hence, I think the example of language, discussed by Michael Polanyi, is better.



Saturday, May 18, 2019

Phonons in organic molecular crystals

In any crystal the elementary excitations of the lattice are phonons. The dispersion relation for these quasi-particles relates their energy and momentum. This dispersion relation determines thermodynamic properties such as the temperature dependence of the specific heat and plays a significant role in electron-phonon scattering and superconductivity in elemental superconductors. A nice introduction is in chapter 13 of Marder's excellent text. [The first two figures below are taken from there].

The dispersion relation is usually determined in at least one of three different ways.

1. The classical mechanics of balls and harmonic springs, representing atoms and chemical bonds, respectively. One introduces empirical parameters for the strengths of the bonds (spring constants).

2. First-principles electronic structure calculations, often based on density functional theory (DFT). This actually just determines the spring constants in the classical model.

3. Inelastic neutron scattering.

The figure below shows the dispersion relations for a diamond lattice, calculated with method 1 using parameters relevant to silicon. I find it impressive that this complexity is produced with only two parameters.

Furthermore, it reproduces most of the details seen in the dispersion determined by method 3 (squares in the figure below), which also compares nicely with method 2 (solid lines below).
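As a toy illustration of method 1 (much simpler than the three-dimensional diamond-lattice calculation in the figure), here is a short Python sketch of the phonon dispersion of a one-dimensional chain with two atoms per unit cell connected by identical harmonic springs. The masses and spring constant are hypothetical, in arbitrary units.

import numpy as np

m1, m2, k, a = 1.0, 2.0, 1.0, 1.0      # hypothetical masses, spring constant, lattice constant

q = np.linspace(-np.pi / a, np.pi / a, 201)   # wavevectors across the first Brillouin zone
s = k * (1.0 / m1 + 1.0 / m2)
root = np.sqrt(s ** 2 - (4.0 * k ** 2 / (m1 * m2)) * np.sin(q * a / 2.0) ** 2)

omega_acoustic = np.sqrt(s - root)     # vanishes linearly as q -> 0 (sound)
omega_optical = np.sqrt(s + root)      # finite frequency at q = 0

print("acoustic branch at the zone boundary:", omega_acoustic[-1])          # sqrt(2k/m2) for m2 > m1
print("optical branch at q = 0             :", omega_optical[len(q) // 2])  # sqrt(2k(1/m1 + 1/m2))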

What about organic molecular crystals?
The following paper may be a benchmark.

Phonon dispersion in d8-naphthalene crystal at 6K 
I Natkaniec, E L Bokhenkov, B Dorner, J Kalus, G A Mackenzie, G S Pawley, U Schmelzer and E F Sheka

The authors note that method 3 is particularly challenging for three reasons.
  • The difficulties in growing suitable single-crystal samples. 
  • The high energy resolution necessary to observe the large number of dispersion curves (in principle there are 3NM modes, where N is the number of atoms per molecule and M is the number of molecules per unit cell). 
  • The high momentum resolution necessary to investigate the small Brillouin zone (due to the large dimensions of the unit cell).
The figure below shows their experimental data for the dispersions. The solid lines are just guides to the eye.

The authors also compare their results to method 1. However, the results are not that impressive, partly because it is much harder to parameterise the intermolecular forces, which are a mixture of van der Waals and pi-pi stacking interactions. Hence, crystal structure prediction is a major challenge.

A recent paper uses method 2. and compares the results of three different DFT exchange-correlation functionals to the neutron scattering data above.
Ab initio phonon dispersion in crystalline naphthalene using van der Waals density functionals
Florian Brown-Altvater, Tonatiuh Rangel, and Jeffrey B. Neaton


What I would really like to see is calculations and data for spin-crossover compounds.

Thursday, May 16, 2019

Introducing phase transitions to a layperson

I have written a first draft of a chapter introducing phase diagrams and phase transitions to a layperson. I welcome any comments and suggestions. Feel free to try it out on your aunt or uncle!

Tuesday, May 7, 2019

Fun facts about phonons

Today we just take it for granted that crystals are composed of periodic arrays of interacting atoms. However, that was only established definitively one hundred years ago.
I have been brushing up on phonons with Marder's nice textbook, Condensed Matter Physics.
There are two historical perspectives that I found particularly fascinating. Both involve Max Born.

In a solid the elastic constants completely define the speeds of sound (and the associated linear dispersion relation). In a solid of cubic symmetry, there are only three independent elastic constants, C_11, C_44, and C_12.
Cauchy and Saint-Venant showed that if all the atoms in a crystal interact through pair-wise central forces then C_44 = C_12. However, in a wide range of elemental crystals, one finds that C_12 is 1-3 times larger than C_44. This discrepancy caused significant debate in the 19th century but was resolved in 1914 by Born, who showed that angular forces between atoms can explain the violation of this identity. From a quantum chemical perspective, these angular forces arise because it costs energy to bend chemical bonds.
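To make the first sentence concrete: for a cubic crystal of mass density rho, the standard textbook results for sound propagating along the [100] direction are

\[ v_L = \sqrt{C_{11}/\rho}, \qquad v_T = \sqrt{C_{44}/\rho}, \]

so measuring the longitudinal and transverse sound speeds along different directions determines the elastic constants, and hence the slopes of the acoustic phonon branches near the zone centre.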

The first paper on the dynamics of a crystal lattice was by Born and von Karman in 1912. This preceded the famous x-ray diffraction experiment of von Laue that established the underlying crystal lattice. In 1965, Born reflected:
The first paper by Karman and myself was published before Laue's discovery. We regarded the existence of lattices as evident not only because we knew the group theory of lattices as given by Schoenflies and Fedorov which explained the geometrical features of crystals, but also because a short time before Erwin Madelung in Göttingen had derived the first dynamical inference from lattice theory, a relation between the infra-red vibration frequency of a crystal and its elastic properties.... 
Von Laue's paper on X-ray diffraction which gave direct evidence of the lattice structure appeared between our first and second paper. Now it is remarkable that in our second paper there is also no reference to von Laue. I can explain this only by assuming that the concept of the lattice seemed to us so well established that we regarded von Laue's work as a welcome confirmation but not as a new and exciting discovery which it really was.
This raises interesting questions in the philosophy of science. How much direct evidence do you need before you believe something? I can think of two similar examples from more recent history: the observation of the Higgs boson and gravitational waves. Both were exciting, and rightly earned Nobel Prizes.
However, many of us were not particularly surprised.
The existence of the Higgs boson made sense because it was a necessary feature of the standard model, which can explain so much.
Gravitational waves were a logical consequence of Einstein's theory of general relativity, which had been confirmed in many different ways. Furthermore, gravitational waves were observed indirectly through the decay of the orbital period of binary pulsars.

Wednesday, May 1, 2019

Emergence: from physics to international relations

Today I am giving a seminar for the School of Political Science and International Studies at UQ.
Here are the slides.


Thursday, April 25, 2019

Modelling the emergence of political revolutions

When do revolutions happen? What are the necessary conditions?
Here are the claims of two influential political theorists.

``a single spark can cause a prairie fire’’
Mao Tse Tung

 “it is not always when things are going from bad to worse that revolutions break out,... On the contrary, it often happens that when a people that have put up with an oppressive rule over a long period without protest suddenly finds the government relaxing its pressure, it takes up arms against it. … liberalization is the most difficult of political arts”
Alexis de Tocqueville (1856)

Is it possible to test such claims? What is the relative importance of levels of perceived hardship and government illegitimacy, oppression, penalties for rebellion, police surveillance, ...?

An important paper in 2002 addressed these issues.
Modeling civil violence: An agent-based computational approach 
Joshua M. Epstein

The associated simulation is available in NetLogo.
It exhibits a number of phenomena that can be argued to be emergent: they are collective properties that are not necessarily anticipated from the rules of the model. [A minimal sketch of the agents' decision rule is given at the end of this post.]

Tipping points
There are parameter regimes in which there are no outbursts of rebellion; small changes in parameters (for example, in the legitimacy of the government) can tip the system into widespread rebellion.

Free assembly catalyzes rebellious outbursts
Epstein argues that this is only understood ex post facto.

Punctuated equilibrium
Periods of civil peace interspersed with outbursts of rebellion.

Probability distribution of waiting times between outbursts
This distribution is not built explicitly into the model, which involves only uniform probability distributions.
[Terminology here is analogous to biological evolution].

Salami corruption
Legitimacy can fall much further incrementally than it can in one jump, without stimulating large-scale rebellion.
[I presume the origin of Epstein's terminology is that salami is sliced one thin slice at a time... Maybe a clearer analogy would be the proverbial frog in a pot of slowly heated water.]

de Tocqueville effect
Incremental reductions in repression can lead to large-scale rebellion. This is in contrast to incremental decreases in legitimacy.
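For readers who want a feel for how the model works, here is a minimal sketch (in Python) of the core decision rule of Epstein's agents: an agent becomes openly rebellious when grievance exceeds the perceived risk of arrest by more than a small threshold. The constants k and threshold are illustrative choices, not necessarily the values used in the paper.

import math

def rebels(hardship, legitimacy, risk_aversion,
           cops_in_vision, actives_in_vision, k=2.3, threshold=0.1):
    # Grievance = hardship x (1 - perceived legitimacy of the government).
    grievance = hardship * (1.0 - legitimacy)
    # The estimated arrest probability rises with the local ratio of cops to active rebels.
    arrest_prob = 1.0 - math.exp(-k * cops_in_vision / max(actives_in_vision, 1))
    net_risk = risk_aversion * arrest_prob
    return grievance - net_risk > threshold

# A highly aggrieved agent surrounded by other rebels and only one cop:
print(rebels(hardship=0.9, legitimacy=0.2, risk_aversion=0.3,
             cops_in_vision=1, actives_in_vision=5))   # True

The full simulation iterates a rule like this for every agent on a lattice, with cops arresting active agents; the collective phenomena listed above are properties of those iterations, not of the rule itself.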

Monday, April 22, 2019

Ten years of blogging!

I just realised that last month I had been blogging for ten years.
On the five year anniversary, I reflected on the influence that the blog has had on me.
I don't have much to add to those reflections. The second five years has not been as prolific but has been just as enriching and I am grateful for all the positive feedback and encouragement I have received from readers.

Wednesday, April 17, 2019

The emergence of social segregation

Individuals have many preferences. One is that we tend to like to associate with people who have some commonality with us. The commonality could involve hobbies, political views, language, age, wealth, ethnicity, religion, values, ... But some of us also enjoy a certain amount of diversity, at least in certain areas of life. We also have varying amounts of tolerance for difference.
A common social phenomenon is segregation: groups of people clump together, in physical neighbourhoods or in online communities, with those similar to them. Examples range from ethnic ghettos and teenage cliques to "echo chambers" on the internet.

The figure below shows ethnic/racial segregation in New York City. It is taken from here.



In 1971 Thomas Schelling published a landmark paper in the social sciences. It surprised many because it showed how small individual preferences for similarity can lead to large-scale segregation. The context of his work was the emergence of racially segregated neighbourhoods in cities in the USA.

One version of Schelling's model is the following. Take a square lattice in which each site can be black, white, or vacant. Fix the relative densities of the three and begin with a random initial distribution. A person is "unhappy" if 2 or fewer of their 8 neighbours (nearest and next-nearest neighbours) on the lattice are like them. [That is, they have a 25% threshold for moving.] Unhappy people move to a nearby vacancy. After many iterations/moves an equilibrium is reached in which everyone is "happy", but there is significant segregation.

The figure is taken from here.
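Here is a minimal sketch of this version of the model in Python. The lattice size, the densities, the periodic boundary conditions, and moving to a randomly chosen vacancy (rather than a nearby one) are simplifications of my own, not Schelling's exact procedure.

import numpy as np

rng = np.random.default_rng(0)
L = 50                                  # lattice size (illustrative)
THRESHOLD = 3                           # "happy" if at least 3 of the 8 neighbours are the same colour
grid = rng.choice([0, 1, 2], size=(L, L), p=[0.10, 0.45, 0.45])   # 0 = vacant, 1 and 2 = the two groups

def is_unhappy(grid, i, j):
    # Count like neighbours among the 8 nearest and next-nearest sites (periodic boundaries).
    me = grid[i, j]
    likes = sum(grid[(i + di) % L, (j + dj) % L] == me
                for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0))
    return likes < THRESHOLD

for sweep in range(200):
    movers = [(i, j) for i, j in np.ndindex(L, L)
              if grid[i, j] != 0 and is_unhappy(grid, i, j)]
    if not movers:
        break                           # everyone is "happy": equilibrium reached
    for (i, j) in movers:
        vacancies = np.argwhere(grid == 0)
        vi, vj = vacancies[rng.integers(len(vacancies))]
        grid[vi, vj], grid[i, j] = grid[i, j], 0    # move the unhappy agent to a vacancy

At the end one can measure the average fraction of like neighbours of the occupied sites; it comes out well above the 3-out-of-8 that an individual demands, which is the point of the post.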

A major conclusion is that motives at the individual level are not the same as the outcomes at the macro level. People may be very tolerant of diversity (e.g. only have a preference that 30 per cent of their neighbours be like them) but collectively this results in them living in very segregated neighbourhoods.

There are several variants of the model, which Schelling presented in later papers and in an influential book, Micromotives and Macrobehavior, published in 1978. He received the Nobel Prize in Economics in 2005 for his work in game theory.

There is a nice simulation of the model in NetLogo. For example, you can see how setting the individual preference for similarity at 30% leads to a local similarity of 70%.
In the Coursera course Model Thinking, Scott Page has a helpful lecture about the model.

This can be considered to be the first agent-based model. It is fascinating that Schelling did not use a computer but rather did his ``simulation'' manually on a checkerboard!

Physicists have considered variants of Schelling's model that can be connected to more familiar lattice models from statistical mechanics, particularly the Ising model. Examples include

Ising, Schelling and self-organising segregation 
D. Stauffer and S. Solomon

Phase diagram of a Schelling segregation model
L. Gauvin, J. Vannimenus, J.-P. Nadal
This connects to classical spin-1 models such as the Blume-Capel model (a schematic version of the correspondence is sketched after this list).

A unified framework for Schelling's model of segregation 
Tim Rogers and Alan J McKane

Competition between collective and individual dynamics 
Sébastian Grauwin, Eric Bertin, Rémi Lemoy, and Pablo Jensen
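Schematically, one way to phrase the connection (not necessarily the precise mapping used in the papers above) is to assign a spin-1 variable s_i to each lattice site, with s_i = +1 and s_i = -1 labelling the two groups and s_i = 0 a vacancy. The segregation problem is then related to a Blume-Capel-type Hamiltonian,

\[ H = -J \sum_{\langle ij \rangle} s_i s_j + D \sum_i s_i^2, \]

where J > 0 favours like neighbours (and hence segregated domains) and D controls the density of vacancies.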

Schelling's model is a nice example of emergence in a social system. A new entity [highly segregated neighbourhoods] emerges at the level of the whole system, one that was not anticipated from a knowledge of the properties of the components of the system.