Friday, December 27, 2019

Question your intuitions and preconceptions

My economist son often listens to the podcast Conversations with Tyler, hosted by the economist Tyler Cowen. We recently listened to a conversation with Esther Duflo, who shared the 2019 Nobel Prize in Economics. Like most episodes, it covers a wide range of territory, from development economics to Indian classical music to parenting. I highly recommend it.

Perhaps the bit that was most striking for me was the following exchange.
What advice do you give to your talented undergraduates that differs from the advice your colleagues would give them?  
I give almost all of them the advice to take some time off, in particular if they have any interest in development, which is generally the reason why they come to see me in the first place. But even if they don’t really, to spend a year or two in a developing country, working on a project. Not necessarily inner city. Any project spending time in the field.  
It’s only through this exposure that you can learn how wrong most of your intuitions are and preconceptions are. I can tell it to them till they are blue in the face to not let themselves be guided by what seems obvious to them. But until they’ve confronted what they think is obvious to something entirely different, then it’s not clear.
I think this relates to profound differences (cultural and economic and experiential) between rich countries and Majority World countries. Culture is what you assume is normal and unquestioned.

Thursday, December 12, 2019

John Wilkins (1936-2019): condensed matter leader

I was sad to hear last week of the death of John Wilkins. He was a mentor to a whole generation of condensed matter physicists and a generous servant of both individuals and institutions. This obituary and memories from some colleagues give a nice description of his many contributions.

I was privileged to do a postdoc with Wilkins at Ohio State University in the early 1990s. He had a significant influence on me, both scientifically and professionally. Much of the practical advice I write on this blog relating to jobs, writing, and giving talks, I learned from Wilkins. Even ten years after I worked with him I would still occasionally phone him for advice, particularly with negotiating and deciding on job offers.

Real leadership does not involve having a position, but rather having influence. Servant leaders are not concerned with advancing their own interests, but rather those of others in their community. They do this by investing in people and institutions. Wilkins did this in many ways. He invested heavily in his own graduate students and postdocs. He advised and mentored countless other students, postdocs, and young faculty, for whom he had no formal responsibility or anything to gain from their success. He was proud of the fact that he never held an administrative position in a university. Nevertheless, his influence was far greater than most department chairs and deans. He served the American Physical Society in countless ways, particularly their publishing activities and the Division of Condensed Matter Physics. He wrote innumerable reference letters, referee reports, and grant reviews.

Reflecting on Wilkins, I was reminded of these recent words of David Brooks, written in a different context.
I had a feeling of going back in time. Why did it feel so strange? It was because I was looking at people who are not self-centered. They’ve dedicated themselves to the organization that formed them, and which they serve.
A few other basic but important things I learned from Wilkins:
Write clearly. Rewrite. Talk to people. Theory should relate to real materials and real experiments. Defining the problem clearly can be an important contribution. A concrete calculation on a concrete model is valuable.

Wilkins did have significant scientific achievements, but they tend to get dwarfed in comparison to his influence on people. Perhaps the most significant relate to the Kondo problem. This began with his student Krishnamurthy, who used Wilson's numerical renormalisation group to understand all the different regimes of the Anderson single impurity model. Later, with his students Dan Cox and Gene Bickers, Wilkins applied slave boson techniques to describe a wide range of experimental properties of valence fluctuations associated with magnetic impurities in metals.

In classic Wilkins style, he convened a group of distinguished theorists to meet in Los Alamos one summer to write a definitive early review article on heavy fermions.

Wilkins was larger than life. He laughed a lot and was a tease. He could also be intimidating. Before his group's annual pilgrimage to the APS March meeting, everyone had to give a practice talk to the group and Wilkins. A fellow postdoc confided to me that each year he was more nervous about giving the practice talk than the real talk! One time, Wilkins got frustrated that too many of us had small fonts on our overhead transparencies. He made us all chant together: ``22 point type is the smallest! 22 point type is the smallest! ...'' again and again until we got the point.

It was well known that Wilkins did not like having his picture taken. On his department web page he put a picture of another John Wilkins, one of the founders of the Royal Society. However, my wife did not know about this aversion. Around 1992, Kevin Ingersent hosted a group Thanksgiving dinner at his house. Later, to my shock, I discovered my wife had taken the photo below. ``What?! You took a photo of Wilkins?!''


Wilkins was a great role model as a scientist, a faculty member, and a servant of a professional community.

Tuesday, December 10, 2019

Mathematics, biology, and emergence

Last night I heard a model public lecture about science. The School of Mathematics and Physics at UQ hosted a public lecture at the Queensland State Library. Holly Krieger, a pure mathematician at Cambridge, spoke on the Mathematics of Life. This is part of a biannual lecture series endowed by Kurt Mahler.

The lecture was amazing, both in content and presentation. It was engaging for high school students, and stimulating for experts. I wish I had a video or a copy of the slides. Krieger is well known to some through her Numberphile videos on YouTube.
Here are a few things I learned in the lecture.

Mathematics is the language of relationships and patterns.

We forget how even the concept of numbers is abstract. The notion of functions even more so.

An underlying theme of the lecture was that of emergence: a simple rule describing the interactions between the components of a system leads to collective behaviour (complexity) of the whole system.

Examples were given from biological systems that raise the question: how does the system know to do this?

Swarms of starlings were shown in the short film, The art of flying by Jan van IJken.
How do they move in concert when there is no leader?



Other examples included ant bridges, an experiment with a slime mould that was able to replicate the Japanese transport network (here is the Canadian version), and stripes and spots on animals (pattern formation explained with coupled reaction-diffusion equations by Alan Turing).

To illustrate how simple rules lead to complex behaviour, several cellular automata were demonstrated starting with Pascal's triangle and Sierpinski triangle. The latter was connected to biology through the pattern on the shell of a (poisonous) cone snail.

Rule 30 produces patterns similar to those found on the shell. It has periodic patterns such as stripes and aperiodic chaotic patterns.
It seems the new Cambridge train station also has this pattern!
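To give a sense of how little machinery these patterns need (this is my own illustration, not code from the lecture), an elementary cellular automaton is just a lookup table on three neighbouring cells. Rule 90 generates the Sierpinski triangle (Pascal's triangle mod 2) from a single live cell, and simply changing the rule number to 30 gives the chaotic, cone-snail-like pattern:

```python
def step(row, rule):
    # One update of an elementary cellular automaton.
    # Each new cell is the bit of `rule` indexed by its three old
    # neighbours (left, centre, right); cells off the edge count as 0.
    padded = [0] + row + [0]
    return [(rule >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

def run(rule, steps):
    # Evolve from a single live cell in the middle of an empty row.
    row = [0] * steps + [1] + [0] * steps
    history = [row]
    for _ in range(steps):
        row = step(row, rule)
        history.append(row)
    return history

# Rule 90 reproduces Pascal's triangle mod 2 (the Sierpinski triangle):
for row in run(90, 4):
    print("".join(".#"[c] for c in row))
```

Replacing 90 with 30 in the last line gives rows that quickly become aperiodic, which is the pattern Krieger connected to the cone snail shell.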


Rule 184 can describe traffic including jamming for medium traffic densities.
The occurrence of a traffic jam does not depend on the initial state or a particular car, but only depends on the density of cars and the interaction (rule) between cars.
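To make the density claim concrete, here is a minimal sketch (my own toy example, with an arbitrary initial configuration) of Rule 184 on a ring of cells: each car advances one cell per time step if and only if the cell ahead is empty.

```python
def rule184_step(road):
    # Synchronous Rule 184 update for traffic on a ring:
    # a car (1) moves right if the next cell is empty (0), else it waits.
    n = len(road)
    new = [0] * n
    for i in range(n):
        if road[i] == 1:
            if road[(i + 1) % n] == 0:
                new[(i + 1) % n] = 1  # car advances
            else:
                new[i] = 1            # car is blocked and stays put
    return new

# Density 3/8 < 1/2: an initial clump of cars spreads out into free flow.
road = [1, 1, 1, 0, 0, 0, 0, 0]
for _ in range(10):
    road = rule184_step(road)
print(road)  # every car now has an empty cell ahead of it
```

With density above 1/2 (say seven cars on the eight-cell ring) there are never enough gaps, so a jam, a block of cars each waiting for the one ahead, persists forever and drifts backwards, just like the shockwave in the video below.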

A nice video was shown of a traffic shockwave.
When water flows from a tap (faucet) and hits a flat sink bottom at right angles it may produce a "hydraulic jump" such as that shown below.



That is just the first half of the lecture. I may blog later about the second half which concerned chaos,
defined as small initial changes leading to significant changes in outcome.

One of the most interesting things for me about the lecture was Krieger's claim that ``Emergent complexity isn't everywhere. It can be hard to detect or confirm.'' That is, just because we see complex behaviour (patterns) does not mean that it is due to emergence. In question time she said that this was in response to some of Wolfram's grand claims in A New Kind of Science, along the lines that everything (consciousness, gravity, continuity, free will, ...) could be explained in terms of discrete computational models such as cellular automata.

I think a more nuanced view is necessary. I agree, along with many others, that Wolfram's grand claims are not justified. But, I do not equate emergent complexity solely with simple rule-based computational models such as cellular automata. Different people do define emergence differently. For example, Sophia and Steve Kivelson propose the following definition.

An emergent behavior of a physical system is a qualitative property that can only occur in the limit that the number of microscopic constituents tends to infinity.

This would rule out classifying most of the phenomena described in the lecture as emergent. I disagree with this definition. On the other hand, I am not sure I agree with Krieger's claim. I do think almost anything interesting is emergent: consciousness, critical phenomena, the vacuum in quantum field theory, superconductivity, ...

Wednesday, December 4, 2019

A culture of fear in universities?

Following the fall of the Berlin Wall, one incredible revelation was the expansive role of the secret police, the vast network of informers, and the level of personal surveillance. This was underscored for me by movies such as The Lives of Others, novels such as The Day of the Lie, and a seminar I attended about human rights abuses in Syria.

The survival of totalitarian regimes is facilitated by the regime creating a culture of fear at every level of society and institutions, from factories to families. You do not dare to question or criticise the regime. Even making a joke at work may send you to the gulag.

Over the last decade, I have noticed a cultural shift in universities: in many different contexts there seems to be a culture of fear. A few examples are below.
I should be clear that I am not suggesting that universities today are anything like Syria, China, or the former Soviet Union.
Nevertheless, it is worth reflecting on whether there is a culture of fear and what its implications are for productivity, job satisfaction, and the integrity of the institution.
Some of this fear is created by the hyper-competitive environment. Some results from managers' lust for power and control.

I won't criticise the new policy just announced by my department chair because I don't want to tick them off before my promotion decision (or request for more lab space, sabbatical request, ...)

I won't ask my supervisor that question because she might think I am dumb.

I won't write that in the paper because Professor X, who may be a referee, won't like it. I need to get this paper in a ``high impact'' journal.

I won't write that on my blog because it may offend potential grant reviewers.

I won't publicly criticise the latest crazy scheme of senior management because they may make it difficult for me to get promoted.

If I don't work on the latest fad topic I won't get lots of citations. Then I won't get funding/tenure/job...

If I don't publish in luxury journals I won't get funding/tenure/job...

In my paper, I won't talk about the limitations of my results or techniques because then the paper may not get published.

If I don't engage in hype I won't get funding.

I will do anything my boss wants. If I don't I may not get the superlative letter of reference I need to get my next job.

What do you think?
Is there a culture of fear?
If so, can you think of other examples?

Addendum. I should have made some constructive suggestions. I think that senior faculty have a responsibility to make their research groups safe spaces and environments and to not give way to the climate of fear.

Monday, December 2, 2019

Ising model basics

The Ising model is a paradigm in both statistical mechanics and condensed matter physics. Today for most theorists it is so familiar that some of its historical and conceptual significance is lost.
Previously, I posted about what students can learn from computer simulations of the Ising model.

If you had to talk about the Ising model to an experimental chemist what would you say?
[Last week I had to do this].

The Ising model is the simplest effective model Hamiltonian that can describe a thermodynamic system that undergoes a first-order phase transition and has a phase diagram containing a critical point.

On each site i of a lattice one defines a spin sigma_i= +1 or -1, representing spin up or spin down.

The Hamiltonian H is

H = - sum_ij J_ij sigma_i sigma_j - h sum_i sigma_i

J_ij describes the interaction between spins on sites i and j. In the simplest version the interactions are only between nearest neighbours, and have the same value J.
h is the external magnetic field.

If J is positive, the ground state at h=0 is a ferromagnet.
If J is negative, the ground state at h=0 is an anti-ferromagnet for a bipartite lattice.

[Caution: just like for the Heisenberg model, some authors define the Hamiltonian with the opposite sign of J].

For h=0 there is a critical point at a finite temperature Tc, for lattices of dimension two and higher.
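As an illustration (my own toy calculation, not from any particular text), exact enumeration of a small periodic lattice already shows the ferromagnetic ordering at low temperature; on a finite lattice the transition is of course smeared into a crossover:

```python
from itertools import product
from math import exp

def avg_abs_magnetization(L, J, T):
    # Exact enumeration of the L x L periodic Ising model at h = 0,
    # computing <|m|> = sum_s |m(s)| exp(-E(s)/T) / Z  (units k_B = 1).
    Z = 0.0
    m_acc = 0.0
    for spins in product((-1, 1), repeat=L * L):
        E = 0.0
        for i in range(L):
            for j in range(L):
                s = spins[i * L + j]
                # Count each bond once via the right and down neighbours.
                E -= J * s * spins[i * L + (j + 1) % L]
                E -= J * s * spins[((i + 1) % L) * L + j]
        w = exp(-E / T)
        Z += w
        m_acc += abs(sum(spins)) / (L * L) * w
    return m_acc / Z

# Well below the bulk Tc ~ 2.27 J the spins are almost fully aligned;
# well above it |m| is small (nonzero only because the lattice is finite).
print(avg_abs_magnetization(3, 1.0, 0.5))
print(avg_abs_magnetization(3, 1.0, 10.0))
```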

The spins sigma_i= +/- 1 defined at each lattice site i, were originally to represent the atomic magnetic moments in a ferromagnetic material. However, the sigma's can represent any two states of the site i. For example, the ``spin'' or pseudo-spin can represent the presence or absence of an atom or molecule in a ``lattice gas'', atom A or atom B in a binary alloy (mixture), or the low-spin and high-spin states in a spin-crossover material.

The mean-field theory of the Ising model is mathematically equivalent to the thermodynamic theory of binary mixtures with an entropy of an ideal mixture.
There is a nice discussion of such mixtures in Section 5.4 [and the associated problems] of Introduction to Thermal Physics by Schroeder.
[Here are the slides for a lecture I have given based on that text].
Chapter 15 of the text by Dill and Bromberg is also helpful as it has more detail.
Neither text makes an explicit connection to the Ising model. Following this paper on alloys, one has

This is shown in Section 8.1.2 of James Sethna's text, Statistical Mechanics: Entropy, Order Parameters, and Complexity.
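To spell out the correspondence (in my own notation, which may differ from the texts above): substituting the concentration x = (1+m)/2 into the mean-field Ising free energy per site at h = 0 gives the regular-solution form

```latex
f(x) = k_B T \left[ x \ln x + (1-x)\ln(1-x) \right]
       + \chi \, x(1-x) + \mathrm{const},
\qquad \chi = 2 z J .
```

The ideal mixing entropy comes from the spin entropy, and the interaction parameter chi comes from the Ising coupling; the critical point for phase separation of the mixture then sits at k_B T_c = chi/2 = z J, the mean-field Ising result.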

When interactions beyond nearest-neighbours are included in the Ising model or when the lattice is frustrated (e.g. fcc or triangular) a richer phase diagram is possible. Examples include the ANNNI model and some models for spin-state ice considered by Jace Cruddas and Ben Powell.

Monday, November 25, 2019

Mental health matters

My mental health this year has been up and down. It is not particularly clear why I have struggled at times, given the sources of stress were not particularly bad. Thankfully, now I am the best I have been all year. This may be because I have been quite proactive in taking action. First, there are the basics: adequate sleep, downtime, exercise, and diet. At one point I also cut out all caffeine and alcohol. I also went to the psychologist several times, did more mindfulness exercises, and increased my medication, in consultation with my doctor.

This experience underscores some of the complexities and associated poor understanding of both mental illness and healing. There are biomedical, psychological, social, and spiritual dimensions. There is a high causal density, just like in public policy. Why did I get worse? Why did I get better? As a patient, I don't want to do a series of clinical trials on myself and just change one variable, one after the other. It is better to attack the problem by doing a lot of things that are generally believed to help.

Several people have brought to my attention a series of recent articles in Nature about the mental health of Ph.D. students. These include the following.

Nature’s survey of more than 6,000 graduate students reveals the turbulent nature of doctoral research. 
This stimulated an Editorial,
The mental health of PhD researchers demands urgent attention 
``Anxiety and depression in graduate students is worsening. The health of the next generation of researchers needs systemic change to research cultures.''

Both articles are worth reading, but depressing.

On the one hand, I think it is wonderful Nature is publicising the issue. On the other hand, to me, it is a case of corporate well-washing: where companies pass off responsibility for a problem they have helped create onto their employees or customers. Universities do similar things. If I were asked to name a for-profit company that has had a negative influence on ``research culture'' over the past two decades, it would be Nature Publishing Group, hands down!

Monday, November 18, 2019

Was Landau the first condensed matter theorist?

Expert readers: please note this post is written for the general audience of a Very Short Introduction. General comments welcome.

Condensed matter physics is not just defined by the objects it studies: condensed states of matter. Rather, the field is also defined by a particular approach. The focus is on finding unifying concepts and organizing principles to address fundamental questions concerning a wide range of phenomena in materials that are chemically and structurally diverse. This approach means looking at the different scales (length, time, and energy) associated with phenomena. In particular, CMP often looks at scales intermediate between the macroscopic and atomic scales. I argued before that in this sense Kamerlingh Onnes was the first condensed matter experimentalist. In a similar sense, Lev Landau (1908-1968) is arguably the first condensed matter theorist, with three papers that he published in 1937 marking the beginning of theoretical CMP.

Landau lived in the Soviet Union and his 1937 papers were almost his last, because in 1938 he was arrested for comparing Stalinism to Nazism. The Institute Director, Pyotr Kapitsa, personally wrote to Stalin, to no avail. After a year Kapitsa wrote to Molotov (of cocktail fame!), then the nominal head of government, arguing that Landau was indispensable to ``clear up one of the most puzzling areas in modern physics.'' In the year following his release, Landau developed a theory to explain many of the experimental results on superfluid helium that Kapitsa had obtained. (A nice thank you present!) Landau made notable contributions in all areas of theoretical physics, not just condensed matter. Wikipedia lists more than twenty separate entries describing results, equations, or phenomena that bear Landau's name. With his former student, Evgeny Lifshitz, Landau co-authored a classic nine-volume series, Course of Theoretical Physics, that is still a standard reference today. Landau also founded a School of Theoretical Physics that produced a plethora of distinguished theoretical physicists. Tragically, Landau's scientific career ended after a terrible car accident when he was fifty-three years old. He died six years later from injuries associated with the accident. In 1962, Landau was awarded the Nobel Prize in Physics for his work on the theory of superfluidity.


                                              Landau and Kapitsa in 1948.

Landau’s first 1937 paper was concerned with developing the simplest possible theory that could describe the properties of a material near a critical point in the phase diagram, such as associated with a liquid-gas transition or a ferromagnet. A key assumption was that most of the microscopic details, such as the chemical composition of the material, don’t matter much. Landau introduced an order parameter to quantify the amount of ordering present and the symmetry of the ordering. The order parameter varies with temperature and other external parameters such as pressure or magnetic field. It is only non-zero in the ordered state. Landau wrote down the simplest form for the (free) energy of the system as a function of the order parameter. It turns out that symmetry significantly constrains the possible forms for this function. Furthermore, the form is qualitatively different for temperatures above and below the critical temperature. From this simple theory, Landau obtained results for how the order parameter varies with temperature and how there should be a jump (discontinuity) in the specific heat at the critical temperature. What was particularly important was the idea of universality: that most of the microscopic details did not matter and that a wide range of materials and states of matter should have similar properties. Furthermore, the ideas in this paper were foundational for later theories of the critical point.
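In the simplest textbook form of the theory (a standard modern presentation, not Landau's original notation), the free energy near the critical temperature T_c is expanded in the order parameter m, and symmetry (here, m and -m must be equivalent) forbids odd powers:

```latex
F(m) = F_0 + a\,(T - T_c)\,m^2 + b\,m^4, \qquad a, b > 0 .
```

Minimising F gives m = 0 above T_c and a nonzero order parameter below it that grows continuously from zero as the square root of (T_c - T), and the specific heat jumps by a^2 T_c / (2b) at the transition.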

A significant achievement of Landau’s approach to phase transitions was that in 1950, together with Vitaly Ginzburg (1916 - 2009), Landau proposed a theory that could describe many of the properties of superconductors, including how they behaved in the presence of a magnetic field and in thin films. For this work, Ginzburg shared the Nobel Prize in Physics in 2003. Although the Ginzburg-Landau theory could explain a wide range of superconducting phenomena, it left many questions unanswered, including the actual nature and origin of the ordering associated with the superconducting state.

The Ginzburg-Landau theory suggested that the relevant symmetry was a particular symmetry associated with electromagnetism: gauge symmetry. This is a rather abstract concept, but one can give a simple example that may help. With regard to electricity we are familiar with voltage: for example, a 9-volt battery, or a 240-volt appliance. The voltage refers to the electric potential energy; the larger the voltage the stronger the electrical driving force. Voltages are all relative, i.e., they are defined relative to some reference. What is physical is differences in voltage. This is similar to how gravitational potential energy (or elevation) is always defined relative to some reference height, e.g. the floor of the room, sea level, the center of the earth. There is also a gauge symmetry associated with magnetic fields and quantum theory but these are both more complicated and abstract. Later I will discuss experimental manifestations of this breaking of gauge symmetry.

[I am mindful that there are many subtleties about what the ordering and the broken symmetry actually are. For example, this is a breaking of a global gauge symmetry not a local one (which is not allowed by Elitzur's theorem). However, such subtleties are beyond a general audience].

Do you agree that in the sense I discuss Landau was the first condensed matter theorist?
Perhaps it should be van der Waals?

Any corrections?

Any suggestions on how to make this more accessible to a general audience?

Wednesday, November 13, 2019

Deciding what to do after the thesis

Finishing a thesis (honours, masters, or PhD) can be exhausting: physically, emotionally, and intellectually. When you finally submit it, the last thing you want to do is look at it again or reflect on the experience. Unfortunately, many students do not have a break and soon they are caught up in a job search or starting a new job. Furthermore, it is easy for students to default to the academic track: Masters, PhD, postdoc1, postdoc2, ....

I have posted before about how the privileged few who get tenure may not make the most of transitions within an academic career. Here the focus is on students.

After a well-earned break, it is worth reflecting on the following questions, particularly before deciding what you might do next and how to make that a positive experience.

What are some things you enjoyed? did not enjoy?

What do you think you did well? not well?

What did you learn about yourself, particularly your strengths and weaknesses?

What did you learn about those you worked with: advisors, collaborators, other students, research group members?

If you had your time over again what would you do differently?

Given your answers to all of the above, what does this suggest are some good (bad) options for the future?

Monday, November 11, 2019

Tuning the dimensionality of spin-crossover compounds

An important question concerning spin-crossover compounds concerns the origin and the magnitude of the interactions between the individual molecular units.

There is a nice paper
Evolution of cooperativity in the spin transition of an iron(II) complex on a graphite surface
Lalminthang Kipgen, Matthias Bernien, Sascha Ossinger, Fabian Nickel, Andrew J. Britton, Lucas M. Arruda, Holger Naggert, Chen Luo, Christian Lotze, Hanjo Ryll, Florin Radu, Enrico Schierle, Eugen Weschke, Felix Tuczek, and Wolfgang Kuch

An impressive achievement is the control of the number of monolayers (ML) of SCO molecules deposited on a highly oriented pyrolytic graphite surface. The coverage varies between 0.35 and 10 ML. The shape of the spin-crossover curve changes significantly as the number of monolayers varies, as shown in the upper panel below.

The natural interpretation is that as the number of monolayers increases the interaction between molecules (co-operativity) increases. This can be quantified in terms of the parameter Gamma in the Slichter-Drickamer model [which is equivalent to a mean-field treatment of an Ising model], with Gamma = 4 z J where z=number of nearest-neighbours and J=Ising interaction.
The blue curve in the lower panel shows the variation of Gamma with ML.

The figure above and Table 1 show that Gamma = -0.44 kJ/mol for ML=0.35, is almost zero for ML=0.7, and then monotonically increases to 2.1 kJ/mol for the bulk.

Does that make sense?

The magnitude of the Gamma values is comparable to those found in other compounds.

The negative value of Gamma for ML=0.35 might be explained as follows. Suppose a monolayer consists of SCO molecules arranged in a square lattice. Then ML=0.33 will consist of chains of SCO molecules that interact in the diagonal direction. If the J_nnn for this next-nearest neighbour interaction is negative then the Gamma value will be negative.

For a monolayer on a square lattice, Gamma= 16 (J_nn + J_nnn). J_nn will be positive and so if it is comparable in magnitude to J_nnn then Gamma will be small for a monolayer.

For a bilayer, Gamma = 16 (J_nn + J_nnn) + 4 J_perp, where J_perp is the interlayer coupling.
For the bulk, Gamma = 16 (J_nn + J_nnn) + 8 J_perp.
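Putting these mean-field counting formulas into a few lines of code (with illustrative coupling values of my own choosing, not fitted to the paper) shows how the trend from small or negative to increasingly positive Gamma can emerge:

```python
def gamma(J_nn, J_nnn, J_perp, geometry):
    # Mean-field cooperativity Gamma = 4 z_eff J for a square lattice:
    # 4 nearest and 4 next-nearest neighbours in the plane, plus
    # interlayer neighbours depending on the geometry.
    g = 16 * (J_nn + J_nnn)
    if geometry == "monolayer":
        return g
    if geometry == "bilayer":
        return g + 4 * J_perp  # one interlayer neighbour
    return g + 8 * J_perp      # bulk: two interlayer neighbours

# Illustrative values (kJ/mol): ferro in-plane nn, antiferro nnn diagonal.
J_nn, J_nnn, J_perp = 0.2, -0.15, 0.1
for geom in ("monolayer", "bilayer", "bulk"):
    print(geom, gamma(J_nn, J_nnn, J_perp, geom))
```

A nearly cancelling J_nn + J_nnn keeps the monolayer Gamma small (and a slightly larger |J_nnn| would make it negative), while each added interlayer coupling pushes Gamma up towards the bulk value.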

This qualitatively explains the trends, but not quantitatively.

The authors also note that the values of Delta E and Delta S obtained from their data vary little with the coverage, as they should, since these parameters are single-molecule properties. This also means that the crossover temperature T_sco varies little with coverage.

A more rigorous approach is to not use mean-field theory, but rather consider a slab of layers of Ising models. The ratio of the transition temperature T_c to J_nn increases from 2.27 for a single layer to 4.5 as the dimensionality increases from d=2 to d=3.
[In contrast, for mean-field theory the ratio increases from 4 to 6].

If the crossover temperature T_sco is larger than T_c [as it must be if there is no hysteresis], and assuming J_nn does not change with coverage, then as the coverage increases the crossover temperature becomes closer to the critical temperature, and the transition curve will become steeper, reflected in a smaller transition width Delta T (and a correspondingly larger effective Gamma in the Slichter-Drickamer fit). This claim can be understood by looking at the last Figure in this post.

Thursday, November 7, 2019

Oral exams need not be like a visit to the dentist

Oral exams (vivas) are quite common for most postgraduate degrees involving research. The basic goal is to provide an efficient mechanism for the examiners to determine a student's level of understanding of what they have done. Most committees comprise both experts and non-experts. Most are actually quite friendly. If the non-experts learn something new they will be happy. Sometimes an examiner may ``grill'' a student simply because they want to understand what is going on. I think the main reason things occasionally get tense is when there is a member of the committee who has a poor relationship with the student's advisor or doesn't think much of their research.

To prepare, take any opportunity to attend another student's oral exam, or ask other students what questions they were asked and whether they have any tips.

Some common mistakes that students make are to assume:

Everyone on the committee has read the thesis in detail.

The committee is going to ask highly technical and nuanced questions.

Committee members don't appreciate that I am nervous.

If I can't answer a question it is a disaster.

I should put a positive spin on everything I have done.

The questions asked are usually along the lines of the following.

What is the most important result that you obtained?

How is this work original?

What is the biggest weakness of your approach?

What direction would you suggest for a student who wishes to build on your work?

What are your plans to publish this work?

Any other suggested advice?

Saturday, November 2, 2019

Academic publishing in the Majority World

I was asked for an update on this. The challenges are formidable, but not insurmountable.
Here are slides from a talk on the subject.
As always, it is important not to reinvent the wheel.
There are already some excellent resources and organisations. 

A relevant organisation is AuthorAID which is related to inasp, and has online courses on writing. People I know who have taken these courses, or acted as mentors, speak highly of them.

Authors should also make use of software to correct English such as Grammarly.

Publishing Scientific Papers in the Developing World is a helpful book, stemming from a 2010 conference.
Erik Thulstrup has a nice chapter "How should a Young Researcher Write and Publish a Good Research Paper?"

Friday, November 1, 2019

The central role of symmetry in condensed matter

I have now finished my first draft of chapter 3 of Condensed Matter Physics: A Very Short Introduction.


I welcome comments and suggestions. However, bear in mind my target audience is not the typical reader of this blog, but rather your non-physicist friends and family. 
I think it still needs a lot of work. The goal is for it to be interesting, accessible, and bring out the excitement and importance of condensed matter physics.

This is quite hard work, particularly to try and explain things in an accessible manner.
I am also learning a lot.

I have a couple of basic questions.

How do the symmetries of the rectangular lattice and the centred rectangular lattice differ?

When was the crystal structure of ice determined by X-ray diffraction?
[Pauling proposed the structure in 1935.]


Thursday, October 24, 2019

Many-worlds cannot explain fine tuning

There are several independent lines of argument that are used to support the idea of a multiverse: the many-worlds interpretation of quantum mechanics, the ``landscape problem'' in string theory, and the fine-tuning of fundamental physical constants. Previously, I wrote about four distinct responses to the fine-tuning of the cosmological constant.

I was recently trying to explain the above to a group of non-physicists. One of them [Joanna] had the following objection that I had not heard before. Schrodinger's cat can only exist in one universe within the multiverse. The multiverse involves zillions of universes. However, because of fine-tuning carbon-based life is so improbable that it can only exist in one (or maybe a handful?) of the universes, within the multiverse. Thus, when one observes whether the cat is dead or alive, and the universe ``branches'' into two distinct universes, one with a dead cat and the other with a living cat, there is a problem. It is possible that many-worlds interpretation is still correct, but it does not seem possible to claim that many-worlds and the multiverse needed to ``explain'' fine-tuning are the same type of multiverse.

One response might be that Schrodinger's cat is just a silly extrapolation of entanglement to the macroscopic scale. However, the problem remains. Just consider radioactive decay of atoms. Each decay of a single atom should be associated with branching into two distinct universes. Both of those universes are identical, except for whether that single atom has decayed or not. Over history, zillions of radioactive decays have occurred. This means that there are zillions of universes almost identical to the one we live in right now. But all these zillions of universes are fine-tuned to be just like ours.

Is there a problem with this argument?

Addendum. (25 October, 2019).
Fine-tuning got a lot of attention after the 1979 Nature paper, The anthropic principle and the structure of the physical world, by Bernard Carr and Martin Rees.
The end of the article makes explicit the connection with the many-worlds interpretation of quantum theory.
...nature does exhibit remarkable coincidences and these do warrant some explanation.... the anthropic explanation is the only candidate and the discovery of every extra anthropic coincidence increases the post hoc evidence for it. The concept would be more palatable if it could be given a more physical foundation. Such a foundation may already exist in the Everett 'many worlds' interpretation of quantum mechanics, according to which, at each observation, the Universe branches into a number of parallel universes, each corresponding to a possible outcome of the observation. The Everett picture is entirely consistent with conventional quantum mechanics; it merely bestows on it a more philosophically satisfying interpretation. There may already be room for the anthropic principle in this picture. 
Wheeler envisages an infinite ensemble of universes all with different coupling constants and so on. Most are 'still-born', in that the prevailing physical laws do not allow anything interesting to happen in them; only those which start off with the right constants can ever become 'aware of themselves'. One would have achieved something if one could show that any cognisable universe had to possess some features in common with our Universe. Such an ensemble of universes could exist in the same sort of space as the Everett picture invokes. Alternatively, an observer may be required to 'collapse' the wave function. These arguments go a little way towards giving the anthropic principle the status of a physical theory but only a little: it may never aspire to being much more than a philosophical curiosity...
In a review of a book based on a conference about the multiverse, Virginia Trimble states that:
There is also among the authors strong divergence of opinion on whether Hugh Everett's version of many worlds is (just) a quantum multiverse (Tegmark), almost certainly correct and meaningful (Page), or almost certainly wrong or meaningless (Carter). 

Tuesday, October 8, 2019

2019 Nobel Predictions

It is that time of year again. I have not made predictions for a few years.

For physics this year I predict
Experiments for testing Bell inequalities and elucidating the role of entanglement in quantum physics
Alain Aspect, John Clauser, and Anton Zeilinger
They received the Wolf Prize in 2010, a common precursor to the Nobel.

My personal preference for the next Nobel for CMP would be centred around Kondo physics, since that is such a paradigm for many-body physics, maybe even comparable to BCS.

Kondo effect and heavy fermions
Jun Kondo, Frank Steglich, David Goldhaber-Gordon

Arguably the latter two might be replaced with others who worked on heavy fermions and/or Kondo in quantum dots.
Steglich discovered heavy fermion superconductivity.
Goldhaber-Gordon realised tuneable Kondo and Anderson models in quantum dots (single-electron transistors).

Unlike many, I still remain to be convinced that topological insulators are worthy of a Nobel.

For chemistry, my knowledge is more limited. However, I would go for yet another condensed matter physicist to win the chemistry prize: John Goodenough, a key developer of the lithium-ion battery.
He also made seminal contributions to magnetism, random access memories, and strongly correlated electron materials.

What do you think?

Postscripts (October 10).

I got confused about the day of the physics prize and I think when I posted my ``prediction'' the prize may have already been announced.

A few years ago I read Goodenough's fascinating autobiography. It was actually in that book that I learned about U. Chicago requiring PhD students to publish a single author paper. This observation featured in my much commented on recent post about PhD theses.

I also have a prediction for the Peace Prize. First, I hope it is not Greta Thunberg, as much as I admire her and agree with the importance of her cause. I worry whether it may ruin her life.
My wife suggested the Prime Minister of Ethiopia, Abiy Ahmed and the President of Eritrea, Isaias Afwerki. I find it truly amazing what Ahmed has achieved.
Another great choice would be some of the leaders of Armenia, which has seen significant increases in human rights, political freedoms, and freedom of the press. It was selected as The Economist's country of the year in 2018.

Postscript (October 30).
I was really happy about the economics prize. Six years ago, I read Poor Economics, by Banerjee and Duflo, with my son (an economics student), and blogged about it. Below I respond to a commenter who was critical of this prize.

Estimating the Ising interaction in spin-crossover compounds

I previously discussed how one of the simplest model effective Hamiltonians that can describe many physical properties of spin-crossover compounds is an Ising model in an external "field". Here s_i = +/-1 is a pseudo-spin denoting the low-spin (LS) and high-spin (HS) states of a transition metal molecular complex at site i.
The ``external field" is one half of the Gibbs free energy difference between the LS and HS states. The physical origin of the J interaction is ``believed to be'' elastic, not magnetic interactions. A short and helpful review of the literature is by Pavlik and Boca.
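Explicitly, the effective Hamiltonian has the form (my notation; sign conventions vary across the literature):

```latex
H = -J \sum_{\langle ij \rangle} s_i s_j + \frac{\Delta G}{2} \sum_i s_i , \qquad s_i = \pm 1 ,
```

where the sum is over nearest-neighbour pairs and Delta G is the Gibbs free energy difference between the LS and HS states.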

Important questions are:

1. What is a realistic model that can explain how J arises due to elastic interactions?
2. How does one calculate J from quantum chemistry calculations?
3. How does one estimate J for a specific material from experimental data?
4. What are typical values of J?

I will focus on the last two questions.
One can do a mean-field treatment of the Ising model, leading to a model free energy for the whole system that has the same form as that of an ideal binary mixture of two fluids where
x = (1 + &lt;s_i&gt;)/2 is the relative fraction of low spins, where &lt;s_i&gt; is the thermal average of the pseudo-spin. 
This model free energy was proposed in 1972 by Slichter and Drickamer.
The free energy of interaction between the two "fluids" is of the form -Gamma x^2.
Gamma is often referred to as the ``co-operativity" parameter.
Minimising the free energy versus x gives a self-consistent equation for x(T).
This can be compared to experimental data for x vs T, e.g. from the magnetic susceptibility, and a Gamma value extracted for a specific material.
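A minimal numerical sketch of this procedure (my own illustration, not from the original post; the parameter values for dH, dS, and gamma are invented, and here x denotes the high-spin fraction, the complement of the low-spin fraction used above):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def hs_fraction(T, dH=20000.0, dS=100.0, gamma=2000.0):
    """Equilibrium high-spin fraction x at temperature T (K), found by
    minimising a Slichter-Drickamer-type free energy per mole,
        G(x) = x*(dH - T*dS) - gamma*x^2 + R*T*[x ln x + (1-x) ln(1-x)],
    on a fine grid. dH, dS, gamma are illustrative values in J/mol units."""
    def G(x):
        return (x * (dH - T * dS) - gamma * x * x
                + R * T * (x * math.log(x) + (1 - x) * math.log(1 - x)))
    xs = [i / 10000.0 for i in range(1, 10000)]
    return min(xs, key=G)

# With these parameters the crossover temperature where x = 1/2 is
# T_1/2 = (dH - gamma)/dS = 180 K.
print(hs_fraction(100.0))  # mostly low spin
print(hs_fraction(180.0))  # ~0.5
print(hs_fraction(300.0))  # mostly high spin
```

Comparing the computed x(T) curve to the experimental one is then a one-parameter fit for gamma; for gamma larger than 2RT_1/2 the curve becomes discontinuous (hysteretic).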

Values for Gamma obtained in this way for a wide range of quasi-one-dimensional materials [with covalent bonding (i.e. strong elastic interactions) between spin centres] are given in Tables 1 and 2 of Roubeau et al. The values of Gamma are in the range 2-10 kJ/mol. In temperature units this corresponds to 240-1200 K.
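As a quick check of that unit conversion (dividing by the gas constant R):

```python
R = 8.314  # gas constant, J/(mol K)

def kJ_per_mol_to_K(g):
    """Convert an energy in kJ/mol to an equivalent temperature in K."""
    return g * 1000.0 / R

print(kJ_per_mol_to_K(2.0))   # ~240 K
print(kJ_per_mol_to_K(10.0))  # ~1200 K
```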

My calculations [which may be wrong] give Gamma = 4 J z, where z is the number of nearest neighbours in the Ising model. This means that (for a 1d chain with z=2) J is in the range of 0.3-1.5 kJ/mol, or 30-150 K.

In many spin-crossover materials, the elastic interactions are via van der Waals, hydrogen bonding, or pi-stacking interactions. In that case, we would expect smaller values of J.
This is consistent with the following.
An analysis of a family of alloys by Jakobi et al. leads to a value of Gamma of 2 kJ/mol.
[See equation 9b. Note B=Gamma=150 cm^-1.  Also in this paper x is actually denoted gamma and x denotes the fraction of Zn in the material.].

I thank members of the UQ SCO group for all they are teaching me and the questions they keep asking.

Tuesday, October 1, 2019

Marks of an excellent PhD thesis

As years go by the PhD thesis in science and engineering is less and less of a ``thesis'' and more just a box to tick. There was a time when the thesis was largely the work of the student and tackled one serious problem. Decades ago at the University of Chicago, students were meant to write a single author paper that was based on their thesis.
At some universities, including my own, students can now staple several papers together, write an introductory chapter, and submit that as a thesis. One obvious problem with that system is the question of how large the student's contribution to the multi-author papers was, both in terms of the writing and of doing the experiments or calculations.

Previously I have argued that A PhD is more than a thesis, a PhD should involve scholarship, and a thesis should suggest future directions and be self-critical. In some sense these posts were negative, focusing on what may be missing. Here I just want to highlight several positive things I recently saw in a thesis.

A coherent story
The thesis should be largely about one thing looked at from several angles. It should not be ``several random topics that my advisor got excited about in the past 3 years.''

Meticulous detail
This should cover existing literature. More importantly, there should be enough detail that the next student can use the thesis as a reference to learn all the background to take the topic further.

Significant contributions from the student
A colleague once said that a student is ready to submit the thesis when they know more about the thesis topic than their advisor.

The situation in the humanities is quite different. Students largely work on their own and write a thesis that they hope will eventually become a book.

I think the decline of the thesis reflects a significant shift in the values of the university as a result of neoliberalism. The purpose of PhDs is no longer the education of the student, but rather to have low-paid research assistants for faculty to produce papers in luxury journals that will attract research income and boost university rankings.

What do you think are the marks of an excellent PhD thesis?

Thursday, September 26, 2019

Symmetry is the origin of all interactions

In his review of Lucifer's Legacy: The Meaning of Asymmetry by Frank Close, Phil Anderson makes the following profound and cryptic comment.
In a book focusing, as this does, on symmetry, it seems misleading not to explain the fundamental principle that all interaction follows from symmetry: the gauge principle of London and Weyl, modelled on and foreshadowed by Einstein's derivation of gravity from general relativity (Einstein seems to be at the root of everything). The beautiful idea that every continuous symmetry implies a conservation law, and an accompanying interaction between the conserved charges, determines the structure of all of the interactions of physics. It is not appropriate to try to approach advanced topics such as electroweak unification and supersymmetry without this foundation block.
To see how this plays out in electrodynamics see here.

Tuesday, September 24, 2019

A pioneering condensed matter physicist

In terms of institutional structures, Condensed Matter Physics did not really exist until the 1970s. A landmark was the Division of Solid State Physics of the American Physical Society changing its name. On the other hand, long before that people were clearly doing CMP! If we think of CMP as a unified approach to studying different states of matter, that enterprise began in earnest during the twentieth century.

Kamerlingh Onnes (1853-1924) was a pioneer in low-temperature physics but is best known for the discovery of superconductivity in 1911. In many ways, Onnes embodied the beginning of an integrated and multi-faceted approach to CMP: development of experimental techniques, the interaction of theory and experiment, and addressing fundamental questions.

1. Onnes played the long game, spending years developing and improving experimental methods and techniques, whether glass blowing, sample purification, or building vacuum pumps. He realized that this approach required a large team of technicians, each with particular expertise and that teamwork was important. The motto of Onnes’ laboratory was Door meten tot weten (Through measurement to knowledge). Techniques were a means to a greater end.

2. In Leiden, Onnes sought out theoretical advice from his colleague Johannes van der Waals (1837-1923).  [Almost 10 years ago I gave a talk about van der Waals legacy].

3. Onnes’ experiments were driven by a desire to answer fundamental questions. Questions he helped answer included the following.
Can any gas become liquid?
For gases is there a universal relationship between their density, pressure, and temperature?
How are gas-liquid transitions related to interactions between the constituent molecules in a material?
At very low temperatures is the electrical conductivity of a pure metal zero, finite, or infinite?

The first of these questions motivated Onnes to pursue being the first to cool helium gas to low enough temperatures that it would become liquid. At the time all other known gases had been liquified. In 1908 his group observed that helium became liquid at a temperature of 4.2 K. This discovery was of both fundamental importance and great practical significance. Liquid helium became extremely useful in experimental physics and chemistry as a means to cool materials and scientific instruments. Indeed liquid helium enabled the discovery of superconductivity, which resulted from addressing the last question.


The figure shows Onnes (left) in his lab with van der Waals.

The discussion above closely follows Steve Blundell's Superconductivity: A Very Short Introduction.

Friday, September 20, 2019

Common examples of symmetry breaking

In his beautiful book, Lucifer's Legacy: The Meaning of Asymmetry, Frank Close gives several nice examples of symmetry breaking that make the concept more accessible to a popular audience.

One is shown in the video below. Consider a spherical drop of liquid that hits the flat surface of a liquid. Prior to impact, the system has continuous rotational symmetry about an axis normal to the plane of the liquid and through the centre of the drop. However, after impact, a structure emerges which does not have this continuous rotational symmetry, but rather a discrete rotational symmetry.



Another example that Close gives is illustrated below. Which napkin should a diner take? One on their left or right? Before anyone makes a choice there is no chirality in the system. However, if one diner chooses left others will follow, symmetry is broken and a spontaneous order emerges.


Thursday, August 29, 2019

My tentative answers to some big questions about CMP

In my last post, I asked a number of questions about Condensed Matter Physics (CMP) that my son asked me. On reflection, my title ``basic questions" was a misnomer, because these are actually rather profound questions. Also, it should be acknowledged that the answers are quite personal and subjective. Here are my current answers.

1. What do you think is the coolest or most exciting thing that CMP has discovered? 

Superconductivity.

explained?

BCS theory of superconductivity.
Renormalisation group (RG) theory of critical exponents.

2. Scientific knowledge changes with time. Sometimes long-accepted ``facts''  and ``theories'' become overturned.  What ideas and results are you presenting that you are almost absolutely certain of? 

Phase diagrams of pure substances.
Crystallography.
Landau theory and symmetry breaking as a means to understand almost all phase transitions.
RG theory.
Bloch's theorem and band theory as a framework to understand the electronic properties of crystals.
Quantisation of vortices.
Quantum Hall effects.
Emergence.

What might be overturned?

I will be almost certain of everything I will write about in the Very Short Introduction. This is because it centers around concepts and theories that have been able to explain a very wide swathe of experiments on diverse materials and that have been independently reproduced by many different groups.
I am deliberately avoiding describing speculative theories and the following.
Ideas, results, and theories based on experiments that did not involve the actual material claimed, involved significant curve fitting, or large computer simulations.
Many things published in luxury journals during the last twenty years.

3. What are the most interesting historical anecdotes? 

These are so interesting and relevant to major discoveries that they are worth including in the VSI.
Graphene and sellotape.
Quasi-crystals.
Bardeen's conflict with Josephson.
Abrikosov leaving his vortex lattice theory in his desk drawer because Landau did not like it.

What are the most significant historical events? 

Discovery of x-ray crystallography
Discovery of superconductivity.
Landau's 1937 paper.
BCS paper.
Wilson and Fisher.

Who were the major players?

They are so important that they are worthy of a short bio in the text.
Onnes.
Landau.
Bardeen.
Anderson.
Wilson.

4. What are the sexy questions that CMP might answer in the foreseeable future?

Is room-temperature superconductivity possible?

Friday, August 23, 2019

Basic questions about condensed matter

I am trying out draft chapters of Condensed matter physics: A very short introduction, on a few people who I see as representative of my target audience. My son is an economist but has not studied science beyond high school. He enjoys reading widely. He kindly agreed to give me feedback on each draft chapter. Last week he read the first two chapters and his feedback was extremely helpful. He asked me several excellent questions that he thought I should answer.

1. What do you think is the coolest or most exciting thing that CMP has discovered? explained?

2. Scientific knowledge changes with time. Sometimes long-accepted ``facts''  and ``theories'' become overturned. What ideas and results are you presenting that you are almost absolutely certain of? What might be overturned?

3. What are the most interesting historical anecdotes? What are the most significant historical events? Who were the major players?

4. What are the sexy questions that CMP might answer in the foreseeable future?

I have some preliminary answers. But, to avoid prejudicing some brainstorming, I will post later.
What answers would you give?

Tuesday, August 20, 2019

The global massification of universities

A recent issue of The Economist has an interesting article about the massive expansion in higher education, both private and public, in Africa.
The thing I found most surprising and interesting is the graphic below.


It compares the percentage of the population within 5 years of secondary school graduation that is enrolled in higher education, in 2000 and 2017. In almost all parts of the world the percentage enrollment has doubled in just 17 years!
I knew there was rapid expansion in China and Africa, but did not realise it is such a global phenomenon.

Is this expansion good, bad, or neutral?
It is helpful to consider the iron triangle of access, cost, and quality. You cannot change one without changing at least one of the others.

I think that this expansion is based on parents, students, governments, and philanthropies uncritically holding the following implicit beliefs, based on the history of universities until about the 1970s. Prior to that, universities were fewer, smaller, and more selective, and had greater autonomy in governance, curriculum, and research.

1. Most students who graduated from elite institutions went on to successful/prosperous careers in business, government, education, ...

2. Research universities produced research that formed the foundation for amazing advances in technology and medicine, and gave profound new insights into the cosmos, from DNA to the Big Bang.

Caution: the first point does not imply that a university education was crucial to the graduates' success. Correlation and causality are not the same thing. The success of graduates may be just a matter of signaling.  Elite institutions carefully selected highly gifted and motivated individuals who were destined for success. The university just certified that the graduates were ``hard-working, smart, and conformist.''

But the key point is these two observations (beliefs) concern the past and not the present. Universities are different.  Massification and the stranglehold of neoliberalism (money, marketing, management, and metrics) mean that universities are fundamentally different, from the student experience to the nature of research.

According to Wikipedia,
Massification is a strategy that some luxury companies use in order to attain growth in the sales of product. Some luxury brands have taken and used the concept of massification to allow their brands to grow to accommodate a broader market.
What do you think?
Are these the key assumptions?
Will massification and neoliberalism undermine them?

Tuesday, August 13, 2019

J.R. Schrieffer (1931-2019): quantum many-body theorist

Bob Schrieffer died last month, as reported in a New York Times obituary.

Obviously, Schrieffer's biggest scientific contribution was coming up with the variational wave-function for the BCS theory of superconductivity.
BCS theory was an incredible intellectual achievement on many levels. Many great theoretical physicists had failed to crack the problem. The elegance of the theory was manifest in the fact that it was analytically tractable, yet could give a quantitative description of diverse physical properties in a wide range of materials. BCS also showed the power of using quantum-field-theory techniques in solid state theory. This was a very new thing in the late 50s. Then there was the subsequent cross-fertilisation with nuclear physics and particle physics (e.g. Nambu).

Another significant contribution was the two-page paper from 1966 that used a unitary transformation to connect the Kondo model Hamiltonian to that of the Anderson single impurity model. In particular, it gave a physical foundation for the Kondo model, which at the time was considered somewhat ad hoc.
John Wilkins wrote a nice commentary on the background history and significance of the Schrieffer-Wolff transformation.
The SW transformation is an example of a general strategy of finding an effective Hamiltonian for a reduced Hilbert space. This can also be done via quasi-degenerate perturbation theory. In different words, when one ``integrates out'' the charge degrees of freedom in the Anderson model one ends up with the Kondo model.
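For reference, the standard second-order result (my notation, not spelled out in the post): starting from the Anderson model with hybridisation V, impurity level energy eps_d &lt; 0, and Coulomb repulsion U, the Schrieffer-Wolff transformation yields an antiferromagnetic Kondo exchange J = 2V^2 [1/|eps_d| + 1/(eps_d + U)], which reduces to the well-known 8V^2/U at the particle-hole symmetric point eps_d = -U/2. A one-line sketch:

```python
def kondo_exchange(V, eps_d, U):
    """Schrieffer-Wolff estimate of the Kondo exchange J obtained from
    the Anderson impurity model (local-moment regime: eps_d < 0 < eps_d + U)."""
    assert eps_d < 0 < eps_d + U
    return 2 * V**2 * (1 / abs(eps_d) + 1 / (eps_d + U))

# particle-hole symmetric point eps_d = -U/2 gives J = 8 V^2 / U
print(kondo_exchange(0.1, -0.5, 1.0))  # 0.08 = 8 * 0.01 / 1
```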

There is also the Su-Schrieffer-Heeger model, that is related to Heeger's Nobel Prize in Chemistry. However, although this spawned a whole industry (that I worked in as a postdoc with Wilkins) its originality and significance are arguably not comparable to BCS and SW.

Like many of the pioneers of quantum many-body theory, Schrieffer may, because of when he was born, have simply been born for success.

I am somewhat (scientifically) descended from Schrieffer because I did a postdoc with John Wilkins, who was one of Schrieffer's first PhD students. My main interaction with Schrieffer was during 1995-2000. Each year I would visit my collaborator, Jim Brooks, at the National High Magnetic Field Laboratory, and would have some helpful discussions with Schrieffer. During one of those visits, I stumbled across a compendium of reprints from a Japanese lab. [This was back in the days when some people snail-mailed out such things to colleagues]. It had been sent to Schrieffer and contained a copy of a paper by Kino and Fukuyama on a Hubbard model for organic charge transfer salts. That was the starting point for my work on that topic.

Tuesday, August 6, 2019

What is the mass of a molecular vibration?

This is a basic question that I have been puzzling about. I welcome solutions.

Consider a diatomic molecule containing atoms with mass m1 and m2. It has a stretch vibration that can be described by a harmonic oscillator with a reduced mass mu given by mu = m1 m2 / (m1 + m2).
Now consider a polyatomic molecule containing N atoms.
It will have 3N-6 normal modes of vibration.
[The 6 is due to the fact that there are 6 zero-frequency modes: 3 rigid translations and 3 rotations of the whole molecule].
In the harmonic limit, the normal mode problem is solved by transforming to mass-weighted coordinates and diagonalising the resulting matrix of second derivatives of the potential energy.
[I follow the classic text Wilson et al., Molecular Vibrations].
The problem is also solved in matrix form in Chapter 6 of Goldstein, Classical Mechanics].



One now has a collection of non-interacting harmonic oscillators. All have mass = 1. This is because the normal mode co-ordinates have units of length * sqrt(mass).
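As a minimal sketch of this machinery (my own illustration, not from the text, and in arbitrary units), consider the 1D diatomic again: mass-weighting the Hessian and diagonalising it yields one zero-frequency rigid translation and one stretch mode whose frequency is set by the reduced mass:

```python
import math

def diatomic_modes(k, m1, m2):
    """Normal modes of a 1D diatomic with potential V = (k/2)(x2 - x1)^2.
    The Cartesian Hessian is [[k, -k], [-k, k]]; mass-weighting gives
    H' = M^(-1/2) H M^(-1/2), whose eigenvalues are the squared angular
    frequencies. For a 2x2 symmetric matrix these follow analytically
    from the trace and determinant."""
    a = k / m1                    # H'_11
    b = -k / math.sqrt(m1 * m2)   # H'_12 = H'_21
    c = k / m2                    # H'_22
    tr, det = a + c, a * c - b * b
    disc = math.sqrt(max(tr * tr - 4.0 * det, 0.0))
    return (tr - disc) / 2.0, (tr + disc) / 2.0

w2_trans, w2_stretch = diatomic_modes(500.0, 1.0, 19.0)
mu = 1.0 * 19.0 / (1.0 + 19.0)  # reduced mass m1*m2/(m1+m2)
print(w2_trans)    # ~0: the zero-frequency rigid translation
print(w2_stretch)  # equals k/mu: the stretch, set by the reduced mass
```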

The quantum chemistry package Gaussian does more. It calculates a reduced mass mu_i for each normal mode i, using a formula discussed in these notes on the Gaussian web site. From mu_i and the normal mode frequency omega_i it then calculates a spring constant for each normal mode.

I have searched endlessly, and tried myself, but have not been able to answer the following basic questions:

1. How do you derive this expression for the reduced mass?
2. Is this reduced mass physical, i.e. a measurable quantity?

Similar issues must also arise with phonons in crystals.

Any recommendations?

Tuesday, July 23, 2019

Different approaches to popular science writing

Since I am working on a Very Short Introduction (VSI) to condensed matter physics I am looking at a lot of writing about science for popular audiences. I have noticed several distinct approaches that different authors take. They all have strengths and weaknesses.

Historical
The story of discoveries and the associated scientists is told. A beautiful example is A Short History of Nearly Everything by Bill Bryson.
When done well this approach has many positives. Stories can be fun and easy to read, particularly when they involve quirky personalities, serendipity, and fascinating anecdotes. Furthermore, this shows how hard and messy real science is, and that science is a verb, not just a noun. On the other hand, it can be a bit challenging for readers as they have to understand not just the successes but also why certain theories, experiments, and interpretations were wrong along the way.  Many writers also seem eager to burden readers with all sorts of historical background details about scientists, their families, and their local context. Sometimes these details are interesting. Other times they seem just boring fluff. Generally, most agree that one does not learn and understand a scientific subject best by learning its history. So why take this approach in popular writing?

Literary pleasure
People read novels and watch movies for pleasure. The goal is not necessarily to learn something (or a lot). I would put Brian Cox's writing and documentaries in this category. That is not a criticism. Rather than provide a lot of information I think the goal is more to induce awe, wonder, curiosity, and enjoyment.

Condensed textbook
Take an introductory text and cover all the same topics in the same order. Just cut out technical details and jargon. Lots of analogies are used to explain concepts. The obvious strength is the reader gets a good overview of the subject. The weakness is that this can be boring, involve defining a lot of terminology, and be too hard for the reader. One scary consequence is that some readers then think they actually understand the subject.

Hype
This comes in several forms. One is that the theory or topic of interest (whether complexity, quantum information, self-organised criticality, sociobiology, ....) is THE answer. It explains everything. The second form of hype is technological: this science is going to lead to a new technology that will change the world. Generally, this fits the genre of ``science as salvation''.

Conceptual
An example is Laughlin's A Different Universe. A challenge is that this requires readers to like learning new concepts and have an ability to think abstractly.

Except for hype, I think all of these approaches have their merits. Ideally, one would like to incorporate elements of all of them.

What do you think? Are there other approaches?

Friday, June 28, 2019

The bloody delusions of silicon valley medicine

On a recent flight, I watched the HBO documentary The Inventor: Out for Blood in Silicon Valley. It chronicles the dramatic rise and fall of Elizabeth Holmes, founder of a start-up, Theranos, that claimed to have revolutionised blood testing.



There is a good article in the New Republic
What the Theranos Documentary Misses
Instead of examining Elizabeth Holmes’s personality, look at the people and systems that aided the company’s rise.

In spite of the weaknesses described in that article, the documentary made me think about a range of issues at the interface of science, technology, philosophy, and social justice.

The story underscores Kauzmann's maxim, ``people will often believe what they want to believe rather than what the evidence before them suggests they should believe.''

Truth matters. Eventually, we all bounce up against reality: scientific, technological, economic, legal, ...  It does not matter how much hype and BS one gets away with; eventually, it will all come crashing down. It is just amazing that some people seem to get away with it for so long...
This is why transparency is so important. A bane of modern life is the proliferation of Non-Disclosure Agreements. Although I concede they have a limited role in certain commercial situations, they now seem to be used to avoid transparency and accountability for all sorts of dubious practices in diverse social contexts.

The transition from scientific knowledge to a new technology is far from simple. A new commercial device needs to be scalable, reliable, affordable, and safe. For medicine, the bar is a lot higher than a phone app! 

Theranos had a board featuring ``big'' names in politics, business, and the military, such as Henry Kissinger, George Shultz, and James Mattis. All these old men were besotted with Holmes and more than happy to take large commissions for sitting on the board. Chemistry, engineering, and medical expertise were sorely lacking. However, even the old man with relevant knowledge, Channing Robertson, was a true believer until the very end.

Holmes styled herself on Steve Jobs and many wanted to believe that she would revolutionise blood testing. However, the analogy is flawed. Jobs basically took existing robust technology and repackaged and marketed it in clever ways. Holmes claimed to have invented a totally new technology. What she was trying to do was a bit like trying to build a Macintosh computer in the 1960s.

Wednesday, June 12, 2019

Macroscopic manifestations of crystal symmetry

In my view, the central question that Condensed Matter Physics (CMP) seeks to answer is:
How do the properties of a distinct phase in a material emerge from the interactions between the atoms of which the material is composed? 
CMP aims to find a connection between the microscopic properties and macroscopic properties of a material. This requires determining three things: what the microscopic properties are, what the macroscopic properties are, and how the two are related. None of the three is particularly straightforward. Historically, the order of discovery is usually: macroscopic, microscopic, connection. Making the connection between microscopic and macroscopic can take decades, as exemplified in the BCS theory of superconductivity.

Arguably, the central concept to describe the macroscopic properties is broken symmetry, which can be quantified in terms of an order parameter. Connecting this to the microscopics is not obvious. For example, with superconductivity, the sequence of discovery was experiment, Ginzburg-Landau theory, BCS theory, and then Gorkov connected BCS and Ginzburg-Landau.

When we discuss (and teach about) crystals and their symmetry we tend to start with the microscopic, particularly with the mathematics of translational symmetry, Bravais lattices, crystal point groups, ...
Perhaps this is the best strategy from a pedagogical point of view in a physics course.
However, historically this is not the way our understanding developed.
Perhaps if I want to write a coherent introduction to CMP for a popular audience I should follow the historical trajectory. This can illustrate some of the key ideas and challenges of CMP.

So let's start with macroscopic crystals. One can find beautiful specimens that have very clean faces (facets).


Based on studies of quartz, Nicolas Steno in 1669 proposed that "the angles between corresponding faces on crystals are the same for all specimens of the same mineral". This is nicely illustrated in the figure below, which looks at different cross-sections of a quartz crystal. The 120-degree angle suggests an underlying six-fold symmetry. This constancy of angles was formulated as a law by Romé de l'Isle in 1772.


René Just Haüy then observed that when he smashed crystals of calcite, the fragments always had the same form (types of facets) as the original crystal. This suggested some type of translational symmetry, i.e., that crystals were composed of some type of polyhedral unit. In other words, crystals involve a repeating pattern.

The mathematics of repeating units was then worked out by Bravais, Schoenflies, and others in the second half of the nineteenth century. In particular, they showed that if you combine translational symmetries and point group symmetries (rotations, reflections, inversion), there are only a finite number of possible repeating structures: 14 Bravais lattices and 230 space groups.
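One reason the possibilities are finite is the crystallographic restriction theorem: a rotation can be compatible with a lattice of translations only if its matrix trace, 2cos(2π/n), is an integer, which singles out 1-, 2-, 3-, 4-, and 6-fold axes. A minimal sketch of this check (the function name is my own, for illustration):

```python
import math

def compatible_with_lattice(n, tol=1e-9):
    """An n-fold rotation is compatible with translational symmetry
    only if its matrix trace, 2*cos(2*pi/n), is an integer
    (the crystallographic restriction theorem)."""
    trace = 2 * math.cos(2 * math.pi / n)
    return abs(trace - round(trace)) < tol

# Only the crystallographic rotation orders survive:
allowed = [n for n in range(1, 9) if compatible_with_lattice(n)]
print(allowed)  # [1, 2, 3, 4, 6]
```

This is why no crystal has five-fold rotational symmetry (quasicrystals, which lack strict periodicity, are a separate story).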

By the beginning of the twentieth century the atomic hypothesis was largely accepted, particularly by chemists, and so it was considered reasonable that crystals were periodic arrays of atoms or molecules. However, we often forget that there was then no definitive evidence for the actual existence of atoms. Some scientists, such as Mach, considered them a convenient fiction. This changed with Einstein's theory of Brownian motion (1905) and the associated experiments of Jean Perrin (1908). X-ray crystallography started in 1912 with Laue's experiment. After that, there was no doubt that crystals were periodic arrays of atoms or molecules.

Finally, I want to mention two other macroscopic manifestations of crystal symmetry (or broken symmetry): chirality and distinct sound modes (elastic constants).

Louis Pasteur made two important related observations in 1848. All the crystals of sodium ammonium tartrate that he made could be divided into two classes: one class was the mirror image of the other. Furthermore, when polarised light travelled through these two classes, the polarisation was rotated in opposite directions. This is chirality (left-handed versus right-handed) and means that reflection symmetry is broken in the crystals: the mirror image of one crystal cannot be superimposed on the original crystal. The corresponding (trigonal) crystals for quartz are illustrated below.


Aside. Molecular chirality is very important in the pharmaceutical industry because most drugs are chiral and usually only one of the chiralities (enantiomers) is active.

Sound modes (and elasticity theory) in a crystal are also macroscopic manifestations of the breaking of translational and rotational symmetries. In an isotropic solid, there are two distinct elastic constants and, as a result, two distinct sound modes: longitudinal and transverse sound have different speeds. In a cubic crystal, there are three distinct elastic constants and three distinct sound modes. In a triclinic crystal (which has no point group symmetry) there are 21 distinct elastic constants. Hence, if one measures all of the distinct sound modes in a crystal, one can gain significant information about which of the 32 crystal classes that crystal belongs to. (See Table A.8 here).
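To make the connection between elastic constants and sound speeds concrete: along the [100] direction of a cubic crystal the Christoffel equation decouples, giving a longitudinal speed sqrt(C11/ρ) and a (doubly degenerate) transverse speed sqrt(C44/ρ). A minimal sketch, using roughly aluminium-like numbers purely for illustration:

```python
import math

# Hypothetical cubic-crystal values (roughly aluminium-like), for illustration only:
C11, C12, C44 = 108e9, 61e9, 29e9   # elastic constants in pascals
rho = 2700.0                         # mass density in kg/m^3

# Along [100] the Christoffel equation decouples into pure modes:
v_longitudinal = math.sqrt(C11 / rho)   # compression wave
v_transverse = math.sqrt(C44 / rho)     # shear wave (doubly degenerate along [100])

print(f"v_L = {v_longitudinal:.0f} m/s, v_T = {v_transverse:.0f} m/s")
```

Along lower-symmetry directions all three constants (including C12) enter, and the two transverse speeds split, which is how sound measurements probe the crystal class.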

Aside: the acoustic modes in a crystal are the Goldstone bosons that result from breaking the continuous translational and rotational symmetries of the liquid.

This post draws on material from the first chapter of Crystallography: A Very Short Introduction, by A.M. Glazer.

Friday, May 31, 2019

Max Weber on the evolution of institutions

Max Weber is one of the founders of sociology. This post is about two separate and interesting things I recently learned about him.

A while ago I discussed Different phases of growth and change in human organisations, based on a classic article from Harvard Business Review. [Which had no references or data!]
My friend Charles Ringma recently brought to my attention somewhat related ideas from Max Weber.
According to Wikipedia

Weber distinguished three ideal types of political leadership (alternatively referred to as three types of domination, legitimisation or authority):[52][111]
  1. charismatic domination (familial and religious),
  2. traditional domination (patriarchs, patrimonialism, feudalism) and
  3. legal domination (modern law and state, bureaucracy).[112]
In his view, every historical relation between rulers and ruled contained such elements and they can be analysed on the basis of this tripartite distinction.[113] He notes that the instability of charismatic authority forces it to "routinise" into a more structured form of authority.[79]

I also learnt that Weber had a long history of mental health problems. According to Wikipedia

In 1897 Max Weber Sr. died two months after a severe quarrel with his son that was never resolved.[7][37] After this, Weber became increasingly prone to depression, nervousness and insomnia, making it difficult for him to fulfill his duties as a professor.[17][26] His condition forced him to reduce his teaching and eventually leave his course unfinished in the autumn of 1899. After spending months in a sanatorium during the summer and autumn of 1900, Weber and his wife travelled to Italy at the end of the year and did not return to Heidelberg until April 1902. He would again withdraw from teaching in 1903 and not return to it till 1919. Weber's ordeal with mental illness was carefully described in a personal chronology that was destroyed by his wife. This chronicle was supposedly destroyed because Marianne Weber feared that Max Weber's work would be discredited by the Nazis if his experience with mental illness were widely known.[7][38]

This puts Weber in a similar class to many other distinguished scholars who had significant mental health problems: Boltzmann, John Nash, Drude, Michel Foucault, ...