Monday, May 30, 2016

A basic but important skill: critical reading of experimental papers

Previously, I highlighted the important but basic skill of being skeptical. Here I expand on the idea.

An experimental paper may make a claim, "We have observed interesting/exciting/exotic effect C in material A by measuring B."
How do you critically assess such claims?
Here are three issues to consider.
It is as simple as ABC!

1. The material used in the experiment may not be pure A.
Preparing pure samples, particularly "single" crystals of a specific material of known chemical composition, is an art. Any sample will be slightly inhomogeneous and will contain some chemical impurities, defects, ... Furthermore, samples are prone to oxidation, surface reconstruction, interaction with water, ... A protein may not be in its native state...
Even in an ultracold atom experiment one may have chemically pure A, but the actual density profile and temperature may not be what is thought.
There are all sorts of checks one can do to characterise the structure and chemical composition of the sample. Some people are very careful. Others are not. But, even for the careful and reputable, things can go wrong.

2. The output of the measurement device may not actually be a measurement of B.
For example, just because the ohmmeter reads out an electrical resistance does not mean that it is the resistance of the material in the desired current direction. There are all sorts of things that can go wrong with resistances in the electrical contacts and in the current path within the sample.
Again, there are all sorts of consistency checks one can make. Some people are very careful. Others are not. But, even for the careful and reputable, things can go wrong.

3. Deducing effect C from the data for B is rarely straightforward.
Often there is significant theory involved. Sometimes, there is a lot of curve fitting. Furthermore, one needs to consider alternative (often more mundane) explanations for the data.
Again, there are all sorts of consistency checks one can make. Some people are very careful. Others are not. But, even for the careful and reputable, things can go wrong.


Finally, one should consider whether the results are consistent with earlier work. If not, why not?

Later, I will post about critical reading of theoretical papers.

Can you think of other considerations for critical reading of experimental papers?
I have tried to keep it simple here.

Thursday, May 26, 2016

The joy and mystery of discovery

My wife and I went to see the movie, The Man Who Knew Infinity, which chronicles the relationship between the legendary mathematicians Srinivasa Ramanujan and G.H. Hardy. I knew little about the story or the maths and so learnt a lot. I think one thing it does particularly well is capture the passion that many scientists and mathematicians have for their research, including both the beauty of the truth we discover and the rich enjoyment of finding it.

The movie obviously highlights the unique, weird, and intuitive way that Ramanujan was able to surmise extremely complex formulae without proof.


I subsequently read a little more. There is a nice piece on The Conversation, praising the movie's portrayal of mathematics. A post on the American Mathematical Society blog discusses the making of the movie including a discussion with the mathematician Ken Ono, who was a consultant. Stephen Wolfram also has a long blog post about Ramanujan.

I enjoyed reading the 1993 article, Ramanujan for lowbrows, which considers some "simple" results that most of us can understand, such as the taxicab number 1729.

The formula below features in the movie. It is an asymptotic formula for the number of partitions of an integer n.
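This is the celebrated Hardy-Ramanujan result:

$$ p(n) \sim \frac{1}{4n\sqrt{3}} \exp\left(\pi \sqrt{\frac{2n}{3}}\right) \qquad \text{as } n \to \infty. $$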

It is interesting that this is useful in the statistical mechanics of non-interacting fermions in a set of equally spaced energy levels, in the microcanonical ensemble.
Indeed, it is directly related to the linear-in-temperature dependence of the specific heat and the (pi^2)/3 prefactor, as sketched below!
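Here is a rough sketch of that connection (my own, keeping only the leading exponential of the asymptotic formula, with level spacing Δ and excitation energy E = nΔ). The number of microstates with excitation energy E is p(n), so

$$ S(E) = k_B \ln p(n) \approx k_B \pi \sqrt{\frac{2E}{3\Delta}}, \qquad \frac{1}{T} = \frac{\partial S}{\partial E} = \frac{k_B \pi}{\sqrt{6 \Delta E}} $$

$$ \Rightarrow \quad E = \frac{\pi^2}{6} \frac{(k_B T)^2}{\Delta}, \qquad C = \frac{dE}{dT} = \frac{\pi^2}{3} \frac{k_B^2 T}{\Delta}. $$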

This features in Problem 7.27 in the text by Schroeder, which is based on an article in the American Journal of Physics. See particularly the discussion around equation 8, with W(r) replaced by p(n).
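As a small aside, here is a minimal Python sketch (my own, not from the article or the textbook) that compares the exact p(n), computed with a standard dynamic-programming recurrence, against the leading asymptotic:

```python
import math

def partitions(nmax):
    # Exact p(n): count partitions by building them up with parts 1..nmax.
    p = [1] + [0] * nmax
    for k in range(1, nmax + 1):
        for n in range(k, nmax + 1):
            p[n] += p[n - k]
    return p

def hardy_ramanujan(n):
    # Leading asymptotic: p(n) ~ exp(pi*sqrt(2n/3)) / (4*n*sqrt(3))
    return math.exp(math.pi * math.sqrt(2.0 * n / 3.0)) / (4.0 * n * math.sqrt(3.0))

p = partitions(200)
for n in (10, 50, 100, 200):
    print(n, p[n], round(hardy_ramanujan(n)), p[n] / hardy_ramanujan(n))
```

The ratio approaches one only slowly; even at n = 200 the asymptotic form overestimates p(n) by about two per cent.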


Wednesday, May 25, 2016

A Ph.D is more than a thesis

I recently met two people who received Ph.Ds from Australian universities and who told me their stories. I found the stories disappointing.

Dr. A had a very senior role in government. After his contract ended he spent one year writing a thesis, largely based on his experience, submitted it, and was awarded a Ph.D. He is now a Professor at a (mediocre) private university, directing a research centre on government policy.

Dr. B was a software engineer. He worked part time and enrolled in a Ph.D in computer science. He would come on campus about once a month to meet his supervisor. As far as I am aware this was the only interaction he ever had with anyone from the university. He never went to any seminars, talked to other students, or took courses.

I don't doubt that on some level the theses submitted by these students may be comparable to those of other students and "worthy" of a Ph.D.
However, a colleague recently pointed out that at almost all universities the first page of the thesis says something like:

"A thesis submitted in partial fulfilment of the requirements for a Ph.D"

I think some of the most important things you can learn in a Ph.D are not directly related to the thesis, as I discussed when arguing that the class cohort is so important.

A related issue is that I consider that undergraduates who skip classes yet pass exams are not really getting an education.

Monday, May 23, 2016

What is the chemical potential?

I used to find the concept of the chemical potential rather confusing.
Hence, it is not surprising that students struggle too.
I could recite the mantra that "the chemical potential is the energy required to add an extra particle to the system," but how it then appeared in different thermodynamic identities and in the Fermi-Dirac distribution always seemed a bit mysterious.

However, when I first taught statistical mechanics 15 years ago I used the great text by Daniel Schroeder. He has a very nice discussion that introduces the chemical potential. He considers the composite system shown below, where a moveable membrane connects two systems A and B. Energy and particles can be exchanged between A and B. The whole system is isolated from the environment, and so the equilibrium state is the one which maximises the total entropy of the whole system.
Mechanical equilibrium (i.e. the membrane does not move) occurs if the pressure of A equals the pressure of B.

Thermal equilibrium (i.e. there is no net exchange of energy between A and B) occurs if the temperature of A equals that of B. Thus, temperature is the thermodynamic state variable that tells us whether two systems are in thermal equilibrium.

Diffusive equilibrium (i.e. there is no net exchange of particles between A and B) occurs if the chemical potential of particles in A equals that in B, where the chemical potential is defined as
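$$ \mu \equiv -T \left( \frac{\partial S}{\partial N} \right)_{U,V} $$

i.e., minus the temperature times the increase in entropy when one particle is added at fixed energy and volume. (This is the definition Schroeder uses.)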

Starting with this, one can then derive various useful relations, such as those between the Gibbs free energy and the chemical potential (dG = mu dN and G = mu N).
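For example, a quick sketch of the second relation:

$$ dG = -S\,dT + V\,dP + \mu\,dN \;\Rightarrow\; \mu = \left(\frac{\partial G}{\partial N}\right)_{T,P} $$

and since G is extensive, G(T, P, N) = N g(T, P) for some function g, so that mu = g and G = mu N.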
Thus, the chemical potential is the thermodynamic state variable/function that tells us whether or not two systems are in diffusive equilibrium.

Doug Natelson also has a post about this topic. He mentions the American Journal of Physics article on the subject by Ralph Baierlein, which draws heavily on his textbook. However, I did not find that article very helpful, particularly as he mostly uses a microscopic approach, i.e. statistical mechanics. (Aside: the article does have some interesting history in it, though.)
I prefer to first use a macroscopic thermodynamic approach before a microscopic one, as I discussed in my post, What is temperature?

Thursday, May 19, 2016

Strong electron correlations in geophysics

There is some fascinating solid state physics in geology, particularly associated with phase transitions between different crystal structures under high pressure. This provides some interesting examples and problems when teaching undergraduate thermodynamics. One of many nice features of the text by Schroeder is that it has discussions and problems associated with these phase transitions.

However, I would not have thought that electronic transport properties, and particularly the role of electron correlations, would be that relevant to geophysics. But I recently learnt otherwise. A really basic unanswered question in geophysics is the origin and stability of the earth's magnetic field due to the geodynamo. It turns out that the magnitude of the thermal conductivity of solid iron at high pressures and temperatures matters. One must consider not just the relative stability of different crystal structures but also the relative contributions of electron-phonon and electron-electron scattering to the thermal conductivity.

There is a nice preprint
Fermi-liquid behavior and thermal conductivity of ε-iron at Earth's core conditions 
L. V. Pourovskii, J. Mravlje, A. Georges, S.I. Simak, I. A. Abrikosov

They report results that contradict those of a recent Nature paper that has now been retracted.
A few minor observations stimulated by the paper.

a. This highlights the power and success of the marriage of Dynamical Mean-Field Theory (DMFT) with electronic structure calculations based on Density Functional Theory (DFT) approximations. It is impressive that people can now perform calculations to address such subtle issues as the relative stability of, and relative strength of electronic correlations in, different crystal structures.

b. The disagreement between the two papers boils down to thorny issues associated with numerically performing the analytic continuation from imaginary time to real frequency. This is a whole can of worms that requires a lot of caution.

c. Subtle issues, such as the value of the Lorenz ratio (Wiedemann-Franz law) for impurity scattering compared to that for a Fermi liquid, turn out to matter.
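For reference, for elastic (impurity) scattering the Lorenz ratio takes the Sommerfeld value

$$ L_0 \equiv \frac{\kappa}{\sigma T} = \frac{\pi^2}{3} \left( \frac{k_B}{e} \right)^2 \simeq 2.44 \times 10^{-8}\ \mathrm{W\,\Omega\,K^{-2}}, $$

whereas inelastic electron-electron scattering generally reduces the ratio below this value.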

d. I have semantic issues with the use of the term "non-Fermi liquid" in both papers. The authors associate it with a resistivity (at high temperatures) that is not quadratic in temperature. But the system still has quasi-particles that adiabatically connect to those in a non-interacting fermion system, and so to me it is a Fermi liquid.

Tuesday, May 17, 2016

Whoosh.... Just how fast are universities changing?

The world is changing rapidly. It is hard to keep up. Companies boom and bust overnight... Rush .. New technologies disrupt whole industries... Whoosh....  People continually change not just jobs but field....  Universities need to look out..... They need to change rapidly.... The web is totally transforming higher education... Tenure is outdated... Focus on the short term... You may not survive... Whoosh...

This is a common narrative. But is it actually true?

Late last year, The Economist had an interesting article "The Creed of Speed: Is business really getting quicker?"

They look at certain objective quantitative measures to argue that the (surprising) answer to the question is no.
It is worth reading the whole article, but here are a few snippets.
The graph below shows how little some measures have changed in the past 10 years.

More creative destruction would seem to imply that firms are being created and destroyed at a greater rate. But the odds of a company dropping out of the S&P 500 index of big firms in any given year are about one in 20—as they have been, on average, for 50 years. About half of these exits are through takeovers. For the economy as a whole the rates at which new firms are born are near their lowest since records began, with about 8% of firms less than a year old, compared with 13% three decades ago. Youngish firms, aged five years or less, are less important measured by their number and share of employment.
I love the line:
People who use dating apps still go to restaurants.
So there is a puzzle: people feel things are changing rapidly but they actually are not.
A better explanation of the puzzle comes from looking more closely at the effect of information flows on businesses. There is no doubt that there are far more data coursing round firms than there were just a few years ago. And when you are used to information accumulating in a steady trickle, a sudden flood can feel like a neck-snapping acceleration. Even though the processes about which you know more are not inherently moving faster, seeing them in far greater detail makes it feel as if time is speeding up.
I think that there is another reason why the perception of change is greater than the reality: the existence of a whole industry of consultants and managers whose (highly prosperous) livelihood depends on "change management". They spend a lot of time, energy, and money selling the "rapid change" and "impending crisis" line.

The article concludes by emphasising the importance of long term investments, particularly by large firms.
New technologies spread faster than ever, says Andy Bryant, the chairman of Intel; shares in the company change hands every eight months. But to keep up with Moore’s Law the firm has to have long investment horizons. It puts $20 billion a year into plant and R&D. “Our scientists have a ten-year view…If you don’t take a long view it is hard to keep your production costs consistent with Moore’s Law.”
And what about Apple, with the frantic antics of which this article began? Its directors have served for an average of six years. It has invested heavily in fixed assets, such as data centres, which will last for over a decade. It has pursued truly long-term strategies such as acquiring the capacity to design its own chips. Mr Cook has been in his post for four years and slogged away at the firm for 14 years before that. Apple is 39 years old, and it has issued bonds that mature in the 2040s. 
Forget frantic acceleration. Mastering the clock of business is about choosing when to be fast and when to be slow. 
Now what about universities, particularly large ones with good reputations?

First, they are changing much more slowly than companies and are much less susceptible to "market" pressures.
John Quiggin did a comparative study, entitled Rank delusions, of the lists of leading US companies and leading universities over the last 100 years. In contrast to the companies, the university rankings are virtually unchanged.

Second, if you consider institutions such as Harvard, Oxford, Georgia Tech, Ohio State, Indian Institutes of Technology, University of Queensland, they all have in some sense a unique "market" share with few (or no) competitors, particularly with respect to undergraduate student enrolments, within a certain geographic region (country or state). It is a pretty safe bet that they will be just as viable twenty or thirty years from now. They are not going to be like Kodak.

There are certainly cultural changes, such as the shift in values from scholarship to money and status.

But, we should not lose sight of the fact that the "core business" and "products" are not changing much. With regard to teaching, I still centre my solid state physics lectures around Ashcroft and Mermin. I may sometimes use PowerPoint and get the students to use computer simulations. But what I write on the whiteboard, and the struggle for me to explain it and for students to understand it, are essentially the same as they were 40 years ago (i.e. before laptops, smart phones, the web, ...) when Ashcroft and Mermin was written.

What about research?
Well, good research is just as difficult as it was in pre-web days, before metrics and MBAs. It may be easier to find literature and to communicate with colleagues via email and Skype. But these are second-order effects...

So what are the lessons here?

Foremost, faculty and administrators need to have a long term view.
We should be skeptical about the latest fads and crises, and focus on investing for the future.
Becoming a good teacher takes many years of practice and experience.
The most significant research requires long term investments in developing and learning new techniques, including many false leads and failures.
Real scholarship takes time.
It is the quality of the faculty, not the administrative policies or the slickness of the marketing, that makes a great university.
Attracting, nurturing, and keeping high-quality faculty is a long and slow process that requires stability and long term investments.

What do you think?
How rapidly are things changing?
How much do we need to adapt and change?

Friday, May 13, 2016

The power of simple free energy arguments

I love the phase diagram below and like to show it to students because it is so cute.


However, in terms of understanding, I always found it a bit bamboozling.

On Monday I am giving a lecture on phase transformations of mixtures, closely following the nice textbook by Schroeder, Section 5.4.

Such a phase diagram is quite common.
Below is the phase diagram for the liquid-solid transition in mixtures of tin and lead.

Having prepared the lecture, I now understand the physical origin of these diagrams.

The eutectic point [from the Greek for "easy melting"] is the lowest temperature at which the liquid is stable.

What is amazing is that one can understand these diagrams from simple arguments based on a very simple and physically motivated functional form for the Gibbs free energy that includes the entropy of mixing.
It is of the form

G(x) = C + D x + E x(1-x) + T [x ln x + (1-x) ln(1-x)]

where x is the mole fraction of one of the two substances in the mixture and T is the temperature.
The parameters C, D, and E are constants for a particular state.

The second term represents the free energy difference between pure A and pure B.
The third term represents the energy difference between A-B interactions and the average of A-A and B-B interactions. [I am not sure this is completely necessary].
The crucial last term represents the entropy of mixing (for ideal solutions).

Below, one compares the G(x) curves for the three states: alpha (a solid mixture with the alpha crystal structure), beta, and liquid, in order to construct the phase diagram.
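Here is a minimal Python sketch (my own; all parameter values are invented purely for illustration) that plots two such G(x) curves at a fixed temperature:

```python
# Model free energy G(x) = C + D*x + E*x*(1-x) + T*[x ln x + (1-x) ln(1-x)]
# Units are arbitrary (gas constant set to 1); parameters are illustrative only.
import numpy as np
import matplotlib.pyplot as plt

def G(x, C, D, E, T):
    # The entropy-of-mixing term has infinite slope at x = 0 and x = 1,
    # which is why a little mixing always lowers the free energy.
    return C + D * x + E * x * (1 - x) + T * (x * np.log(x) + (1 - x) * np.log(1 - x))

x = np.linspace(1e-4, 1 - 1e-4, 400)  # avoid the endpoints where ln diverges
T = 0.7
plt.plot(x, G(x, C=0.0, D=0.2, E=0.8, T=T), label="solid (alpha)")
plt.plot(x, G(x, C=0.3, D=-0.1, E=0.0, T=T), label="liquid")
plt.xlabel("mole fraction x")
plt.ylabel("G(x)")
plt.legend()
plt.show()
```

Repeating this at a sequence of temperatures, and applying the common-tangent construction to the curves at each one, traces out the phase boundaries in diagrams like those above.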