Friday, August 26, 2016

What are the worst nightmare materials?

Not all materials are equal. Over the years I have noticed that there are certain materials that are rich, complex, and controversial.

Common problems (or opportunities) include: chemical composition that is extremely hard to control, many competing ground states, a tendency toward inhomogeneity and instability, structural phase transitions, sensitivity to impurities (especially oxygen and water), and surface properties that can differ significantly from those of the bulk. One never knows quite which material system is being measured, regardless of what authors and enthusiasts may claim.

Consequently, these materials can be an abundant source of spurious experimental results leading to endless debates about their validity and possible exotic theoretical interpretation.

Pessimist's view: these materials are a minefield for both experimentalists and theorists, and with time the "exciting" results will disappear. They are a scientific nightmare. Be skeptical. Avoid.

Optimist's view: this is exciting science and there are promising technological applications. Jump in. With time we will sort out all the details.

Here are some of my candidates for the "best/worst" nightmare materials I have encountered.

Cerium oxides: controlling the stoichiometry is very tricky and the chemical and physical properties vary significantly with oxygen content. Yet because of (or in spite of ?!) this, they have significant industrial applications...

water: polywater, "memory", and the "liquid-liquid" transition in the supercooled phase....

1T-TaS2: it undergoes multiple charge density wave transitions as the temperature is lowered; there is "star of David" charge density wave order with a thirteen (!) site unit cell, a Mott insulator transition, superconductivity upon doping, and ultrafast electrical switching behaviour, ...

purple bronze, Li0.9Mo6O17: superconductivity, non-Fermi liquid, large thermopower, ...

melanin....

What do you think are the "best" nightmare materials?

Wednesday, August 24, 2016

Subtleties in the theory of the diamagnetic susceptibility of metals

A magnetic field can couple to the electrons by two distinct mechanisms: by the Zeeman effect associated with the spin of the electrons and via the orbital motion of the electrons.

In the absence of spin-orbit coupling the Zeeman effect is independent of the direction of the magnetic field and leads to Pauli paramagnetism.

The orbital motion leads to Landau diamagnetism; for free electrons with a parabolic dispersion (and mass m) in three dimensions its magnitude is one-third that of the Pauli susceptibility, and it is opposite in sign.

What happens for a parabolic band with effective mass m*?
The Pauli susceptibility is enhanced by a factor of m*/m and the Landau susceptibility is scaled by a factor of m/m*. Thus in semiconductors (where m* can be much less than m) the latter can become dominant.
In a simple Fermi liquid enhancing the interactions will make the spin susceptibility even more dominant over the orbital susceptibility.
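
To make the mass scaling explicit, here is a minimal sketch for a single parabolic band with effective mass m* (assuming a free-electron g-factor, no spin-orbit coupling, and fixed carrier density):

\[
\chi_{\rm Pauli} \propto g(E_F) \propto m^*,
\qquad
\chi_{\rm Landau} = -\frac{1}{3}\left(\frac{m}{m^*}\right)^2 \chi_{\rm Pauli} \propto \frac{1}{m^*}.
\]

So the ratio of orbital to spin susceptibility scales as (m/m*)^2: negligible when correlations make m* large, but potentially dominant in semiconductors where m* is much smaller than m.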

What happens in the presence of a band structure?
This problem was "solved" by Peierls in 1933, leading to the Landau-Peierls formula (a commonly quoted form is sketched below).
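
For reference, a commonly quoted form of the Landau-Peierls susceptibility for a single band epsilon(k), with the field along z, is (a sketch in Gaussian units; I have not checked the prefactor against Peierls' original paper)

\[
\chi_{\rm LP} = \frac{e^2}{12 \hbar^2 c^2} \frac{1}{V} \sum_{\mathbf{k}} f'(\epsilon_{\mathbf{k}})
\left[ \frac{\partial^2 \epsilon}{\partial k_x^2} \frac{\partial^2 \epsilon}{\partial k_y^2}
- \left( \frac{\partial^2 \epsilon}{\partial k_x \partial k_y} \right)^2 \right],
\]

where f is the Fermi function. Since f' is negative, the response is diamagnetic where the band curvature is free-electron-like, and the expression reduces to the Landau result for a parabolic dispersion.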


Aside: this is the paper where he introduced the famous Peierls factor for the effect of an orbital magnetic field on a tight-binding Hamiltonian.
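
For completeness, the Peierls substitution modifies each hopping amplitude by a phase factor (Gaussian units; sign conventions vary between papers):

\[
t_{ij} \;\to\; t_{ij} \exp\!\left( \frac{ie}{\hbar c} \int_{\mathbf{R}_j}^{\mathbf{R}_i} \mathbf{A} \cdot d\mathbf{l} \right),
\]

where A is the vector potential and the integral runs along the straight line between lattice sites j and i.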

However, according to two recent papers there is more to the story.

Geometric orbital susceptibility: quantum metric without Berry curvature
Frédéric Piéchon, Arnaud Raoux, Jean-Noël Fuchs, Gilles Montambaux

Orbital Magnetism of Bloch Electrons I. General Formula
Masao Ogata and Hidetoshi Fukuyama

What happens in the presence of electron-electron interactions?
This is the question I am ultimately interested in. Clearly, in the Fermi liquid regime, strong correlations will enhance m*/m (or equivalently reduce the effective bandwidth) and reduce the relative importance of the orbital susceptibility. However, in other regimes, such as in a bad metal, it is not clear. One treatment of the effect of spin fluctuations is here.

Tuesday, August 23, 2016

Violation of AdS-CFT bounds on the shear viscosity

Tomorrow I am giving a seminar on the absence of quantum limits to the shear viscosity in the Theoretical Physics department at the Stefan Institute in Ljubljana, Slovenia.

Here is the current version of the slides.
The main results are in this paper.
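
For context, the conjectured Kovtun-Son-Starinets (KSS) bound arising from AdS-CFT is

\[
\frac{\eta}{s} \;\geq\; \frac{\hbar}{4\pi k_B},
\]

where eta is the shear viscosity and s the entropy density; the seminar concerns whether such bounds actually constrain real metals.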


This is Lake Bled, a popular tourist destination outside the city.


Friday, August 19, 2016

Signatures of strong vs. weak coupling in the superconducting phase?

Superconductivity in strongly correlated systems such as cuprates, organic charge transfer salts, and the Hubbard model presents the following interesting puzzle or challenge.

On the experimental side the superconducting phase can extend from a region of strong correlation (close proximity to the Mott insulator) to one of weak correlation (a Fermi liquid metal with a small mass enhancement).

On the theoretical side, one can obtain the d-wave superconducting state from a weak coupling approach (renormalisation group or random phase approximation) or a strong coupling approach such as an RVB variational wave function.
Aside: This also relates to the challenge/curse of intermediate coupling.

Given that in the two extremes the superconducting state emerges as an instability from two very different metallic states, the questions are:
What signatures or properties does the superconducting state (or "mechanism") have of these two distinct regimes (strong vs. weak coupling)?
Is it even possible that there is actually a phase transition (or at least a crossover) between different superconducting states?

Here is a partial answer, following this paper:
Energetics of superconductivity in the two-dimensional Hubbard model 
E. Gull and A. J. Millis

In the weak coupling regime (smaller U, higher doping) the superconducting state becomes stable, as in traditional BCS theory, because the potential energy decreases by more than the kinetic energy increases.
In contrast, in the strong coupling regime (large U, lower doping, in the pseudogap region) the opposite occurs: the superconducting state becomes stable because the kinetic energy decreases by more than the potential energy increases.
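
Schematically, writing the condensation energy as the sum of the changes (superconducting minus normal state) in kinetic and potential energy,

\[
\Delta E_{\rm cond} = \Delta \langle K \rangle + \Delta \langle V \rangle < 0,
\]

the two regimes correspond to

\[
\text{weak coupling:}\quad \Delta \langle V \rangle < 0,\; \Delta \langle K \rangle > 0;
\qquad
\text{strong coupling:}\quad \Delta \langle K \rangle < 0,\; \Delta \langle V \rangle > 0.
\]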
This is summarised in the figure below.


Aside: note how the condensation energy (the net energy difference) is much smaller than the magnitudes of the individual kinetic and potential energy changes. This highlights how, as is often the case in strongly correlated systems, there is a very subtle energy competition. This is one reason why theory is so hard and why one can observe many competing phases.

I thank Andre-Marie Tremblay, Peter Hirschfeld and other Aspen participants for stimulating this post.

Thursday, August 18, 2016

Signatures of strong electron correlations in the Hall coefficient of organic charge transfer salts

Superconducting organic charge transfer salts exhibit many signatures of strong electron correlations: Mott insulator, bad metal, renormalised Fermi liquid, ...

Several times recently I have been asked about the Hall coefficient. There really is little experimental data. More is needed. But, here is a sample of the data for the metallic phase.
Generally, increasing pressure reduces correlations and moves the system away from the Mott insulator. Almost all of these materials are at half filling, and at high pressures there is a well-defined Fermi surface, clearly seen in angle-dependent magnetoresistance and quantum oscillation experiments.

The figure below is taken from this paper. At low temperatures the Hall coefficient is weakly temperature dependent and has a value consistent with the charge carrier density, i.e., what one expects in a Fermi liquid. However, above about 30 K, which is roughly the coherence temperature corresponding to the crossover to a bad metal, R_H decreases significantly and appears to change sign.
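
As a reference point, for a single band of carriers with density n the low-temperature Hall coefficient of a Fermi liquid is expected to be roughly

\[
R_H \simeq \frac{1}{n e},
\]

with a sign set by the carrier type and only weak temperature dependence; it is the strong temperature dependence (and possible sign change) above the coherence temperature that signals the breakdown of this simple quasiparticle picture.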


The next data set is from this paper and shows measurements on two different samples of the same material.
Note how, in the two samples at a pressure of 4 kbar, the temperature dependence and magnitude are not the same. This should be a point of concern about the reliability of the measurements.
But, broadly one sees again a significant temperature dependence, particularly on the scale of the coherence temperature.

Finally, the data below is from a recent PRL and is for a material that is argued to be away from half filling (doped with 0.11 holes per lattice site, i.e., per dimer).

At high pressures there is a large number of charge carriers and the Hall coefficient has only a weak temperature dependence, consistent with a Fermi liquid with a "large" Fermi surface.
However, at low pressure (i.e. when the metal is more correlated) the Hall coefficient becomes large and temperature dependent.

I thank Jure Kokalj, Jernej Mravlje, Peter Prelovsek, and Andre-Marie Tremblay for stimulating discussions about the data.

I welcome any comments.
Later I will post about the theoretical issues.

Monday, August 15, 2016

Aspen versus Telluride

The Aspen Center for Physics is a unique and wonderful institution offering relaxed and stimulating workshops in the midst of great scenery. It has been the setting for many famous collaborations and papers.
Maybe it is an apocryphal story, but I heard that the theoretical chemists got jealous and so started the Telluride Science Research Center.

This (northern) summer I was privileged to spend time at both, and so I offer some friendly comparisons. Both are excellent, and so if you have the opportunity to attend either, I would encourage you to.

Participation. 
This is highly selective and mostly restricted to faculty, with a few postdocs. Workshops are small, with typically only about twenty participants. For Telluride you have to be invited and for Aspen you apply and are then selected.

Duration.
For Telluride most workshops run for 5 days. For Aspen they run for 3-4 weeks and participants must come for a minimum of two weeks. Apparently, in the good old days people used to stay for longer.

Program.
For Telluride this is closer to a small conference with many talks during each day; although, some mornings or afternoons, and sometimes whole days are free. In contrast, in Aspen there are usually at most a couple of hours of talks, and sometimes none, on each day. The emphasis is really on informal interactions.

Housing.
In both cases this is arranged by the Center. In Aspen it is subsidised by an NSF grant and so is more affordable (e.g., $75 per week for "bachelor" housing, i.e., a shared apartment).

Organisation.
Both Centers are run by very professional staff who take care of all the logistics. So the organisers' sole responsibility is selecting participants and setting the program. Thus, if you want to organise a small workshop this is a very easy way to do it.

Offices.
Aspen has its own building with offices, so all participants have a desk in a shared office. Telluride meets in a local school and there are no desks for participants, which is fine since the programs are so busy.

Powerpoint vs. blackboards.
Something unique about Aspen is that most talks are on a blackboard. Generally, only experimentalists are allowed to use powerpoint. I really think this is a very positive thing as it significantly increases clarity and focuses on the key points.

Local scenery.
Although it is spectacular in both towns, I do think that Telluride is superior, because you can see massive snow-covered peaks from within the town.

Gondola.
Again Telluride wins. The gondola is free.  Most days I take it to the top of the mountain just to bask in the views. In Aspen I have never taken the gondola because I am too cheap...

Hiking.
In both towns there are nice short hikes literally from the town. Both have trails along the river running through the town. However, for Telluride there are serious hikes you can do starting from the town or the top of the gondola. For Aspen, you have to drive out of town or pay to get the bus to Maroon Bells, which takes about an hour.

Altitude sickness.
Both towns are above 8,000 feet and so altitude sickness is not unusual. It is strange that I have been to Telluride six times and never had a problem, yet on my last two visits to Aspen I did have a mild case. One important preventative measure is to drink lots of water.

Travel and accessibility.
The scenic locations in the Rocky Mountains come at a cost. Neither is easy or cheap to get to. For both, one may have to fly through Denver, where flight delays and missed connections are not unusual. Some participants drive from Denver.

Public lectures.
Both Centers run regular evening public lectures throughout the summer, given by one of the participants. These are often quite well attended by the local community and tourists. Given the demographics of both towns (the rich and powerful) I think this is a wise investment. You never know whether the next Moore, Gates, or Kavli will be in the audience...

Colloquia.
In Aspen there is a weekly colloquium, given by someone from one of the current workshops, that all participants are required to attend, in the hope of encouraging interaction between workshops. In the past two weeks I heard two excellent talks on biological physics, by K.C. Huang and Lucy Colwell.
Telluride does not do this. Maybe it should.

Physics versus chemistry.
Most of the Telluride workshops are on chemistry or biology, with a smattering on materials science, involving physicists. As far as I am aware Aspen doesn't do much to encourage interactions between physics and chemistry. I think both Centers could benefit from trying to facilitate this more.

Saturday, August 13, 2016

Diminishing returns and opportunity costs in science

Consider scientific productivity as a problem in economics. One has a limited amount of resources (time, money, energy, political capital) and one wants to maximise the scientific output. Here I want to stress that the real output is scientific understanding. This is not the same as numbers of papers, grants, citations, conferences, newspaper articles, ...

The limited amount of resources is relevant at all scales: individual, research group, fields of research, departments, institutions, funding agencies, ...

As time passes one needs to face the problem of diminishing returns with increased resources. Consider the following diverse set of situations.

Adding extra parameters to a theoretical model.

Continuing to work on developing a theory without advances.

Calculating higher order corrections to a theory in the hope of getting better agreement with experiment.

Applying for an extra grant.

Taking on another student.

In quantum chemistry using a larger basis set or a higher level of theory (i.e. more sophisticated treatment of correlations).

Developing new exchange-correlation functionals for density functional theory (DFT).

Trying to improve an experimental technique.

Repeating measurements or calculations in the hope of finding errors.

When one starts out it is never clear that these efforts will bear fruit. Sometimes they do. Sometimes they don't. But inevitably, I think one has to face the law of diminishing returns.

These thoughts were stimulated by two events in the last week. One was reading Not Even Wrong: The Failure of String Theory and the Search for Unity in Physical Law by Peter Woit. The second was being part of a workshop on superconductivity that featured many discussions about the high-Tc cuprate superconductors.
The book chronicles how, in spite of thousands of papers over the past thirty years, high energy theory has not really produced any ideas beyond the standard model that are relevant to experiment, or even a theory that is coherent.
I don't think the cuprate field is in such dire straits. There are real experiments and concrete theoretical calculations. But it may be debatable whether we are gaining significant new insights. This is a hard problem on which we have made some real progress, but will we make more?

Even when one is making advances one needs to consider the useful economic concept of opportunity cost: if the resources were directed elsewhere would one produce greater scientific gains? This again applies at all scales, from personal to funding agencies.

So how does one decide to move on? When is it time to quit?
I think there is a highly subjective and personal element to deciding when one has reached the point of diminishing returns.

One also needs to be careful because there are plenty of times in the history of science where individuals persevered for many years without progress, but eventually had a breakthrough.
e.g. Watson and Crick, John Kendrew and the first protein crystal structure, theory of superconductivity, ...

I welcome suggestions.
How do you decide when you are at the point of diminishing returns?
How do you decide when a research field or topic is at that point?