Wednesday, April 30, 2014

Draft of Colloquium on Emergent States of Quantum Matter

Next week I am giving the Physics Department Colloquium at UQ. I am working hard at trying to follow David Mermin's advice and make it appropriately basic and interesting. I am tired of listening to too many colloquia that are more like specialist research seminars.

I would welcome any feedback on what I have done so far. Here is the draft of the abstract.
Emergent states of quantum matter 
When a system is composed of many interacting components new properties can emerge that are qualitatively different from the properties of the individual components. Such emergent phenomena lead to a stratification of reality and of scientific disciplines.
Emergence can be particularly striking and challenging to understand for quantum matter, which is composed of macroscopic numbers of particles that obey quantum statistics. Examples include superfluidity, superconductivity, and the fractional quantum Hall effect. I will introduce some of the organising principles for describing
such phenomena: quasi-particles, spontaneously broken symmetry, and effective Hamiltonians. I will briefly describe how these ideas undergird some of my own research on complex molecular materials such as organic charge transfer salts, fluorescent proteins, and hydrogen bonded complexes. The interplay of emergence and reductionism raises philosophical issues, and questions as to the best scientific strategy for describing complex systems.

Here is a very preliminary version of the slides. [Latest version is here].

Let me know of any ways to make any of this clearer and more interesting.

Tuesday, April 29, 2014

What are the ten most remarkable scientific ideas?

Feynman said the most important idea is that all things are made from atoms. On the weekend I listened to a short and fascinating talk by Bill Bryson, The four most remarkable things I know.
So I wondered: what do I think? What are the ten most remarkable scientific ideas?

I have used the following rough criteria. The idea
  • is far from obvious
  • is often not thought about because we have become so used to it that we take it for granted 
  • may evoke not just an intellectual response but also a somewhat emotional one of wonder and awe
  • is profound but can be simply stated
  • is a specific law, principle, or property, rather than a general scientific idea, such as that laws can be encoded mathematically, that experiments must be repeated, or that the same laws apply everywhere in the universe.
Here is my first rough attempt at a list of the top ten, in no particular order. I hope it will generate some discussion.

1. The universe had a beginning.

2. Time has a direction.

3. The fundamental constants of nature are fine-tuned for life.

4. All elementary particles are identical.

5. Energy is quantised.

6. Particles are fields and fields are particles.

7. All of life has a common molecular template (DNA and proteins).

8. Everything is made from atoms. The periodic table of chemistry.

9. Evolution: many small genetic variations can produce biological diversity.

10. Emergence and reductionism. Complexity can emerge from simplicity.

Here are some runners up. Some are more specific versions of those above.

A. The genetic code. DNA prescribes protein synthesis.

B. Genetic information is encoded in DNA.

C. Water is a unique liquid with remarkable properties that have important implications for biomolecular function.

D. Diffraction of waves [x-rays, electrons, neutrons] can be used to determine the atomic structure of materials.

E. The geometry of molecules and their chemical reactivity are determined by quantum mechanics [and can be described by potential energy surfaces].

F. The second law of thermodynamics: entropy is a state function. Free energy determines stability of open systems.

G. Symmetry constrains physical laws; spontaneously broken symmetry leads to different physical interactions and states of matter.

H. Macroscopic properties are determined by microscopic properties.

I. Protein folding. Amino acid sequence uniquely determines protein structure which determines function.

I have not included anything about earth science, due to my ignorance.

Presumably, others have compiled such lists and taught courses based on them. Please let me know. One example is a course by Robert Hazen and James Trefil. Each chapter is centred around a great idea.

What do you think?
How would you change the above lists?

Friday, April 25, 2014

Slow spin dynamics in the bad Hund's metal

I have the following picture of a bad metal. It is halfway between a Fermi liquid and a Mott insulator. This means that although there is no energy gap, the electrons are almost localised. The imaginary part of their self-energy is comparable to the electron bandwidth. Since they are almost localised they have slowly fluctuating local moments. Hence, the dynamical spin correlation function chi_s(omega) should be narrow, although I am not sure on what energy scale. Somehow I expect a qualitative change in chi_s(omega) as the temperature increases above the coherence temperature and the system crosses over from the Fermi liquid to the bad metal.
But I am not sure whether this picture is correct because, as far as I am aware, there are very few calculations of chi_s(omega), and particularly not of its temperature dependence.

There are a few Dynamical Mean-Field Theory [DMFT] calculations at zero temperature, i.e., in the Fermi liquid regime, such as described here. 
In the Mott insulating phase there is a delta function peak, associated with non-interacting local moments, as described here.

There are a few calculations in imaginary time, but little discussion of exactly what this means in real time. I struggle to make the connection.
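[One way to make the connection explicit is the standard spectral representation for a bosonic correlation function; this is textbook material, not taken from the papers discussed here:

$$\chi_s(\tau) = \int_0^\infty \frac{d\omega}{\pi}\, \mathrm{Im}\,\chi_s(\omega)\, \frac{\cosh[\omega(\beta/2-\tau)]}{\sinh(\omega\beta/2)}$$

Note that at tau = beta/2 the kernel reduces to 1/sinh(omega beta/2), so chi_s(beta/2) is dominated by frequencies small compared to the temperature. This is what makes it a useful diagnostic below.]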
The figure below shows a DMFT calculation of the imaginary time spin correlation function for the triangular lattice, by Jaime Merino, and reported in this PRB.


Here, I want to highlight two DMFT calculations for multi-band Hubbard models including Hund's rule coupling.

Spin freezing transition and non-Fermi-liquid self-energy in a three-orbital model
Philipp Werner, Emmanuel Gull, Matthias Troyer, Andy Millis

Dichotomy between Large Local and Small Ordered Magnetic Moments in Iron-Based Superconductors
P. Hansmann, R. Arita, A. Toschi, S. Sakai, G. Sangiovanni, and Karsten Held

The first paper reports the phase diagram below, where n is the average number of electrons per lattice site. The solid vertical lines represent a Mott insulating phase.

The boundary between the Fermi liquid phase and the "frozen moment" phase is determined from the figure below. The top set of curves show the spin-spin correlation function.


The important distinction is that in the Fermi liquid phase chi_s(tau = beta/2) goes to zero linearly with temperature, whereas in the frozen moment regime it tends to a non-zero constant.
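To visualise the contrast, here is a toy plot of the two limiting behaviours. The functional forms and coefficients are illustrative assumptions for plotting only, not DMFT results.

```python
# Toy illustration (not DMFT data) of the diagnostic quoted above:
# chi_s(tau = beta/2) vanishes with T in a Fermi liquid but saturates
# at a non-zero constant in the frozen-moment regime.
import numpy as np
import matplotlib.pyplot as plt

T = np.linspace(0.005, 0.2, 100)   # temperature in units of the hopping t

chi_FL = 0.5 * T                   # Fermi liquid: -> 0 with T (assumed form)
chi_frozen = 0.4 + 0.1 * T         # frozen moments: non-zero intercept (assumed form)

plt.plot(T, chi_FL, label="Fermi liquid")
plt.plot(T, chi_frozen, label="frozen moments")
plt.xlabel("T / t")
plt.ylabel(r"$\chi_s(\tau=\beta/2)$")
plt.legend()
plt.show()
```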

Similar results are obtained in the second paper, on a slightly different model, but they don't use the "frozen moment" language, and emphasise more the importance of the Hund's rule coupling.

Here, I have a basic question about the above results. The lowest temperature used in the Quantum Monte Carlo is T = t/100 [impressive], and so I wonder whether this is above the Fermi liquid coherence temperature, and whether, if one could go to low enough temperatures, one would recover a Fermi liquid.

It is known that the coherence temperature is often much smaller for two-particle properties, and Hund's rule can also dramatically lower it. Both points are discussed here.

Or is my question [bad metal vs. frozen moments] just a pedantic distinction? The important point is that, practically speaking, over a broad temperature range, the spins are effectively relaxing very slowly.

Anyway, I think there is some rich physics associated with spin dynamics in bad metals and hope it will be explored more in the next few years. I welcome discussion and particularly calculations!

Wednesday, April 23, 2014

Is publishing debatable conclusions now encouraged?

I am increasingly concerned about how many papers, particularly in luxury journals, publish claims and conclusions that appear (at least to me) to simply not follow from the data or calculations presented.
Is this problem getting worse?
Or am I just getting more sensitive about it?

Last year Nature published
Bounding the pseudogap with a line of phase transitions in YBa2Cu3O6+δ
The abstract states
Here we report that the pseudogap in YBa2Cu3O6+δ is a distinct phase, bounded by a line of phase transitions. The doping dependence of this line is such that it terminates at zero temperature inside the superconducting dome. From this we conclude that quantum criticality drives the strange metallic behaviour and therefore superconductivity in the copper oxide superconductors.
Let me examine separately the three claims I have highlighted in bold.

1. The claim that the line terminates at zero temperature is based on two data points! (the red dots in the figure below).

To be convinced of this claim I would like to see a lot more data points, particularly extending down to a few Kelvin. Furthermore, even if you believe the authors are seeing a real continuous phase transition, one wants to see that it does not terminate in a first-order line at some non-zero temperature.

1b. A line terminating at zero temperature [a quantum phase transition] is not the same as quantum criticality. Establishing that requires observing distinct features and scaling laws [e.g. a dephasing rate that is linear in temperature] at temperatures above the quantum critical point.

For the next two claims, it is important to distinguish causality and correlation. Just because one sees two things together does not mean that one causes the other. They could both be caused by some other underlying effect.

2. "quantum criticality drives the strange metal behaviour".
Here, it could be simply that in this material the phase transition and the strange metal occur at roughly the same doping. Furthermore, there are alternative explanations of the strange metal behaviour.

3. "and therefore superconductivity  in the cuprates".
I fail to see the logic. Varma's theory certainly connects quantum criticality, the strange metal, and superconductivity. But, there are alternatives that do not make this intimate connection. Hence, I can't see how it is legitimate to make this conclusion.

In contrast to the abstract, the last few sentences of the paper make the more modest claim:
Our observed evolution of the pseudogap phase boundary from underdoped to overdoped establishes the presence of a quantum critical point inside the superconducting dome, suggesting a quantum-critical origin for both the strange metallic behaviour and the mechanism of superconducting pairing.
Now, the story gets stranger. Look at version 1 of the paper on the arXiv. Presumably this is the version that was originally submitted to Nature. The abstract does not have the debatable sentences but instead the reasonable statements:
In slightly overdoped YBCO that transition is 20K below Tc, extending the pseudogap phase boundary inside the superconducting dome. This supports a description of the metallic state in cuprates where a pseudogap phase boundary evolves into a quantum critical point masked by the superconducting dome.
So, I would love to know whose idea it was to change the abstract. Is there any chance it was a Nature editor who wanted to "sex up" the paper?

But my real problem is not so much with this specific paper, but the many other cases I see, sometimes in non-luxury journals. Science is all about using rigorous thinking and experimentation to find out what is actually true, as best as we can tell. It is fine to speculate and to suggest possible correlations and causality. But that is totally different to claiming you have shown something to be true when you have not. We need to be precise in our language. I do think science is broken.

If we don't practise rigorous evidence-based thinking in our own community, what right do we have to challenge politicians and business people who embrace climate change skepticism, opposition to vaccines, AIDS denialism, .....

Tuesday, April 22, 2014

A survival and sanity guide for new faculty

Occasionally I have conversations with young faculty starting out, which often move to how stressful and frustrating their jobs are. I find it pretty disturbing how the system is drifting, and some of the pressures put on young faculty.

So here is my advice to tenure-track faculty aimed to help preserve their sanity and to survive.
The post is not directed towards non-tenure-track people [adjunct faculty, research assistant professors, fixed-term lectureships....]. Their case is a whole different can of worms, although some of the advice below is still relevant. An underlying assumption here is that your institution wants to keep you, and so provided that you publish some papers, don't completely mess up your teaching, have some grad students, and get some funding, then you will probably get tenure. So how do you stay sane?

1. Tune out the noise.
You will hear countless voices shouting and whispering from inside and outside the university about a host of issues that can easily distract and/or demoralise you: government cutbacks, off-shore campuses, institutional reform, impact factors, public outreach, MOOCs, metrics, money, teaching innovations, restructuring, rankings.....
In particular, I think reading things like The Chronicle of Higher Education, The Times Higher Education Supplement, The Australian Higher Education Section, and listening to upper management is generally a mistake. Mostly what you will hear will concern the latest crisis or fad that will be forgotten in a few years. Furthermore, most of these factors are beyond your control or not directly relevant. Just focus on publishing a few papers and getting your teaching done. And enjoying yourself...

2. Under-prepare for teaching.
Don't be a perfectionist. The first time you teach a course there will be loose ends. In particular, avoid time-consuming things like revamping a course, fancy PowerPoint presentations, teaching innovations, and complicated assessment. Save these for the third or fourth time you teach a course.

3. Don't take on every prospective Ph.D student.
The pressure to take on new students can be great. But mediocre students can consume large amounts of your time, not produce anything, and may not even graduate. That will probably be held against you, not against the student. A number of older faculty have told me that one of their regrets is that when starting out they were not discerning enough.

4. Don't take every opportunity to apply for funding.
This is not what many above you will tell you. That is because the more people applying, the greater the institution's chance of getting funding.
But writing grants, particularly for a novice, takes a lot of time. Most grants take about as long to write as a whole paper. Furthermore, success rates are very low. Sometimes it is better to take a pass and instead spend the time actually doing research and writing papers; that will improve your track record and increase your chances next time you apply.

5. Make sure you have a senior advocate in your department.
You need at least one senior faculty member who knows what you are doing, likes you, and will "go in to bat" for you. Keep in regular touch.
Hopefully, she/he is not an "operator" or polarising personality, because then you may become a pawn in some grander scheme.

6. Get objective advice from outside your department and perhaps outside your institution.
Hopefully, the advice you get is not coloured by vested interests or the peculiar local slant on things, e.g., your chances of getting a particular grant or moving into a particular research area.
But, it may be. An outside perspective can balance things.

7. Don't compare yourself to your peers.

8. Don't take negative feedback from students personally. 
It may say more about them than you.

I welcome comments from tenure-track faculty as to whether this is relevant and helpful. It would also be good to hear from older faculty who survived. What would they do differently or the same?

Saturday, April 19, 2014

Four reasons why the USA has the best universities

Why does the USA have the best universities? It is not just that they have more money, as is claimed, for example here.

Hunter Rawlings is a former President of Cornell, and currently the President of the Association of American Universities, a consortium of 60 of the leading North American universities.
He recently gave a fascinating talk Universities on the Defensive. He states
Our colleges and universities became the best in the world for four essential reasons: 
1) They have consistently been uncompromising bastions of academic freedom and autonomy; 
2) they are a crazily unplanned mix of public and private, religious and secular, small and large, low-cost and expensive institutions, all competing with each other for students and faculty, and for philanthropic and research support; 
3) our major universities combined research and teaching to produce superior graduate programs, and with the substantial help of the federal government, built great research programs, particularly in science; 
4) our good liberal-arts colleges patiently pursued great education the old-fashioned way: individual instruction, careful attention to reading and writing and mentoring, passion for intellectual inquiry, premium on original thought. ..... education of the whole person for citizenship in a culture.  
He points out how point 2 particularly presents problems for "top down" management, such as that pursued by China.

How do Australian universities rate on the above four ingredients?
I would say pretty poorly. In particular, they are fairly homogeneous, all competing to be the same thing, and largely driven by Federal government policy. For example, the Dawkins reforms of the late 1980s, including the abolition of tenure, changed them forever. This "neo-liberalism" has particularly undermined point 4 above.

Rawlings is skeptical about metrics.
Albert Einstein apparently kept a sign in his office that read, “Not everything that counts can be counted, and not everything that can be counted counts.” This aphorism applies all too well to our current rage for “accountability.” As Derek Bok  [a former President of Harvard] points out in his recent book Higher Education in America,  
“Some of the essential aspects of academic institutions — in particular the quality of the education they provide — are largely intangible, and their results are difficult to measure.” 
Frankly, this is an obvious point to make, but all of us have to make it, and often, in today’s commodifying world. Quantity is much easier to measure than quality, so entire disciplines and entire academic pursuits are devalued under the current ideology, which puts its premium on productivity and efficiency, and above all else, on money, as the measure.
I found reading Rawlings's article quite refreshing. The message seems quite different from what I think I hear from Australian university leaders. But perhaps I am mis-interpreting their messages.

Thursday, April 17, 2014

Even Sheldon Cooper has given up on string theory

More and more I look at Peter Woit's blog to get his take on string theory, cosmology, and high-energy physics. I think he is doing a great job challenging some of the lame arguments that are presented for the validity and importance of string theory. Even worse is the multiverse.... I was shocked to read the claim made by Arkani-Hamed that if no supersymmetric partners are found at the LHC then the multiverse must exist!
Then there is the Cambridge University Press book claiming that string theory represents a new paradigm for doing science: you don't need empirical support for a theory to be accepted as true!
I find all of this rather bizarre and scary....

Recently, Woit pointed out that even Sheldon Cooper has given up on string theory.

Wednesday, April 16, 2014

A definitive experimental signature of short hydrogen bonds in proteins: isotopic fractionation

I have written several posts about the controversial issue of low-barrier hydrogen bonds in proteins and whether they play any functional role, particularly in enzyme catalysis.

A basic issue is to first identify short hydrogen bonds, i.e., finding a reliable method to measure bond lengths.
I recently worked through a nice article,
NMR studies of strong hydrogen bonds in enzymes and in a model compound
T.K. Harris, Q. Zhao, A.S. Mildvan

Surely these bond lengths can just be determined with X-ray crystallography? No.
the standard errors in distances determined by protein X-ray crystallography are 0.1–0.3 times the resolution. For a typical 2.0 Å X-ray structure of a protein, the standard errors in the distances are ±0.2–0.6 Å, precluding the distinction between short, strong and normal, weak hydrogen bonds. 
[Aside: I also wonder whether the fact that X-ray crystal structures are refined with classical molecular dynamics using force fields that are parametrised for weak bonds is also a problem. Such refinements will naturally bias towards weak bonds, i.e., the longer bond lengths that are common in proteins. I welcome comment on this.]

The authors then discuss how NMR can be used for bond length determinations. One of these NMR "rulers" involves isotopic fractionation, where one measures how much the relevant protons exchange with deuterium in a solvent.


Essentially, the relative fraction [ratio of concentrations] in thermodynamic equilibrium,

phi = ( [D]/[H] in the enzyme ) / ( [D]/[H] in the solvent ),

is determined by the relative zero-point energy (ZPE) of a D relative to an H in the enzyme. As described in a key JACS article, the ratio is given by a formula such as

phi ~ exp( [ (ZPE_H - ZPE_D)_enzyme - (ZPE_H - ZPE_D)_solvent ] / k_B T ),

where T is the temperature.

If Planck's constant were zero, this ratio would always be one. It would also be one if there were no change in the vibrational frequencies of the H/D when they move from the solvent to the enzyme. Generally, as the H-bond strengthens [R gets shorter] the frequency change gets larger, and so the difference between H and D gets larger [see this preprint for an extensive discussion], and phi gets smaller. However, for very short bonds the frequencies harden and phi will get larger, i.e., there will be a non-monotonic dependence on R, the distance between the donor and acceptor. This was highlighted in an extensive review which contains the following sketch.

Harris, Zhao, and Mildvan consider a particular parametrisation of the H-bond potential to connect the observed fractionation ratio with bond lengths in a range of proteins. They generally find reasonable agreement with other methods of determining the length [e.g., NMR chemical shift]. In particular the resolution is much better than from X-rays.
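To get a feel for the magnitudes in the fractionation formula above, here is a minimal harmonic-approximation sketch. The stretch frequencies are illustrative guesses, not values from Harris, Zhao, and Mildvan, and a real analysis sums over all H/D-sensitive modes.

```python
# Minimal sketch of the fractionation factor phi in the harmonic approximation.
# Frequencies are illustrative, not taken from the paper.
import numpy as np

h, kB, c = 6.626e-34, 1.381e-23, 2.998e10   # J s, J/K, cm/s
T = 298.0                                    # K

def zpe(nu_cm):
    """Zero-point energy (J) of a harmonic mode with frequency nu in cm^-1."""
    return 0.5 * h * c * nu_cm

# O-H stretch frequencies (cm^-1); nu_D = nu_H/sqrt(2) in the harmonic limit
nu_H_water = 3400.0
nu_H_site = 2500.0     # softened stretch in a short (strong) H-bond (assumed)
dZPE_water = zpe(nu_H_water) - zpe(nu_H_water / np.sqrt(2))
dZPE_site = zpe(nu_H_site) - zpe(nu_H_site / np.sqrt(2))

# phi = exp([ (ZPE_H - ZPE_D)_site - (ZPE_H - ZPE_D)_water ] / kB T);
# D accumulates in the stiffer environment, so a softened site gives phi < 1.
phi = np.exp((dZPE_site - dZPE_water) / (kB * T))
print(f"phi = {phi:.2f}")   # ~0.5 for these numbers
```

As a sanity check, setting the two frequencies equal [or Planck's constant to zero] gives phi = 1, as stated above.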

Tuesday, April 15, 2014

Roaming: a distinctly new dynamic mechanism for chemical reactions

Until recently, it was thought that the dynamics of breaking a chemical bond could occur via one of two mechanisms. The first is simply that one stretches a single bond until the relevant atoms are a long way apart. The second mechanism is via a transition state [a saddle point on a potential energy surface], where the geometry of the molecule is rearranged so that it is "half way" to the products of the chemical reaction. The energy of the transition state relative to the reactants determines the activation energy of the reaction. Transition state theory establishes this connection. Catalysts work by lowering the energy of the transition state. Enzymes work brilliantly because they are particularly good at lowering this energy barrier. An earlier post considered the controversial issue of whether it is necessary to go beyond transition state theory to explain some enzyme dynamics.
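As a concrete aside, the link between barrier height and rate is the standard Eyring equation of transition state theory; the barrier value in this sketch is an arbitrary illustration, not tied to any particular reaction or enzyme.

```python
# Eyring (transition state theory) rate estimate: k = (kB*T/h) * exp(-dG/(R*T)).
# The activation free energy dG is an arbitrary illustrative value.
import numpy as np

kB, h, R = 1.381e-23, 6.626e-34, 8.314   # J/K, J s, J/(mol K)
T = 298.0                                 # K
dG = 80e3                                 # J/mol, illustrative barrier

k = (kB * T / h) * np.exp(-dG / (R * T))
print(f"k = {k:.2e} per second")
# Lowering dG by ~5.7 kJ/mol (= RT*ln10) speeds the rate tenfold at 298 K,
# which is how a catalyst that lowers the barrier accelerates a reaction.
```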

I have been struggling through an interesting Physics Today article Roaming reactions: the third way by Joel Bowman and Arthur Suits.

What is roaming?
It is a large amplitude trajectory on the potential energy surface that begins as a bond stretching.
It is best illustrated by watching videos such as this one, which is an animation of a picture similar to that below.

What are the experimental signatures of roaming?
The key is to be able to resolve the distribution of the energy and angular momentum of the product molecules. It seems that if the reaction proceeds via a transition state, that puts severe constraints on these distributions.

I was very happy that this week the UQ Chemistry seminar was given by Scott Kable who has pioneered recent experimental studies of roaming.
An interesting anecdote is that Scott's 2006 PNAS paper about roaming is actually based on data he took when he was a postdoc in 1990. He never published it because he did not understand it. I wonder how often this happens in science. Years ago, I wrote a short post arguing that experimentalists should not have to be able to explain their data in order to publish it.

Scott's work involves a nice experiment-theory collaboration with Meredith Jordan, who has calculated the relevant potential energy surfaces that are used in the dynamical calculations. Without the calculations it would be hard to definitively establish that roaming was an actual mechanism.

There has been an important un-anticipated consequence of this research. It may have solved a long-standing mystery in atmospheric chemistry: the origin of organic acids in the troposphere. [See this Science paper]. This is a nice example of how basic "pure" research can lead to solutions to "applied" problems.

Roaming has now been clearly established in several gas phase reactions for small molecules.
An outstanding question is whether roaming can play a significant role in condensed phase reactions. It may be that molecular collisions will mess up the roaming trajectory. But, it may be just a matter of relative timescales. I look forward to seeing how all this develops.

Friday, April 11, 2014

How 5 years of blogging has changed me

Last month marked the 5 year anniversary of this blog. My first post was a tribute to Walter Kauzmann. In hindsight, after almost 1500 posts, I think that was a fitting beginning. Kauzmann represented many of the themes of the blog: careful and thorough scholarship, theory closely connected to experiment, simple understanding before computation, hydrogen bonding, fruitful interaction between chemistry and physics, ….

Reflecting on this anniversary I realised that writing the blog has had a significant influence on me. Writing posts forces one to be more reflective. I think I have a greater appreciation of
  • good science: solid and reproducible, influential, ...
  • how important it is to do good science, rather than just publish papers
  • how hard it is to do good science
  • how, today, the practice of science is increasingly broken
  • the bleak long-term job prospects of most young people in science
  • the danger and limitations of metrics for measuring research productivity and impact
  • the importance of simple models and physical pictures
  • diabatic states as a powerful conceptual, model building, and computational tool in chemistry
  • the importance of Dynamical Mean-Field Theory (DMFT)
  • bad metals as a unifying concept for strongly correlated metals
I thank all my readers, and particularly those who write comments.
I greatly value the feedback.
I do want to see more comments and discussion!

Tuesday, April 8, 2014

What role does reasoning by analogy have in science?

Two weeks ago I went to an interesting history seminar by Dalia Nassar that considered a debate between the philosopher Immanuel Kant and his former student Johann Gottfried von Herder.
Kant considered that thinking by analogy had no role in science whereas Herder considered it did. Apparently, for this reason Kant thought that biology [natural history] could never be a real science. Thinking objects were fundamentally different from non-thinking objects.

One of the reasons I like going to these seminars is that they stimulate my thinking in new directions. For example, a seminar last year helped me understand that one of my "problems" is that I view science as a vocation rather than a career, perhaps in the tradition of Robert Boyle and the Christian virtuoso.

After the seminar I had a brief discussion with some of my history colleagues about what scientists today think about analogy. I think it plays a very important role, because it can help us understand new systems and phenomena in terms of things we already understand. But where people sometimes come unstuck is when they start to assume that the analogy is reality or the complete picture. Here are a few important historical examples.

   * Electromagnetic radiation. The analogy of light waves with sound and water waves helped. But it went astray when people thought there must be a medium, i.e., the aether.

  * Quantum mechanics. Particles and waves. Again, the analogy helped in understanding interference and the quantisation of energy levels. But I also think that pushing the partial analogies with classical mechanics and classical waves too hard is the source of some of the confusion about quantum measurement and the quantum-classical crossover.

  * Quantum field theory and many-particle physics. Feynman diagrams, path integrals, renormalisation, symmetry breaking, Higgs boson,…. there is a lot of healthy cross-fertilisation.

 * Imaginary time quantum theory and classical statistical mechanics. Path integral = Partition function.

Coincidentally, yesterday when I was in the library [yes, the real physical library not the virtual one!] trying to track down Wigner's quote I stumbled across a 1993 Physics Today review by Tony Leggett  of Grigory Volovik's book Exotic properties of superfluid 3He. Leggett expresses his reservations about analogies.
As to the correspondences with particle physics, being the kind of philistine who does not feel that, for example, his understanding of the Bloch equations of nmr is particularly improved by being told that they are a consequence of Berry's phase, I have to confess to greeting the news that the "spin-orbit waves" of 3He-A are the analog of the W boson and the "clapping" modes the analog of the graviton with less than overwhelming excitement. These analogies no doubt display a certain virtuosity, but it is not clear that they actually help our concrete understanding of either the condensed matter or the particle-physics problems very much, especially when they have to be qualified as heavily as is done here.
What do you think? Does analogy have an important role to play? When does it cause problems?

Monday, April 7, 2014

Giant polarisability of low-barrier hydrogen bonds

An outstanding puzzle concerning simple acids and bases is their very broad infrared absorption, as highlighted in this earlier post. The first to highlight this problem was Georg Zundel. His solution involved two important new ideas:
  • the stability of H5O2+ in liquid water, [the Zundel cation]
  • that such complexes involving shared protons via hydrogen bonding have a giant electric polarisability, several orders of magnitude larger than typical molecules.
Both ideas remain controversial. A consequence of the second is that the coupling of the complex to electric field fluctuations associated with the solvent will result in a large range of vibrational energies, leading to the continuous absorption.

Later I will discuss the relative merits of Zundel's explanation. Here I just want to focus on understanding the essential physics behind the claimed giant polarisability. The key paper appears to be a 1972 JACS

Extremely high polarizability of hydrogen bonds
R. Janoschek , E. G. Weidemann , H. Pfeiffer , G. Zundel

[The essential physics seems to be in a 1970 paper I don't have electronic access to].
If one considers the one-dimensional potential for proton transfer within a Zundel cation with an O-O distance of 2.5 Angstroms, it looks like the double-well potential below.

The two lowest vibrational eigenstates are separated in energy by a small tunnel splitting omega of the order of 10-100 cm^-1. These two states can be viewed as symmetric and anti-symmetric combinations of oscillator states approximately localised in the two wells. The transition dipole moment p between these two states is approximately the well separation [here roughly 0.5 Angstroms] times the proton charge.

At zero temperature the electric polarisability is approximately p^2/omega. The splitting omega is at least an order of magnitude smaller than a typical bond-stretching frequency, and p^2 can be an order of magnitude larger than for a typical covalent bond.
Hence, the polarisability can be orders of magnitude larger than that of a typical molecule.
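Here is a minimal numerical sketch of this two-level argument: solve a one-dimensional double-well Schrodinger equation by finite differences, and estimate the polarisability from the two lowest states. The well shape and dimensionless units are illustrative choices, not a fit to the Zundel cation.

```python
# Sketch: tunnel splitting, transition dipole, and polarisability of a double well.
# Dimensionless units (hbar = m = charge = 1); the potential is illustrative.
import numpy as np

x = np.linspace(-2.0, 2.0, 1000)
dx = x[1] - x[0]
V = 5.0 * (x**2 - 1.0)**2                  # double well with minima at x = +/-1

# Finite-difference Hamiltonian: -(1/2) d^2/dx^2 + V(x)
off = -0.5 / dx**2 * np.ones(len(x) - 1)
H = np.diag(1.0 / dx**2 + V) + np.diag(off, 1) + np.diag(off, -1)

E, psi = np.linalg.eigh(H)
psi /= np.sqrt(dx)                          # normalise so that sum |psi|^2 dx = 1

omega = E[1] - E[0]                         # tunnel splitting
p = dx * np.sum(psi[:, 0] * x * psi[:, 1])  # transition dipole <0|x|1>
alpha = 2 * p**2 / omega                    # two-level perturbation theory

print(f"splitting = {omega:.4f}, |p| = {abs(p):.3f}, alpha = {alpha:.1f}")
```

The factor of 2 from second-order perturbation theory does not change the order-of-magnitude estimate p^2/omega quoted above: because omega is tiny and p is large, alpha is huge.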

A few consequences of this picture are that the polarisability will vary significantly with
  • isotopic [H/D] substitution
  • temperature (on the scale of the tunnel splitting)
  • the donor-acceptor distance.

Did Wigner actually say this?

It is folklore that Eugene Wigner said
"It is nice to know that the computer understands the problem. But I would like to understand it too."
But did he actually say it? Where and when?

I have been trying to track it down. The earliest reference I can find is in the beginning of Chapter 5 of a 1992 book by Nussenzweig, which just says it is attributed to Wigner.

It is a great quote, so it would be nice to know that Wigner actually said it. I welcome any further information.

Friday, April 4, 2014

The grand challenge of wood stoves and the rural poor

Today I went to a very interesting talk, presented by the UQ Energy Initiative, by Gautam Yadama.
He described the "wicked problem" of the use of wood stoves by the rural poor in the Majority World.

This causes a multitude of problems including deforestation, climate change, household pollution, and disability due to respiratory problems,…. Yet solutions are elusive, particularly because of poverty, cultural obstacles, gender inequality, technical problems, …. In particular, previous "top down" "solutions", such as the wide-scale free distribution of 35 million gas stoves by the Indian government in the 70s and 80s [largely funded by the World Bank], have been complete failures. He described his multi-disciplinary research involving social scientists, engineers, and medical experts. Yadama emphasised the importance of community involvement and of programs that are "evidence based", using randomised trials [similar to those featured in Poor Economics].

Yadama has just published a "coffee table" book Fires, Fuel, and the Fate of 3 Billion that describes the  problem and features striking photos that capture some of the human dimension to the problem.

Thursday, April 3, 2014

A basic but important research skill, 3: talking, asking, and listening

One of the quickest ways to learn about a research field is to talk to others working in the area. Trying to learn the fundamentals [key questions, techniques, background, …] by only reading can be a slow and inefficient process. Furthermore, key pieces of information can be buried or not even there. So reading needs to be complemented by talking to others. They don't have to be the world's leading expert.

Yet this is a very hard process and many students give up too easily. First, there is the problem of finding someone who both knows enough and will take the time to talk to you. Second, you will probably feel dumb. It requires courage and confidence to do this. You may not even know what questions to ask. Much of the jargon/language they use may be unfamiliar or meaningless. Third, it is just plain hard work and requires a sustained effort.
Theorists and experimentalists talking to each other presents a special set of challenges.
So does talking across disciplines (chemists and physicists, biologists and physicists, …)

Here are some basic questions to ask:
What are you working on?
Why is this important?
What are the key papers in the field? Why are they significant?
What is the "holy grail" of the field?
What "results" in the field do you think are dubious?
What is X? I don't understand. Can you explain it to me with a simple physical or chemical picture?
What are the limitations of this technique?
Could you explain that to me again?

Listen carefully.
Listen for concepts, names, pictures, graphs, equations, results, problems, …. that keep coming up.
Complement your discussions with reading. You will begin to know what to look for and certain things will start to stand out.

If I could have my time over I would have done a lot more talking to people.

Wednesday, April 2, 2014

Competing phases are endemic to strongly correlated electron materials

At the Journal Club for Condensed Matter Steve Kivelson has a nice commentary on a recent preprint
Competing states in the t/J model: uniform d-wave state versus stripe state
P. Corboz, T.M. Rice, and Matthias Troyer.

This paper highlights an important property [see, e.g., here and here] of strongly correlated electron systems. A characteristic and challenging feature is subtle competition between distinctly different ground states. 

For example, for the t-J model the authors find a broad range of parameters [t/J and doping x] over which three phases are almost degenerate. The phases are
  • a spatially uniform d-wave superconducting state (USC) [which sometimes also co-exists with antiferromagnetism]
  • a co-existing charge density wave and d-wave superconducting state (CDW+SC)
  • a pair density wave (PDW) that includes superconducting pairing that spatially averages to zero and is closely related to the Larkin-Ovchinnikov-Fulde-Ferrell state.
The authors find that the energy differences between these states can be as small as t/1000.

To me, there are two important (pessimistic) implications of all this.

First, trying to find the true ground state from numerical calculations is going to be very difficult. Different methods will be slightly biased towards one of these states and will struggle to distinguish between them.

Second, experiments on the cuprates may (or may not) be full of red herrings. Slight differences between systems [e.g., BSCCO vs. YBCO] and between samples may lead to new features that are not necessarily of fundamental importance. To what extent are stripes, CDW order, Fermi surface pockets, inhomogeneity, … secondary rather than primary features of the pseudogap phase?

On the positive side, this subtle competition helps explain what is seen in a particular family of organic charge transfer salts, lambda-(BETS)2X. There, by small changes in temperature, magnetic field, pressure, and chemical substitution [with energy scales of the order of a few Kelvin, approx. t/100], one can tune between Mott insulator, superconductor, metal, and possibly charge order.

Tuesday, April 1, 2014

The challenge of colossal thermoelectric power in FeSb2

There is an interesting paper
Highly dispersive electron relaxation and colossal thermoelectricity in the correlated semiconductor FeSb2
Peijie Sun, Wenhu Xu, Jan M. Tomczak, Gabriel Kotliar, Martin Søndergaard, Bo B. Iversen, and Frank Steglich.

The main results, which are a struggle to explain, are in the figure below.
The top panel shows the temperature dependence of the thermopower [Seebeck coefficient] of FeSb2 [red] and the isoelectronic FeAs2.
First, notice the vertical scale is tens of mV/K. In an elemental metal the thermopower is less than a microV/K. In a strongly correlated metal it can be tens of microV/K [see, for example, this earlier post].
Why is it so large? Why is it so much larger for the Sb compound than for the As compound?
In a simple model of a band semiconductor, S ~ (k_B/e) * (gap/k_B T). But here the Sb compound has the smaller gap.
Also, why is there a maximum in the temperature dependence, with S(T) going to zero with decreasing temperature?
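For orientation, here is a back-of-envelope evaluation of the simple band-semiconductor estimate above; the gap value is an assumed illustrative number, not taken from the paper.

```python
# Rough scale of the thermopower of a band semiconductor: S ~ (kB/e)*(gap/kB*T).
# The gap is an illustrative value, not from the paper.
kB_over_e = 86.17e-6               # V/K
gap_K = 0.030 * 11604.5            # a 30 meV gap expressed in Kelvin (assumed)

for T in [10.0, 30.0, 100.0]:      # K
    S = kB_over_e * gap_K / T      # V/K
    print(f"T = {T:5.1f} K: S ~ {S*1e3:.2f} mV/K")
# Already mV/K at low T, i.e., orders of magnitude above the microV/K of a metal,
# though explaining tens of mV/K [and the Sb vs. As comparison] requires more.
```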


In an attempt to elucidate these subtle issues the authors have also measured the Nernst effect and the magnetoresistance. The Nernst signal is also colossal, being of the order of mV/(K T) for FeSb2, which is two orders of magnitude larger than that for FeAs2.

The authors also consider a simple analytical model of a semiconductor with an energy-dependent scattering rate, to see which properties it can explain: some, but not all. A strongly energy-dependent scattering rate is also needed; this can occur in the case of Kondo physics, for example.
They also find some interesting relations between Seebeck, Nernst, Hall mobility, magnetoresistance, and the thermal mobility.

It is helpful to read the paper in conjunction with an experimental review and this earlier theory paper,
Thermopower of correlated semiconductors: Application to FeAs2 and FeSb2
Jan M. Tomczak, K. Haule, T. Miyake, A. Georges, and G. Kotliar

To further complicate all of the above, it seems that the results can be quite sample dependent, varying significantly, even by orders of magnitude, between different groups. One clue is that FeSb2 seems to be very close to a metal-insulator transition, which is seen in some samples but not others…

Much remains to be done...