Monday, February 25, 2019

Management lessons not learned from the discovery of graphene

Don't follow the pack!

I just read Random Walk to Graphene, by Andre Geim. It is the lecture he gave when receiving the 2010 Nobel Prize in Physics. I should have read it long ago, but was motivated to read it now because the following sentence features in Joseph Martin's "purloined letter" argument about why condensed matter physics lacks status.
Graphene has literally been before our eyes and under our noses for many centuries but was never recognized for what it really is.
I learned some nice science from the lecture. Foremost, it is a great story of scientific creativity, perseverance, and serendipity. However, I want to mention a few things that highlight how strongly the story conflicts with most current views about how science should be "managed" and how scientists should operate.

Geim starts by recounting his Ph.D. and early postdoc years. His Ph.D. papers were cited twice, by co-authors.
The subject was dead a decade before I even started my Ph.D. However, every cloud has its silver lining and what I uniquely learned from that experience was that I should never torture research students by offering them “zombie” projects.
Several years later he worked on a new topic as a staff scientist in Russia.
This experience taught me an important lesson that introducing a new experimental system is generally more rewarding than trying to find new phenomena within crowded areas.
He notes that, after a six-month visiting postdoc in Nottingham, he entered the Western postdoc market with an h-index of 1!

When he was in the Netherlands as a young faculty member in a high magnetic field lab he began to experiment in creative directions leading to investigations of "magnetic water" and the iconic experiment of the levitating frog for which he received an Ig Nobel Prize.
we saw balls of levitating water (Fig. 1). This was awesome. It took little time to realize that the physics behind this phenomenon was good old diamagnetism. It took much longer to adjust my intuition to the fact that the feeble magnetic response of water (~10^-5), that is billions of times weaker than that of iron, was sufficient to compensate the Earth's gravity. Many colleagues, including those who worked with high magnetic fields all their lives, were flabbergasted, and some of them even argued that this was a hoax....

The levitation experience was both interesting and addictive. It taught me the important lesson that poking in directions far away from my immediate area of expertise could lead to interesting results, even if the initial ideas were extremely basic. This in turn influenced my research style, as I started making similar exploratory detours that somehow acquired the name “Friday night experiments.” The term is of course inaccurate. No serious work can be accomplished in just one night. It usually requires many months of lateral thinking and digging through irrelevant literature without any clear idea in sight. 
The story of the discovery of graphene using Sellotape [Scotch tape, sticky tape] was more complicated, circuitous, and involved a lot more hard work than I realised.
There were two dozen or so [Friday night] experiments over a period of approximately 15 years and, as expected, most of them failed miserably. But there were three hits, the levitation, gecko tape, and graphene.
The story of the first publication is interesting. It took nine months to get the paper into Science.
First, we submitted the manuscript to Nature. It was rejected and, when further information requested by referees was added, rejected again. According to one referee, our report did “not constitute a sufficient scientific advance.” Science referees were more generous (or more knowledgeable?), and the presentation was better polished by that time. In hindsight, I should have saved the time and nerves by submitting to a second-tier journal, even though we all felt that the results were groundbreaking.
This is consistent with my belief that there is not a lot of correlation between great discoveries and publication in luxury journals.

So what should we learn from this story?
First, we should all be a little more adventurous, take some risks, and explore new areas. Previously, I have argued that successful researchers should move on to new hard problems.
A lot of this relates to diminishing returns and opportunity costs.
Yet, unfortunately, there are now significant institutional and cultural pressures against this. However, I think senior faculty have a responsibility to buck these trends.

Second, funding agencies and university management really need to learn from this story of graphene. It really goes against metrics, KPIs, short term goals, making people "accountable" for extremely well-defined timetables and research outcomes, and forcing/hiring people to work on the latest hot topic.

Graphene is cool! And I am sure that there is a lot that remains to be discovered about graphene. However, I find it disturbing that so many people have flocked to the field. A few years ago I met a faculty member from Manchester who said they were out of favour because they were not working on graphene, and that there was a lot of pressure on people to work on it.

There is another side to the story that I am not sure what to make of, which has an Australian connection. When Alan Gilbert was vice-chancellor at the University of Melbourne he tried to build a parallel private for-profit institution, Melbourne University Private. This turned out to be a massive failure, wasting hundreds of millions of dollars. In 2004 Gilbert moved to Manchester as vice-chancellor. Of course, his main goal was to lift Manchester in the global rankings.
The Wikipedia page about Gilbert states,
According to the university's strategic plan[8] (largely a copy of his [Gilbert's] earlier and now abandoned Melbourne Agenda (2002)[9]) the university aims to have five Nobel Laureates on its staff by 2015, at least two of whom will have full-time appointments, and three of which it is intended to secure by 2007. During Gilbert's tenure as vice chancellor, a Nobel Prize winner in economics, Joseph Stiglitz, was appointed the head of the Brooks World Poverty Institute at Manchester, and Sir John Sulston was appointed to a chair in the Faculty of Life Sciences. After Gilbert's death Andre Geim and Konstantin Novoselov, both of whom were appointed before Gilbert moved to Manchester, were awarded the Nobel Prize for Physics in 2010.
From the little I know about Gilbert it is very hard for me to see how he would have supported Geim's approach to doing science, particularly given that there were not well-defined immediate benefits to the corporate sector.

Tuesday, February 19, 2019

Superconducting order in organic charge transfer salts

A long-standing question for superconductivity in organic charge transfer salts concerns the symmetry of the superconducting order parameter. Is it unconventional (i.e. not s-wave) and if so are there nodes in the energy gap? Over the years there have been a wide range of claims, both theoretical and experimental.

Most recently, a combined theory-STM study claimed that the symmetry is d + s and that there are 8 nodes on the Fermi surface.

Two of my UQ colleagues recently posted a nice preprint that comes to a different conclusion.
Microwave Conductivity Distinguishes Between Different d-wave States: Umklapp Scattering in Unconventional Superconductors 
D. C. Cavanagh, B. J. Powell

Microwave conductivity experiments can directly measure the quasiparticle scattering rate in the superconducting state. We show that this, combined with knowledge of the Fermi surface geometry, allows one to distinguish between closely related superconducting order parameters, e.g., d_{x^2-y^2} and d_{xy} superconductivity. We benchmark this method on YBa2Cu3O7-δ and, unsurprisingly, confirm that this is a d_{x^2-y^2} superconductor. We then apply our method to κ-(BEDT-TTF)2Cu[N(CN)2]Br, which we discover is a d_{xy} superconductor.
In 2005 Ben Powell (and others) showed that the simplest RVB theory gives such an order parameter, with nodes required by symmetry.
[Aside: in our paper, this is denoted d_{x^2-y^2}, but that is because of how the x-y axes are defined.]

Thursday, February 14, 2019

Does a temperature dependent Hamiltonian make sense?

At the fundamental level, we think of a Hamiltonian as independent of temperature. It describes the energy of all possible states of the system in the absence of any environment.

However, when one does mean-field theory (e.g. for an Ising model or BCS theory), the Hamiltonian involves temperature-dependent parameters that are determined self-consistently.

I have been thinking about this because one of the proposed effective minimal Hamiltonians for spin crossover compounds is an Ising model with a temperature dependent field.
My immediate reaction was that this must be some sort of mean-field theory.
However, I now realise that is not the case.

Effective Hamiltonians can be temperature dependent without invoking any approximations. Temperature-dependent interactions can arise when one integrates out some degrees of freedom.

One can see this by simply considering the case of a system with two degrees of freedom, x and q. The partition function can be written as a path integral with an action that is the integral of the Lagrangian in imaginary time from 0 to 1/T, where T is the temperature (with k_B = 1).
Integrating out x one obtains an effective action for q that will depend on temperature.
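Schematically (in my own notation):

```latex
Z = \int \mathcal{D}x \, \mathcal{D}q \; e^{-S[x,q]},
\qquad
S[x,q] = \int_0^{1/T} \! d\tau \, L(x,\dot{x},q,\dot{q}),
\qquad
e^{-S_{\mathrm{eff}}[q]} \equiv \int \mathcal{D}x \; e^{-S[x,q]}.
```

Because the imaginary-time integral runs up to 1/T, the couplings appearing in S_eff[q] generically depend on temperature, even though the original Hamiltonian for (x, q) did not.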



Here are three cases where this can be done explicitly.

1. The spin-boson model. One integrates out the harmonic oscillators, leading to a "Feynman-Vernon influence functional" that is temperature dependent.

2. A two-state system in which each state has a series of sub-states (e.g. spin states or vibrational states). Consider the simple Hamiltonian.


This corresponds to the case of spin-crossover systems, and one sees how one can end up with an Ising-type model with a "field" that is related to the free energy difference between the two spin states.
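To see where the temperature-dependent "field" comes from, here is a minimal version of the bookkeeping (a sketch in my own notation, not taken from a specific paper). Let each molecule be a pseudo-spin σ = ±1 (HS/LS), with the HS state at energy ΔE above the LS state, and with g_HS and g_LS sub-states respectively. Summing over the sub-states of a single molecule gives

```latex
Z_1 = \sum_{\sigma = \pm 1} g_\sigma \, e^{-E_\sigma / T}
    = \sum_{\sigma = \pm 1} e^{-\left( E_\sigma - T \ln g_\sigma \right)/T}
    \propto \sum_{\sigma = \pm 1} e^{-\sigma \Delta(T)/T},
\qquad
\Delta(T) = \tfrac{1}{2}\left[ \Delta E - T \ln\!\left( g_{\mathrm{HS}}/g_{\mathrm{LS}} \right) \right].
```

This is just a free pseudo-spin in a field h(T) = -Δ(T). The field changes sign at T = ΔE / ln(g_HS/g_LS), which is the crossover temperature ΔH/ΔS of the usual thermodynamic argument.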

3. A one-dimensional chain of spin-crossover molecules which have an elastic interaction that depends on the spin state. This is treated in
Elastic interaction among transition metals in one-dimensional spin-crossover solids 
K. Boukheddaden, S. Miyashita, and M. Nishino

The classical phonons are integrated out and one is left with an Ising chain of pseudo-spins in an external "field", where the "exchange" interaction and the field depend on temperature.
[See equation (13) in the paper].
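The flavour of such a model can be illustrated with a short sketch. This is not the Boukheddaden-Miyashita-Nishino calculation itself: the parameter values are invented for illustration, and for simplicity only the field, not the exchange, is taken to be temperature dependent, with h(T) = -(ΔE - T ln g)/2, where g is the ratio of HS to LS degeneracies.

```python
import numpy as np

def hs_fraction(T, dE=2000.0, g=50.0, J=0.0):
    """High-spin fraction of a 1D Ising chain of pseudo-spins
    sigma = +1 (HS), -1 (LS) in the temperature-dependent field
    h(T) = -(dE - T*ln(g))/2, computed with the standard 2x2
    transfer matrix. Energies are in kelvin (k_B = 1); the
    parameter values are invented for illustration."""
    h = -(dE - T * np.log(g)) / 2.0
    b = 1.0 / T
    # Symmetric transfer matrix for H = -J sum_i s_i s_{i+1} - h sum_i s_i
    Tm = np.array([[np.exp(b * (J + h)), np.exp(-b * J)],
                   [np.exp(-b * J),      np.exp(b * (J - h))]])
    w, v = np.linalg.eigh(Tm)
    u = v[:, np.argmax(w)]       # dominant (normalised) eigenvector
    m = u[0]**2 - u[1]**2        # <sigma> in the thermodynamic limit
    return (1.0 + m) / 2.0
```

With these numbers the chain converts smoothly from low spin at low temperature to mostly high spin at high temperature, with hs_fraction = 1/2 at T = dE/ln(g), exactly as in case 2 above.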

Tuesday, February 12, 2019

Public perceptions of condensed matter physics

Why are string theorists celebrities who write best-selling books and popular documentaries?
Why are cosmology and particle physics seen as "fundamental", answering profound questions about "why we are here" as they push back the frontiers of knowledge with their great intellects and imagination? In contrast, condensed matter physics gets little public attention and is not seen as exciting, "fundamental", or intellectually challenging.

There is a helpful and stimulating paper
Prestige Asymmetry in American Physics: Aspirations, Applications, and the Purloined Letter Effect
Joseph D. Martin
Why do similar scientific enterprises garner unequal public approbation? High energy physics attracted considerable attention in the late-twentieth-century United States, whereas condensed matter physics – which occupied the greater proportion of US physicists – remained little known to the public, despite its relevance to ubiquitous consumer technologies.... popular emphasis on the mundane technological offshoots of condensed matter physics and its focus on human-scale phenomena have rendered it more recondite than its better-known sibling field. News reports about high energy physics emphasize intellectual achievement; reporting on condensed matter physics focuses on technology. And whereas frontier-oriented rhetoric of high energy physics communicates ideals of human potential, discoveries that smack of the mundane highlight human limitations and fail to resonate with the widespread aspirational vision of science – a consequence I call “the purloined letter effect.”
What is this "purloined letter"??
Understanding prestige asymmetry requires discerning how the values communicated in the discourse of scientific discovery relate to the values and expectations of the surrounding society. Many in the United States see science as a source of faith in both individual potential and collective possibility, and look to it as a way to overcome human limitations. John H. Evans has documented “faith in science producing meaning” .... Science functions for many as “a source of societal hope – a way to save our society from its troubles, in the same way that societies have looked to other saviors, like religion”... Some rhetoric of scientific discovery, however, undercuts the narrative of science as a testament to human potential. When discoveries are presented as evidence that we have missed something obvious, it highlights our failings and limitations alongside our accomplishments. We can only recognize such achievements by also acknowledging our collective failure to discover earlier what was in front of our eyes all along. In these instances, scientific discoveries fail to promote the values that evidence suggests best resonate with consumers of scientific media. I call this the purloined letter effect, after Edgar Allan Poe’s 1844 short story in which a stolen letter hidden in plain sight is uncovered in a way that exposes the police, who had failed to find it, as mulish and unimaginative.
This narrative of "science as salvation", particularly in popular books, has also been discussed by Mary Midgley and by Gregory Schrempp. More recently Ian Hesketh has argued that Big History is in this genre.

Martin illustrates his argument by considering press reports about different Nobel Prizes.
Steven Weinberg, Sheldon Glashow, and Abdus Salam's prize for electroweak unification .... Both the LA Times and the Tribune ... gave prominent billing to Weinberg's and Glashow's statements about the fundamental importance of their work for understanding the way the universe works - and its manifest absence of practical applications. The ... [New York Times] toasted "a theory so profound as to affect man's perception of existence".
The 1970s condensed matter prizes all recognized fundamental contributions, in particular theoretical developments in magnetism and work on the quantum properties of solids. US papers nevertheless routinely described these contributions as undergirding technological developments, with efforts to explain the content of the research either perfunctory or absent. 
The 1977 prize recognized theorists Philip Anderson, John Van Vleck, and Nevill Mott. The Nobel committee cited them “for their fundamental theoretical investigations of the electronic structure of magnetic and disordered systems.” The NY Times reported that the winners “were cited for work underlying the development of computer memories, office copying machines and many other devices of modern electronics,” and made little effort to clarify the theoretical work behind the prize. The AP report pointed to lasers, better glass, and copper IUDs ... Reuters tied the laureates’ “‘solid state’ physics theories” to “computer memories, pocket calculators, modern radios, office copiers, and solar energy converters” The emphasis was not only squarely on technology, but disproportionately on the work-a-day technologies that were becoming part of the furniture of Cold War America. High energy physics changed our perceptions of our very existence; condensed matter was the physics of photocopiers.
I thank Andrew Zangwill for bringing the paper to my attention.

I think that condensed matter physics is intellectually challenging and exciting. Furthermore, as it is all about emergence and complexity it addresses fundamental questions and produces concepts and methodologies that are not just relevant to making widgets but addressing important issues in a wide range of intellectual endeavors from biology to sociology.

Thursday, February 7, 2019

A critique of DFT calculations for spin crossover materials

A basic question concerning spin crossover compounds is: what are the energy difference and entropy difference between the low spin (LS) and high spin (HS) states?


The relative magnitude of these two quantities determines the crossover temperature from the LS to HS state.
From experiment, typical values of the energy difference ΔH are of the order of 1-5 kcal/mol (4-20 kJ/mol). Entropy differences ΔS are typically about 30-60 J/mol/K. (See Table 1 in the Kepp paper below.)
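The crossover temperature follows from requiring that the Gibbs free energy difference ΔG = ΔH - TΔS vanishes, giving T_1/2 = ΔH/ΔS. A quick sanity check with representative mid-range values (illustrative only, not for a specific compound):

```python
# Crossover temperature from Delta G = Delta H - T * Delta S = 0
dH = 10.0e3   # J/mol    (10 kJ/mol, an illustrative mid-range value)
dS = 50.0     # J/(mol K) (an illustrative mid-range value)
T_half = dH / dS
print(T_half)  # 200.0, i.e. a crossover around 200 K
```

Crossover temperatures of order 100-300 K are indeed what is observed, which is a useful consistency check on the quoted energy and entropy scales.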
This relatively small energy difference presents a challenge for computational quantum chemistry, including calculations based on density functional theory, because of the strong electron correlations associated with the transition metal ions.

Over the past few years some authors have done nice systematic studies of a wide range of compounds with a wide range of DFT exchange-correlation (XC) functionals. Here I will focus on two papers.

Benchmarking Density Functional Methods for Calculation of State Energies of First Row Spin-Crossover Molecules 
Jordi Cirera, Mireia Via-Nadal, and Eliseo Ruiz

Theoretical Study of Spin Crossover in 30 Iron Complexes 
Kasper P. Kepp

First, these studies are refreshing and important. Too many computational chemistry papers are dubious because they do not report systematic studies.
Here I will just discuss the first paper.

Cirera et al. use 8 different XC functionals to study 20 different compounds. They find that only one (!) functional (TPSSh) correctly gives a low spin ground state for all the compounds, i.e. ΔH is positive.

The figure below nicely summarises the results.

Before one gets too excited that one has now found the "right" functional, one should note that when one uses TPSSh to calculate the crossover temperature there is little correlation with the experimental values.

To put all this in a broader context, consider the hierarchical figure below, which is in the spirit of the metaphor of Jacob's ladder proposed by John Perdew. [The figure is from here.] However, I do not think Jacob's ladder is the best Biblical metaphor.


This highlights the ad hoc nature of DFT based calculations and that one is a long way from anything that should seriously be considered to be a true ab initio calculation.

It should also be noted that all these calculations are for a single molecule in vacuum. However, the experiments are in the solid state (or solution) and so the energetics can be shifted by electrostatic screening and/or solvation. The crossover temperature (which can become a first-order phase transition) may also be shifted by intermolecular elastic interactions.

Wednesday, February 6, 2019

Ideas worth throwing out?

Unfortunately, like many universities, UQ has become a construction site in the rush to build shiny new buildings, particularly to accommodate the ever-increasing expansion of senior management and nice facilities to "enhance the student experience."
An extra floor was added to the physics building for the Office of the Executive Dean of Science.
Faculty and grad student offices are being shuffled around campus to accommodate this construction. I am now making my third move in less than eighteen months. I took this opportunity to downsize and toss a lot of old files. While filling a dumpster I saw something I thought was pretty ironic and funny.



Tuesday, February 5, 2019

What is condensed matter physics?

What do condensed matter physicists study?

High school students are often taught there are three states of matter: solids, liquids, and gases. However, this is misleading as there are many more states of matter. Liquid crystals, superconductors, and ferromagnets are distinct states of matter that do not fit in the high school classification. Condensed matter physics (CMP) is concerned with practically any material system that involves a large number (say at least a million) of interacting atoms or molecules. We can consider this to be a complex system because there are many different ways of arranging the constituents (atoms or molecules) of the system.

What approaches and techniques do condensed matter physicists use to study and understand these systems?

CMP provides a coherent intellectual framework for a multi-faceted approach to investigate and understand complex material systems.
First, one can look at the material at many different scales ranging from the microscopic level (scale of individual atoms and molecules) to the mesoscopic (roughly thousands of atoms or molecules, micrometer scale) to the macroscopic (what can be seen with the naked eye). The different scales can be different system sizes, length scales, energy scales, and time scales.
At every scale one can use different tools and approaches, which fall into three broad categories: experimental, theoretical, and computational. All three are intellectually and technically challenging. All are important.

Experiment
There are several distinct parts to this.
Synthesis and fabrication: one has to make a sample of the material. This involves chemistry. Making large clean samples is an art in itself.
Characterisation: this concerns testing that one actually has a sample of the desired chemical composition and purity.
Property measurement: this concerns determining what the physical properties (for example, crystal structure or electrical resistance) of the sample are. Often one varies external conditions such as temperature, magnetic field, and pressure, and determines how the properties of interest vary with these parameters. Some of the most interesting condensed matter physics happens under extreme conditions: low temperatures, high magnetic fields, or high pressures.

Theory and model building
The fundamental question that one is trying to answer is: How do the material properties emerge from the chemical composition and atomic structure of the material? In particular, what are the physical mechanisms responsible for the different states of matter found in the material? In CMP, these questions are best addressed by deciding on the essential system components and the physical interactions between them that occur at different length and energy scales. Constructing (or dreaming up!) the simplest possible model for these interactions is a real art.

Computation
This has several aspects often requiring the use of state-of-the-art supercomputers and algorithms. One is broadly known as quantum chemistry and concerns starting with a knowledge of the basic chemical composition and calculating from quantum theory the properties of the system. In spite of massive advances in computational power and algorithms over the past 60 years, one is still confined to relatively small numbers of atoms and/or unreliable approximation schemes.
The second computational side is calculating properties of the theoretical models that can be compared to experiment. Even for "simple" models, this usually requires either massive computational power on small systems or unreliable approximation schemes.
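The combinatorial explosion behind this can be seen in a toy example: even evaluating the partition function of a short open Ising chain by brute-force enumeration costs 2^N terms (a pedagogical sketch, not a method anyone would use in practice; real calculations rely on transfer matrices, Monte Carlo, or other specialised algorithms).

```python
import numpy as np
from itertools import product

def ising_partition_function(N, J=1.0, T=1.0):
    """Z for an open chain of N Ising spins, H = -J sum_i s_i s_{i+1},
    by explicit enumeration of all 2**N configurations (k_B = 1)."""
    Z = 0.0
    for spins in product([-1, 1], repeat=N):
        E = -J * sum(spins[i] * spins[i + 1] for i in range(N - 1))
        Z += np.exp(-E / T)
    return Z

# The open chain has the exact answer Z = 2**N * cosh(J/T)**(N-1),
# which the enumeration reproduces, but only for very small N:
# doubling N squares the number of terms.
```

For a mere 50 spins the sum already has about 10^15 terms, which is why exact enumeration is hopeless for anything approaching the thermodynamic limit.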

Finally, an important challenge is that of intellectual synthesis and critical evaluation. Here, one tries to bring together the results of all these complementary investigations to gain a coherent picture of the material and its properties. Inevitably, there are inconsistencies, sometimes minor and sometimes major. Investigators then have to decide in which element the problem lies.

I think CMP is more complex, challenging, and full of surprises than other areas of physics, such as atomic physics, elementary particle physics, fluid mechanics, and optics. There is a lot more that is unknown in CMP and a lot more that can go wrong.

Friday, February 1, 2019

My biggest questions about spin crossover compounds

Most of the questions are inter-related. Most have been discussed in earlier posts.

How do we tune physical properties (e.g. hysteresis width) by varying chemical composition?

How do we understand two-step transitions? Are they associated with spatially inhomogeneous arrangements of the spin states?

Are spin ice phases possible?

What is the physical origin of the intermolecular interactions that lead to a first-order transition?
Is it electronic (magnetic) and/or elastic?
Are there long-range interactions? Are they crucial?

Is there a simple way to understand the change in vibrational spectra (and thus entropy) associated with the transition?

What is the role of spatial anisotropy?

What is the simplest possible effective model Hamiltonian that captures the physical properties above?
Can the elastic degrees of freedom be "integrated out" to give a "simple" Ising model?
How do the model parameters depend on structural and chemical composition?