Thursday, September 3, 2015

A transition in university values: from scholarship to money to status

It is hard to make meaningful or reliable generalisations about social trends in a complex world. But I do want to try. In particular, I would like to suggest that the values that drive university decisions [e.g. about hiring, promotions, and allocation of resources] have shifted in the last twenty years. Here are some potted historical observations, based largely on Australian and US universities.

The scholarship era (roughly before the 1960s).
People were hired and promoted largely based on letters of reference that evaluated the scholarly contributions of the individual. The emphasis was on quality not quantity.
Student tuition was either affordable (in the USA) or non-existent (in Australia).
Most administrators were faculty (many on secondment, i.e. temporary) with distinguished scholarly records. The disparity between faculty and senior administrator salaries was small.
Departments across the university had roughly equal influence and status. In particular, the humanities [history, literature, philosophy....] were respected and valued.
The only people getting grants were those who really needed them and it was easy to get them. Research groups were small.

The money era (roughly the 70s to 90s)
This coincided with the rise of MBAs and neoliberalism.
The number of "research universities" dramatically increased. Australian universities received significant income from international students. Departments fought each other for EFTSUs [Equivalent Full-Time Student Units] because that determined departmental income.
In the USA the total funding income of an individual had a significant effect on hiring, tenure, and promotion decisions. Publication rates and total "outputs" became important.
Administration became a [highly paid] career trajectory. Faculty became a minority among university employees.
The internal influence of the humanities declined because they did not bring much money into the university. Science and engineering had much more clout.

The status era (the 21st century)
This coincided with a rise in metrics, rankings, and luxury journals.
Grants are no longer all equal. Getting a grant is difficult, and so just getting one is important for your "status" and career, even if you don't really need it or the dollar amount is relatively small. Furthermore, some grants have a higher status than others, particularly those with low success rates. In Australia a Future Fellowship helps you get promoted and in the USA an NSF CAREER award helps you get tenure. It is not just the money. It is the status.
The Humanities have regained some status and influence because their faculty can win prizes, publish books with Oxford and Cambridge UP, or win "prestigious" fellowships.
I think basic science has also increased its influence and status.
[Personally, my career struggled in Australia in the 90s and took off after 2000 and I think this is largely due to an environmental transition not my own merits].
"High profile" faculty may not "pay their way" in terms of grant or student income, but they are perceived (arguably wrongly) to help climb the rankings. Faculty who teach large numbers of students [which generates significant income]  or get large $ industrial grants are appreciated less.

I freely acknowledge that scholarship, money, and status are not completely decoupled from one another. But, the question is which is dominant.

What do you think? Are these reasonable historical observations?

Wednesday, September 2, 2015

There is no metal-insulator transition in extremely large magnetoresistance materials

There is currently a lot of interest in layered materials with extremely large magnetoresistance [XMR], partly stimulated by a Nature paper last year.
The figure below shows the data from that paper, which is my main focus in this post.


A recent PRL contains the following paragraph

A striking feature of the XMR in WTe2 is the turn-on temperature behavior: in a fixed magnetic field above a certain critical value Hc, a turn-on temperature T* is observed in the R(T) curve, where it exhibits a minimum at a field-dependent temperature T*. At T < T*, the resistance increases rapidly with decreasing temperature while at T > T*, it decreases with temperature [2]. This turn-on temperature behavior, which is also observed in many other XMR materials such as graphite [19,20], bismuth [20], PtSn4 [21], PdCoO2 [22], NbSb2 [23], and NbP [24], is commonly attributed to a magnetic-field-driven metal-insulator transition and believed to be associated with the origin of the XMR [10,19,20,23,25].

My main point is that this temperature dependence and the "turn-on" has a very simple physical explanation: it is purely a result of the strong temperature dependence of the charge carrier mobility (scattering rate), which is reflected in the temperature dependence of the zero field resistance.
It is completely unnecessary to invoke a metal-insulator transition.
The "turn on" is really a smooth crossover.
I made this exact same point in a post last year about PdCoO2 and in this old paper.

Following the discussion [especially equation (1)] in the Nature paper, consider a semi-metal that has equal density of electrons and holes (n=p). For simplicity, assume they have the same temperature dependent mobility mu(T). Then the total resistivity in a magnetic field B is given by

rho(T,B) = [1 + (mu(T) B)^2] / (2 n e mu(T)).

Differentiating this expression with respect to temperature T, for fixed B, one finds that the resistance has a minimum at a temperature T* given by

mu(T*) B = 1.
Further justification for this point of view should come from a Kohler plot: a plot of rho(T,B)/rho(T,B=0) versus B/rho(T,B=0) should be independent of temperature, i.e. the curves for different temperatures should collapse onto a single curve.
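To make the "smooth crossover" point concrete, here is a minimal numerical sketch. Everything in it is my own illustration, not taken from the Nature paper: the mobility form mu(T) = mu0/(1 + (T/T0)^3) and all parameter values are made up, chosen only to mimic a very clean semimetal with a large, strongly temperature dependent mobility.

```python
# Minimal sketch of the two-band model above:
# rho(T,B) = [1 + (mu(T)*B)^2] / (2*n*e*mu(T)).
# The mobility form and all parameter values are assumed, for illustration.
import numpy as np

e = 1.602e-19           # electron charge (C)
n = 1e25                # carrier density (m^-3), illustrative
mu0, T0 = 10.0, 20.0    # low-T mobility (m^2/Vs) and crossover scale (K)

def mu(T):
    """Assumed strongly temperature-dependent mobility."""
    return mu0 / (1.0 + (T / T0) ** 3)

def rho(T, B):
    """Compensated two-band resistivity: rho0(T) * [1 + (mu(T)*B)^2]."""
    return (1.0 + (mu(T) * B) ** 2) / (2.0 * n * e * mu(T))

T = np.linspace(2.0, 100.0, 2000)
for B in [1.0, 2.0, 5.0, 9.0]:
    Tstar = T[np.argmin(rho(T, B))]   # the "turn-on" temperature T*
    # The minimum is a smooth crossover located where mu(T*) * B = 1:
    print(f"B = {B:3.0f} T:  T* = {Tstar:5.1f} K,  mu(T*)*B = {mu(Tstar) * B:.3f}")

# Kohler's rule: rho(T,B)/rho(T,0) depends on T and B only via B/rho(T,0).
# Check: pick two temperatures and scale B so that B/rho(T,0) matches.
T1, T2, B1 = 10.0, 60.0, 1.0
B2 = B1 * rho(T2, 0.0) / rho(T1, 0.0)
print(rho(T1, B1) / rho(T1, 0.0), rho(T2, B2) / rho(T2, 0.0))  # the two ratios agree
```

Note that there is no phase transition anywhere in this model: the minimum simply marks where the (mu B)^2 magnetoresistance term overtakes the metallic zero-field resistivity.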

In the specific materials there will be further complications associated with spatial anisotropy, unequal and temperature dependent electron and hole densities, tilted Weyl cones, chiral anomalies, .... However, the essential physics should be the same.

XMR is due to simple (boring old) physics: extremely large mobilities at low temperatures are due to very clean samples and in some cases, near perfect compensation of electron and hole densities.

Monday, August 31, 2015

Effective weekly group meetings

I think it is crucial that any research group have a group meeting at a designated time each week. It surprises me that some groups do not do this, and that some people hate their group meetings and think they are a waste of time.
A typical group consists of one to a few faculty members, together with the postdocs, grad students, and possibly some undergrads working in the group.

It is important that these weekly meetings are informal, relaxed and inclusive.
They should encourage learning, interaction and feedback.

What might happen at the meeting?
Here are a few things that we do in the Condensed Matter Theory group at UQ [senior faculty are Ben Powell and I].
These meetings are compulsory and they are in addition to a weekly meeting between each individual in the group and their supervisor.
Each week a group member is assigned to bring a cake or a packet of cookies/biscuits to share with the group. Group members provide their own drinks.
  • A group member gives a talk on the white board about something they are currently working on or a tutorial on a specific subject. Questions, particularly basic ones from students, are encouraged during the talk. Powerpoint is only allowed when essential for graphs of results.
  • A group member gives a practice talk for an upcoming speaking engagement, whether a seminar, a conference talk, or a Ph.D progress report. Detailed feedback is given afterwards, both about scientific content and presentation.
  • Everyone in the group brings a paper they recently read and each has 7 minutes to convince the audience they should also read the paper.
  • Journal club. Everyone reads a pre-assigned paper and it is discussed in detail.
We try not to go over an hour.
Once, for a few months, we ran a weekly competition to see who could ask the most questions during the talk. The winner got a bottle of wine, but had to provide the prize for the following week.

Some large groups do need to have regular meetings to discuss issues such as lab maintenance, software development, .... I think it is important that these mundane but important issues not crowd out science and so they should be held separately or fenced off to a second hour.

We have also had more specialised sub-group meetings that run for a limited time such as the following.
  • Reading groups. Working through a specific book, one chapter per week, e.g. those by Fulde, Phillips, and Hewson.
  • Student only meetings. This gives them a greater freedom to teach each other and to ask basic questions.
  • Joint theory-experiment or chemistry-physics meetings, usually focused on a specific collaboration.
Our meetings are far from perfect. But I think the most important thing is to keep meeting and experiment with different formats.

What is your experience?
What do you think has contributed to the most effective and enjoyable meetings?

Saturday, August 29, 2015

Basic, bold, and boring: a claim about enzyme mechanism

This post concerns a basic but bold claim about the effects of a protein or solvent environment on chemical structure and reactivity. I am not clear on how original, radical, or controversial the claim is. In some sense, I think it is largely consistent with what Arieh Warshel has been saying for a long time. [See here for example].

I would be interested to hear feedback on the claim.

Consider some chemical reaction

A + B → C + D

One can consider the reaction in the gas phase [i.e. without an environment], in a solvent [polar or non-polar], or in a protein environment. The relative stability of the reactants and the products, the rate of the reaction, and the reaction mechanism [i.e. the reaction co-ordinate and transition state geometry] can vary dramatically between these environments.
This is what is amazing about enzymes: they can increase the rate of a reaction by a factor of a billion.

So what is the most basic hypothesis about the effect of the environment? It can do two significant things.

1. The bond lengths of A and/or B and/or their relative geometry are changed by the environment. For example, A and B are forced closer together.

2. A polar environment [e.g. water or a protein] can change the relative energies of the transition state, and/or the reactants and products. This is highly likely because most molecules have non-uniform charge distributions and significant dipole moments.

This claim has a natural understanding in terms of a simple diabatic state picture for the reaction. The environment can change the shapes of the diabatic potential energy surfaces and/or change the strength of the coupling of the two surfaces. [For example, in the figure below replace "incorrect" with "no environment" and "correct" with "environment"].
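To spell this out, here is a minimal two-diabatic-state sketch [in my own notation, not taken from any particular paper]. The reactant and product diabatic states have energies V_1(R) and V_2(R) along the reaction co-ordinate R, coupled by Delta(R); the environment enters only through these three functions.

```latex
% Two-state diabatic Hamiltonian along the reaction co-ordinate R:
H(R) = \begin{pmatrix} V_1(R) & \Delta(R) \\ \Delta(R) & V_2(R) \end{pmatrix}

% Adiabatic surfaces; the reaction proceeds on the lower one, E_-:
E_{\pm}(R) = \frac{V_1(R) + V_2(R)}{2}
  \pm \sqrt{\left(\frac{V_1(R) - V_2(R)}{2}\right)^2 + \Delta(R)^2}

% At the diabatic crossing R_c (where V_1 = V_2) the barrier on E_- is
% lowered by \Delta(R_c). Claim 1 changes the R-dependence of V_1 and
% V_2 (geometry); claim 2 shifts the relative energies of V_1 and V_2
% (electrostatics). Nothing "exotic" is needed.
```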


Why is this claim boring?
Well, it means that there is nothing particularly "special" or unique about proteins. There is no new physics or chemistry.
It rules out exotic mechanisms such as dynamic effects and particularly collective quantum effects.

Finding out how an environment does change specific parameters is highly non-trivial.

Furthermore, outstanding, fascinating, and difficult problems remain in understanding and describing:

  • how the protein "engineers" the changes in the reaction potential energy surface,
  • how mutations distant from the "reaction centre" can sometimes have such a significant effect,  
  • the role of the hundreds of amino acids not close to the "reaction centre", i.e. why do proteins need to be so big? Is there a lot of redundancy? Or does one really need all those amino acids to produce a highly "tuned" and exquisite tertiary structure?
So is this claim "controversial" or "dogma" or "obvious"? 

Thursday, August 27, 2015

Conical intersections vs. Dirac cones, Chemistry vs. Physics: Similarities and differences

Conical intersections between potential energy surfaces get a lot of attention in the theoretical chemistry of electronic excited states (photochemistry) of molecules, particularly as a mechanism for ultrafast (i.e. sub-picosecond) non-radiative decay. The surfaces are functions of the spatial coordinates R=(x1, x2, ...) of the nuclei in the molecule.
In the past decade the (hard) condensed matter physics community has become obsessed(?) with Dirac cones [graphene, topological insulators, Weyl semimetals, ...]. They occur in the electronic band structure [one-electron spectrum] when two energy bands cross. Here the system has spatial periodicity and the k's are Bloch quantum numbers.
I want to highlight some similarities between conical intersections (CIs) and Dirac cones (DCs) but also highlight some important differences.

First the similarities.

A. Both CIs and DCs give rise to rich (topological) quantum physics associated with a geometric phase and the associated gauge field (a fictitious magnetic field), the Berry curvature, monopoles, ....
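The common origin of this physics is that near the degeneracy both problems reduce to the same effective two-level Hamiltonian. A textbook sketch [my notation]: here q is the nuclear displacement measured from the CI for a molecule, or the Bloch wave vector measured from the band touching for a crystal, and a, b are constants.

```latex
% Effective Hamiltonian near the degeneracy at q = 0:
H(\mathbf{q}) = a\, q_x \sigma_x + b\, q_y \sigma_y

% The eigenvalues trace out the double cone:
E_{\pm}(\mathbf{q}) = \pm \sqrt{a^2 q_x^2 + b^2 q_y^2}

% Transporting an eigenstate around a closed loop enclosing q = 0
% gives a Berry phase of \pi -- the source of the topological
% physics in both cases.
```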

B. There are definitive experimental signatures associated with this Berry phase. However, actual experimental evidence is hard to obtain. For example, this paper discusses the problem of extracting the Berry phase from quantum oscillations in a topological insulator. This post discusses the elusive experimental evidence for CIs.

C. A history of under-appreciation. Both these concepts could have been elucidated in the 1930s, but were either ignored, or thought to be pathological or highly unlikely. CIs occur in the Jahn-Teller effect (1937) in systems with enough symmetry (e.g. C_3v) to produce degenerate electronic states.
However, people then made the mistake of assuming that symmetry was a necessary, rather than merely a sufficient, condition for a CI. Given that most molecules, particularly large ones, have little or no symmetry, it was assumed that CIs were unlikely. It was not until the 1980s, with the rise of high-level computational quantum chemistry and femtosecond laser spectroscopy, that people discovered not only that symmetry is unnecessary, but that CIs are quite ubiquitous in large molecules. This is facilitated by the large number of nuclear co-ordinates.
DCs have only become all the rage over the past decade because of new materials: graphene, topological insulators, ...

In spite of the similarities above it is important to appreciate some significant differences in the physics associated with these entities.

1. The role of symmetry. As a minimum, DCs require translational symmetry and an infinite system to ensure the existence of a Bloch wave vector. Most require further symmetries, e.g. the sublattice symmetry in graphene, or time-reversal symmetry in a topological insulator. As mentioned above, CIs don't involve any translational symmetry. One does not even need the local symmetry (e.g. C_3) that is observed in some common structural motifs for CIs.

2. Good quantum numbers and quantum evolution. For DCs the Bloch wave vector k is a good quantum number. In the absence of scattering an electron in state k will stay there forever. For CIs R is a classical nuclear co-ordinate. If one starts on a particular surface one will "slide down" the surface and pass through the CI.

3. The role of correlations. DCs are generally associated with a band structure, an essentially one-electron picture. [Strictly, one could look at poles in spectral functions in a many-body system but that is not what one generally does here]. In contrast, CIs are associated with quantum many-body states not single electron states. In particular, although one can in principle have CIs associated with molecular orbital energies and find them with Hartree-Fock methods, in general one usually finds them with multi-reference [i.e. multiple Slater determinant] methods. For a nice clear discussion see this classic paper which explains everything in terms of what physicists would call a two-site extended Hubbard model.

4. Occupation of quantum states. For DCs one is generally dealing with a metal where all the k states below the Fermi energy are occupied. For CIs only one of the R's is "occupied".

The post was stimulated by Ben Levine and Peter Armitage.

Tuesday, August 25, 2015

Advice to boy and girl wonders

Every institution occasionally sees an absolutely brilliant young student come on the scene. A number of colleagues have pointed out to me that it seems that often these students are in a "race". They [or their parents] have some goal like:

Get an undergraduate degree before most people graduate from high school.

Be the youngest person ever to get a Ph.D from university X.

Become a faculty member before they are 23.

Be the youngest person to ever to get tenure at university Y.

Sheldon Cooper from The Big Bang Theory embodies this. He endlessly reminds his friends that he graduated from college when he was 14 and was the youngest person at the time to receive the Stevenson award.

This big rush is a mistake on several grounds. Overall it reduces enjoyment, deep learning, and substantial achievement. It also increases stress. Furthermore, even promising students tend to slow down as they go up the academic ladder. In most countries, the clock really does start ticking at the Ph.D award date, i.e. eligibility for certain awards is determined by years since the Ph.D. Don't rush to get there.

This cautionary view is supported by a great child prodigy, the mathematician Terry Tao. In a recent profile piece in the New York Times Magazine, it says
Tao now believes that his younger self, the prodigy who wowed the math world, wasn’t truly doing math at all. ‘‘It’s as if your only experience with music were practicing scales or learning music theory,’’ .....  ‘‘I didn’t learn the deeper meaning of the subject until much later.’’
Aside: this is a very nice article that is worth reading. It is really encouraging to see a substantial article about a scientist or academic in the popular press that really engages with the content of their work and does not sensationalise the person.

Monday, August 24, 2015

Seeking definitive experimental signatures of a Weyl semimetal

Weyl and Dirac semimetals are getting quite a bit of attention. Part of this interest stems from the possible solid state realisation of the chiral anomaly from quantum field theory. One proposed signature is negative longitudinal magnetoresistance. A different, arguably more definitive, signature is in the following paper.

Quantum oscillations from surface Fermi arcs in Weyl and Dirac semimetals 
Andrew C. Potter, Itamar Kimchi, and Ashvin Vishwanath

In a thin slab of material there are "Fermi arc" states on the top and bottom surfaces. When a magnetic field is applied perpendicular to the slab, there are unusual closed orbits (shown below) where an electron can move around the arc on the top surface, tunnel via a bulk chiral state to the bottom surface, move around the arc on the bottom surface, and then tunnel back to the top surface.

The resulting Shubnikov de Haas oscillations have some unique signatures such as the periodicity and the dependence of the phase of the oscillations on the thickness of the sample.
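To sketch where these signatures come from [my paraphrase of the semiclassical argument, so the details should be checked against the paper]: per cycle the electron slides along each arc of length k_0 under the Lorentz force and traverses the bulk of thickness L twice.

```latex
% Time per arc: \hbar\, dk/dt = e v B gives t_{arc} = \hbar k_0/(e v B);
% each bulk traversal takes L/v. Bohr-Sommerfeld quantization of the
% full cycle then gives energy levels (\gamma is a constant phase offset)
E_n \simeq \frac{\pi \hbar v\, (n + \gamma)}{L + \hbar k_0/(e B)}

% The oscillations are thus periodic in 1/B with a frequency set by the
% arc length k_0, while their phase depends on the sample thickness L.
```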

There is a very nice set of experiments to test these ideas.

Chirality transfer dynamics in quantum orbits in the Dirac semi-metal Cd3As2 
Philip J.W. Moll, Nityan L. Nair, Tony Helm, Andrew C. Potter, Itamar Kimchi, Ashvin Vishwanath, James G. Analytis
The main finding of this study is directly evident in the raw data: while parallel [magnetic] fields lead to a single SdH frequency, an additional higher frequency component associated with the surface Fermi arc oscillations appears for fields perpendicular to the surface. This high frequency is clearly distinguishable from higher harmonics of the low frequency Fermi surface.
Focused Ion Beams were used to prepare samples with different geometries, rectangular and triangular, shown below. In the latter the oscillations associated with chirality are washed out by destructive interference due to the dependence of phase on the slab thickness.
Aside: In Figure 3 they show experimental signatures of hydrodynamic flow.

I thank James Analytis for helpful discussions about this work.

There is also a very nice theory paper.
Axial anomaly and longitudinal magnetoresistance of a generic three dimensional metal 
 Pallab Goswami, J. H. Pixley, S. Das Sarma

It is quite pedagogical, comprehensive in scope, and contains some important new insights. One particularly significant insight is that one can get negative longitudinal magnetoresistance without a Weyl or Dirac metal. Furthermore, in a system with a cylindrical Fermi surface, near the Yamaji angles [normally associated with semi-classical Angle-Dependent Magnetoresistance Oscillations (AMRO)] one can have only one partially full Landau level, leading to negative longitudinal magnetoresistance.
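For reference, the Yamaji angles are a standard semi-classical AMRO result [not specific to this paper]: for a weakly warped cylindrical Fermi surface with interlayer spacing c and Fermi wave vector k_F, the interlayer velocity effectively averages to zero when

```latex
J_0(c\, k_F \tan\theta_n) = 0
\quad\Longrightarrow\quad
c\, k_F \tan\theta_n \simeq \pi\!\left(n - \tfrac{1}{4}\right),
```

where J_0 is a Bessel function.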

Hopefully, once I have digested this paper more, I will write something. I am particularly curious as to whether this theory can explain the unusual angle-dependent interlayer magnetoresistance seen in a diverse set of strongly correlated electron metals.