Monday, August 31, 2015

Effective weekly group meetings

I think it is crucial that any research group have a group meeting at a designated time each week. It surprises me that some groups do not do this, and that some people dislike their group meetings so much that they consider them a waste of time.
A typical group consists of one or a few faculty members, together with postdocs, grad students, and possibly some undergrads.

It is important that these weekly meetings are informal, relaxed and inclusive.
They should encourage learning, interaction and feedback.

What might happen at the meeting?
Here are a few things that we do in the Condensed Matter Theory group at UQ [senior faculty are Ben Powell and I].
These meetings are compulsory and they are in addition to a weekly meeting between each individual in the group and their supervisor.
Each week a group member is assigned to bring a cake or a packet of cookies/biscuits to share with the group. Group members provide their own drinks.
  • A group member gives a talk on the whiteboard about something they are currently working on, or a tutorial on a specific subject. Questions, particularly basic ones from students, are encouraged during the talk. PowerPoint is allowed only when essential for graphs of results.
  • A group member gives a practice talk for an upcoming speaking engagement, whether a seminar, a conference talk, or a Ph.D progress report. Detailed feedback, on both scientific content and presentation, is given afterwards.
  • Everyone in the group brings a paper they recently read and each has 7 minutes to convince the audience they should also read the paper.
  • Journal club. Everyone reads a pre-assigned paper and it is discussed in detail.
We try not to go over an hour.
For a few months we ran a weekly competition to see who could ask the most questions during the talk. The winner received a bottle of wine, but had to provide the prize for the following week.

Some large groups do need regular meetings to discuss issues such as lab maintenance, software development, .... I think it is important that these mundane but important issues do not crowd out science, and so they should be discussed in a separate meeting or fenced off into a second hour.

We have also had more specialised sub-group meetings that run for a limited time such as the following.
  • Reading groups. Working through a specific book, one chapter per week, e.g. those by Fulde, Phillips, and Hewson.
  • Student-only meetings. These give students greater freedom to teach each other and to ask basic questions.
  • Joint theory-experiment or chemistry-physics meetings, usually focused on a specific collaboration.
Our meetings are far from perfect. But I think the most important thing is to keep meeting and to keep experimenting with different formats.

What is your experience?
What do you think has contributed to the most effective and enjoyable meetings?

Saturday, August 29, 2015

Basic, bold, and boring: a claim about enzyme mechanism

This post concerns a basic but bold claim about the effects of a protein or solvent environment on chemical structure and reactivity. I am not clear on how original, radical, or controversial the claim is. In some sense, I think it is largely consistent with what Arieh Warshel has been saying for a long time. [See here for example].

I would be interested to hear feedback on the claim.

Consider some chemical reaction

A + B → C + D

One can consider the reaction in the gas phase [i.e. without an environment], in a solvent [polar or non-polar], or in a protein environment. The relative stability of the reactants and the products, the rate of the reaction, and the reaction mechanism [i.e. the reaction co-ordinate and transition state geometry] can vary dramatically between these environments.
This is what is amazing about enzymes: they can increase the rate of a reaction by a factor of a billion.

So what is the most basic hypothesis about the effect of the environment? It can do two significant things.

1. The bond lengths of A and/or B and/or their relative geometry are changed by the environment. For example, A and B are forced closer together.

2. A polar environment [e.g. water or a protein] can change the relative energies of the transition state, and/or the reactants and products. This is highly likely because most molecules have non-uniform charge distributions and significant dipole moments.

This claim has a natural understanding in terms of a simple diabatic state picture for the reaction. The environment can change the shapes of the diabatic potential energy surfaces and/or change the strength of the coupling of the two surfaces. [For example, in the figure below replace "incorrect" with "no environment" and "correct" with "environment"].
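Here is a minimal numerical sketch of that picture (my own illustration, with purely illustrative parameters, not taken from any particular paper): two displaced harmonic diabats coupled by a constant electronic matrix element. Shifting the relative energy of the diabats [as a polar environment might, point 2 above] and changing the coupling both move the barrier height and the location of the transition state.

import numpy as np

def adiabatic_surfaces(x, eps=0.0, coupling=0.2):
    """Lower and upper adiabatic surfaces from two coupled diabatic parabolas.

    x        : reaction co-ordinate (array)
    eps      : energy offset of the product diabat (environment dependent)
    coupling : electronic coupling between the diabats (environment dependent)
    All quantities are in arbitrary units; the parameter values are purely illustrative.
    """
    V1 = 0.5 * (x + 1.0) ** 2          # reactant diabat, minimum at x = -1
    V2 = 0.5 * (x - 1.0) ** 2 + eps    # product diabat, minimum at x = +1
    mean = 0.5 * (V1 + V2)
    half_gap = 0.5 * (V1 - V2)
    split = np.sqrt(half_gap ** 2 + coupling ** 2)
    return mean - split, mean + split

x = np.linspace(-2.5, 2.5, 501)
between = (x > -1.0) & (x < 1.0)       # region between the two diabatic minima
for label, eps, coupling in [("weak coupling, no offset", 0.0, 0.2),
                             ("polar environment (say)", -0.5, 0.4)]:
    lower, _ = adiabatic_surfaces(x, eps, coupling)
    i_ts = np.argmax(np.where(between, lower, -np.inf))   # transition state on the lower surface
    barrier = lower[i_ts] - lower[x < x[i_ts]].min()      # measured from the reactant-side minimum
    print(f"{label}: barrier ~ {barrier:.2f}, transition state at x ~ {x[i_ts]:.2f}")

Running this shows the barrier dropping and the transition state shifting along x when the offset and coupling change, which is all that points 1 and 2 above require.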


Why is this claim boring?
Well, it means that there is nothing "special" or unique about proteins. There is no new physics or chemistry.
It rules out exotic mechanisms such as dynamical effects and, in particular, collective quantum effects.

Finding out how an environment does change specific parameters is highly non-trivial.

Furthermore, outstanding, fascinating, and difficult problems remain in understanding and describing:

  • how the protein "engineers" the changes in the reaction potential energy surface,
  • how mutations distant from the "reaction centre" can sometimes have such a significant effect,  
  • the role of the hundreds of amino acids not close to the "reaction centre", i.e. why do proteins need to be so big? Is there a lot of redundancy? Or does one really need all those amino acids to produce a highly "tuned" and exquisite tertiary structure?
So is this claim "controversial" or "dogma" or "obvious"? 

Thursday, August 27, 2015

Conical intersections vs. Dirac cones, Chemistry vs. Physics: Similarities and differences

Conical intersections between potential energy surfaces get a lot of attention in the theoretical chemistry of electronic excited states (photochemistry) of molecules, particularly as a mechanism for ultrafast (i.e. sub-picosecond) non-radiative decay. The surfaces are functions of the spatial coordinates R=(x1,x2, ....) of the nuclei in the molecule.
In the past decade the (hard) condensed matter physics community has become obsessed(?) with Dirac cones [graphene, topological insulators, Weyl semimetals, ...]. They occur in the electronic band structure [one-electron spectrum] when two energy bands cross. Here the system has spatial periodicity and the k's are Bloch quantum numbers.
I want to highlight some similarities between conical intersections (CIs) and Dirac cones (DCs) but also highlight some important differences.

First the similarities.

A. Both CIs and DCs give rise to rich (topological) quantum physics associated with a geometric phase and the associated gauge field (a fictitious magnetic field), the Berry curvature, monopoles, ....

B. There are definitive experimental signatures associated with this Berry phase. However, obtaining actual experimental evidence is surprisingly difficult. For example, this paper discusses the problems with extracting the Berry phase from quantum oscillations in a topological insulator. This post discusses the elusive experimental evidence for CIs.

C. A history of under-appreciation. Both these concepts could have been elucidated in the 1930s, but were either ignored, or thought to be pathological or highly unlikely. CIs occur in the Jahn-Teller effect (1937) in systems with enough symmetry (e.g. a C_3 axis) to produce degenerate electronic states.
However, people then made the mistake of assuming that symmetry was a necessary, rather than a sufficient, condition for a CI. Given that most molecules, particularly large ones, have little or no symmetry, it was assumed CIs were unlikely. It was not until the 1980s, with the rise of high-level computational quantum chemistry and femtosecond laser spectroscopy, that people discovered that not only is symmetry unnecessary, but CIs are actually quite ubiquitous in large molecules. This is facilitated by the large number of nuclear co-ordinates.
DCs have only become all the rage over the past decade because of new materials: graphene, topological insulators, ...
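To make the mathematical analogy concrete, here is a minimal sketch (my own illustration, with illustrative parameters) of the generic two-level Hamiltonian H(q) = v_x q_x sigma_x + v_y q_y sigma_y that describes the vicinity of both a DC [where (q_x, q_y) are Bloch momenta measured from the touching point] and a CI [where they are the two branching-plane nuclear co-ordinates]. Its eigenvalues form the two sheets of a cone, and an eigenstate transported around the degeneracy point acquires a Berry phase of pi in both cases.

import numpy as np

def cone_energies(qx, qy, vx=1.0, vy=1.0):
    """Eigenvalues of the generic two-level Hamiltonian H = vx*qx*sigma_x + vy*qy*sigma_y.

    For a Dirac cone, (qx, qy) are Bloch momenta measured from the touching point;
    for a conical intersection, they are the two branching-plane nuclear co-ordinates.
    vx and vy (velocities / coupling constants) are illustrative.
    """
    gap = np.sqrt((vx * qx) ** 2 + (vy * qy) ** 2)
    return -gap, +gap                  # the two sheets of the cone

# The energies vary linearly with distance from the degeneracy point:
for r in [0.0, 0.1, 0.2, 0.4]:
    lo, hi = cone_energies(r, 0.0)
    print(f"|q| = {r:.1f}: E = {lo:+.2f}, {hi:+.2f}")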

In spite of the similarities above it is important to appreciate some significant differences in the physics associated with these entities.

1. The role of symmetry. As a minimum, DCs require translational symmetry and an infinite system to ensure the existence of a Bloch wave vector. Most require further symmetries, e.g. the sublattice symmetry in graphene, or something else in a topological insulator. As mentioned above, CIs don't involve any translational symmetry. One does not even need the local symmetry (e.g. C_3) that is found in some common structural motifs for CIs.

2. Good quantum numbers and quantum evolution. For DCs the Bloch wave vector k is a good quantum number. In the absence of scattering an electron in state k will stay there forever. For CIs R is a classical nuclear co-ordinate. If one starts on a particular surface one will "slide down" the surface and pass through the CI.

3. The role of correlations. DCs are generally associated with a band structure, an essentially one-electron picture. [Strictly, one could look at poles in spectral functions in a many-body system but that is not what one generally does here]. In contrast, CIs are associated with quantum many-body states not single electron states. In particular, although one can in principle have CIs associated with molecular orbital energies and find them with Hartree-Fock methods, in general one usually finds them with multi-reference [i.e. multiple Slater determinant] methods. For a nice clear discussion see this classic paper which explains everything in terms of what physicists would call a two-site extended Hubbard model.

4. Occupation of quantum states. For DCs one is generally dealing with a metal where all the k states below the Fermi energy are occupied. For CIs only one of the R's is "occupied".

The post was stimulated by Ben Levine and Peter Armitage.

Tuesday, August 25, 2015

Advice to boy and girl wonders

At every institution, an absolutely brilliant young student occasionally comes on the scene. A number of colleagues have pointed out to me that these students often seem to be in a "race". They [or their parents] have some goal like:

Get an undergraduate degree before most people graduate from high school.

Be the youngest person ever to get a Ph.D from university X.

Become a faculty member before they are 23.

Be the youngest person to ever to get tenure at university Y.

Sheldon Cooper from the Big Bang Theory embodies this. He endlessly reminds his friends that he graduated from college when he was 14 and was the youngest person at the time to receive the Stevenson award.

This big rush is a mistake on several grounds. Overall it reduces enjoyment, deep learning, and substantial achievement. It also increases stress. Furthermore, promising students slow down as they go up the academic ladder. In most countries, the clock really does start ticking at the Ph.D award date, i.e. eligibility for certain awards is determined by years since the Ph.D. Don't rush to get there.

This cautionary view is supported by a great child prodigy, the mathematician Terry Tao. A recent profile piece in The New York Times Magazine says:
Tao now believes that his younger self, the prodigy who wowed the math world, wasn’t truly doing math at all. ‘‘It’s as if your only experience with music were practicing scales or learning music theory,’’ .....  ‘‘I didn’t learn the deeper meaning of the subject until much later.’’
Aside: this is a very nice article that is worth reading. It is really encouraging to see a substantial article about a scientist or academic in the popular press that really engages with the content of their work and does not sensationalise the person.

Monday, August 24, 2015

Seeking definitive experimental signatures of a Weyl semimetal

Weyl and Dirac semimetals are getting quite a bit of attention. Part of this interest stems from the possible solid-state realisation of the chiral anomaly from quantum field theory. One proposed signature is negative longitudinal magnetoresistance. A different, arguably more definitive, signature is in the following paper.

Quantum oscillations from surface Fermi arcs in Weyl and Dirac semimetals 
Andrew C. Potter, Itamar Kimchi, and Ashvin Vishwanath

In a thin slab of material there are "Fermi arc" states on the top and bottom surfaces. When a magnetic field is applied perpendicular to the slab, there are unusual closed orbits (shown below) where an electron can move around the arc on the  top surface, tunnel via a bulk chiral state to the bottom surface, move around the arc on the bottom surface, and then tunnel back to the top surface.

The resulting Shubnikov de Haas oscillations have some unique signatures such as the periodicity and the dependence of the phase of the oscillations on the thickness of the sample.
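For orientation, here is the standard bulk Onsager relation that sets the SdH frequency for a conventional closed orbit (the arc-mediated Weyl orbits of Potter et al. modify the phase and introduce the thickness dependence, which this simple relation does not capture):

import numpy as np

hbar = 1.054571817e-34   # J s
e = 1.602176634e-19      # C

def sdh_frequency(k_F):
    """Onsager relation F = (hbar / 2 pi e) A_F for a circular extremal cross-section.

    k_F is the Fermi wave vector in 1/m; returns the oscillation frequency in tesla.
    """
    A_F = np.pi * k_F ** 2            # extremal cross-sectional area in k-space
    return hbar * A_F / (2 * np.pi * e)

# An illustrative small Fermi pocket, k_F ~ 0.04 inverse Angstroms:
print(f"F ~ {sdh_frequency(4e8):.0f} T")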

There is a very nice set of experiments to test these ideas.

Chirality transfer dynamics in quantum orbits in the Dirac semi-metal Cd3As2 
Philip J.W. Moll, Nityan L. Nair, Tony Helm, Andrew C. Potter, Itamar Kimchi, Ashvin Vishwanath, James G. Analytis
The main finding of this study is directly evident in the raw data: while parallel [magnetic] fields lead to a single SdH frequency, an additional higher frequency component Fermi surface associated with the surface oscillations appears for fields perpendicular to the surface. This high frequency is clearly distinguishable from higher harmonics of the low frequency Fermi surface.
Focused Ion Beams were used to prepare samples with different geometries, rectangular and triangular, shown below. In the latter the oscillations associated with chirality are washed out by destructive interference due to the dependence of phase on the slab thickness.
Aside: In Figure 3 they show experimental signatures of hydrodynamic flow.

I thank James Analytis for helpful discussions about this work.

There is also a very nice theory paper.
Axial anomaly and longitudinal magnetoresistance of a generic three dimensional metal 
 Pallab Goswami, J. H. Pixley, S. Das Sarma

It is quite pedagogical, comprehensive in scope, and contains some important new insights. One particularly significant insight is that negative magnetoresistance can occur without a Weyl or Dirac metal. Furthermore, in a system with a cylindrical Fermi surface, near the Yamaji angles [normally associated with semi-classical Angle-Dependent Magnetoresistance Oscillations (AMRO)] one can have only one partially full Landau level, leading to negative longitudinal magnetoresistance.

Hopefully, once I have digested this paper further, I will write something more about it. I am particularly curious as to whether this theory can explain the unusual angle-dependent interlayer magnetoresistance seen in a diverse set of strongly correlated electron metals.

Friday, August 21, 2015

I was wrong about impact factors

Previously, I stated  that "The only value I see in Impact Factors is helping librarians compile draft lists of journals to cancel subscriptions to in order to save money."

Now I don't even think Impact factors are good for that!

This was brought home last week when the UQ library announced that, because of the declining Australian dollar, it has to save A$1M and so will be cancelling some journal subscriptions. A proposed "cull" list has been circulated. The exact selection criteria are not clearly defined, but impact factor is stated as one of them.
Guess what? More Mathematics journals are slated for cancellation than in any other field! 
This is hardly surprising because the average number of citations per article in Mathematics is one third of that in Physics and Chemistry. Thus, IFs for maths journals will typically be smaller by a factor of three.

Also on the cancellation list are the American Journal of Physics and The Physics Teacher. It turns out the former has an impact factor of less than one. (I could not find an IF for the latter.)
This is hardly surprising because Am. J. Phys. largely comprises pedagogical articles that faculty can refer an advanced undergraduate or a beginning graduate student to. [They are also good for faculty who want to learn new things or obtain a deeper understanding of basics.] Such articles are extremely valuable, particularly for an institution that aims to do a good job of engaging undergraduates in research and improving pedagogy. However, they are unlikely to be cited much because they don't contain anything "new" and are not review articles.

Another issue that this journal cancellation exercise highlights is the pernicious practice of "bundling" that Elsevier and some other commercial publishers use. It is all or nothing. Institutions are forced to subscribe to a whole bunch of crap journals in order to get some decent ones.

Wednesday, August 19, 2015

Crystal structure transitions induced by isotopic substitution

At the level of the Born-Oppenheimer approximation, replacing hydrogen with deuterium in a molecule or crystal should not change anything. The "chemical forces" responsible for all types of bonding, and encoded in a potential energy surface, remain the same. However, in reality changes such as geometric isotope effects can occur, because the zero-point energy associated with hydrogen bonds changes. The essential physics is described here.
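A back-of-the-envelope sketch of the energy scale involved (illustrative numbers, not taken from the references below): in the harmonic approximation a stretch frequency scales as one over the square root of the mass, so deuteration lowers the zero-point energy of an X-H stretch by roughly a factor of 1/sqrt(2).

import numpy as np

h = 6.62607015e-34    # J s
c = 2.99792458e10     # speed of light in cm/s, so wavenumbers in 1/cm give energies in J
kB = 1.380649e-23     # J/K

omega_H = 3000.0                   # 1/cm, an assumed typical X-H stretch frequency
omega_D = omega_H / np.sqrt(2.0)   # harmonic approximation, treating the heavy atom as fixed

zpe_H = 0.5 * h * c * omega_H      # zero-point energy = (1/2) h c omega
zpe_D = 0.5 * h * c * omega_D
delta = zpe_H - zpe_D

print(f"ZPE(H) - ZPE(D) ~ {delta / (h * c):.0f} 1/cm ~ {delta / kB:.0f} K per bond")

Even though only a fraction of this difference varies between competing crystal structures, that fraction can be comparable to the small energy differences that typically separate molecular crystal polymorphs, which is why H/D substitution can tip the balance.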

In molecular crystals one can see not just small quantitative changes, such as changes in bond lengths of the order of a few hundredths of an Angstrom, but actual changes of the geometric arrangements of the molecules in the crystal. This "isotopic polymorphism" is nicely reviewed in a recent article by Klaus Merz and Anna Kupka.

A specific example is pyridine. The H and D polymorphs are shown below and taken from here. Note, the hydrogen bonds involved are relatively weak C-H...N bonds.


Why does this sensitivity to H/D matter in a broader context?

1. Understanding and calculating the relative stability of different possible crystal structures for organic molecular crystals represents a formidable theoretical challenge. Isotopic polymorphism shows that one also needs accurate calculations of the relative zero-point energies of the competing structures, making the challenge even greater.

2. As I posted before, an intriguing and outstanding problem concerning superconducting organic charge transfer salts is how H/D substitution allows one to tune between Mott insulating and superconducting states.

3. Protons matter in molecular biology! Yet one cannot "see" them with X-ray crystallography. The alternative, which is increasing in viability and power, is neutron crystallography. However, this usually requires replacing the hydrogens with deuterium, which means that the structure one determines is not necessarily the native structure. In many situations the differences are probably small. However, in situations with short hydrogen bonds [see e.g. here] the difference could be significant.

Tuesday, August 18, 2015

The large electronic entropy of bad metals

A common feature of bad metals is that at relatively low temperatures [of the order of the coherence temperature, which is much less than the non-interacting Fermi temperature] they have an entropy per electron of the order of Boltzmann's constant, k_B. This is more characteristic of a classical than a quantum system. For localised non-interacting spins the entropy per spin is k_B ln(2). In contrast, in a Fermi liquid such as an elemental metal, the electronic entropy is of order k_B T/T_F, where T is the temperature and T_F is the Fermi temperature (tens of thousands of K in an elemental metal).

I don't think this bad metal property of the large electronic entropy is emphasised enough, although it was highlighted here.

I illustrate this below with two sets of experimental data. The first set is measurements on YBCO, with x related to the doping; small x corresponds to the underdoped regime and x=1 to approximately optimal doping.
In the metallic state, the entropy increases approximately linearly with temperature and has a value of the order of 0.6 k_B at T=300 K.

Aside: previously I discussed how the entropy is maximal at optimal doping and this is reflected in a change in sign of the thermopower.

The second is from the same research group (20 years later) concerning a family of iron pnictides
Electronic specific heat of Ba1−xKxFe2As2 from 2 to 380 K
G Storey, J W Loram, J R Cooper, Z Bukowski, and J Karpinski
Note that here the entropy is divided by the temperature, so in a simple Fermi liquid the graph would be flat. For reference, the value of S/T [and the specific heat coefficient] calculated from DFT-based band structure calculations is about 10 mJ/mol/K^2. Hence, we see how strong correlations enhance the entropy. Furthermore, S/T reaches about 50 mJ/mol/K^2 at temperatures of the order of 100 K for x > 0.4, i.e. S is comparable to R ln(2).
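For a rough comparison of these scales (a back-of-the-envelope sketch using the numbers quoted above):

import numpy as np

R = 8.314            # gas constant, J/(mol K)
gamma_band = 10e-3   # J/(mol K^2), the S/T value from DFT band structure quoted above
gamma_meas = 50e-3   # J/(mol K^2), the measured S/T scale quoted above
T = 100.0            # K

print(f"Band-structure estimate: S = gamma*T ~ {gamma_band * T:.1f} J/(mol K)")
print(f"Measured scale:          S ~ {gamma_meas * T:.1f} J/(mol K)")
print(f"Independent spins:       S = R ln 2 ~ {R * np.log(2):.1f} J/(mol K)")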

Monday, August 17, 2015

Cherry picking theories

Cherry picking data is not just done by scientific "denialists" but also by some "respected" theorists who are seeking support for their scientific theory.

I recently realised that some experimentalists cherry pick theories to describe their experimental data. I heard a talk by a theorist who reported having several disturbing conversations along the following lines.

Experimentalist: We fitted our data to your theory.

Theorist: But the theory is not valid or relevant in the parameter regime of your experiment.

Experimentalist: We don't care. The theory fits the data.

Friday, August 14, 2015

New proposals to measure the shear viscosity of an electron fluid

Recently, several new approaches have been suggested for experimentally measuring the viscosity of the electron fluid in a metallic crystal. Previously, I posted about how ultrasound attenuation can be used to indirectly measure the viscosity. However, that method is arguably not sensitive enough for the small viscosities [of the order of n hbar, where n is the electron density] that are of particular relevance to possible quantum limits on the viscosity.
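For a sense of scale (a rough estimate with an illustrative carrier density, not taken from any of the papers below):

hbar = 1.054571817e-34   # J s

n = 1e28                 # an illustrative carrier density in 1/m^3 (a fairly low-density metal)
eta_quantum = n * hbar   # viscosity of order n*hbar

print(f"eta ~ n*hbar ~ {eta_quantum:.1e} Pa s   (for comparison, water: ~1e-3 Pa s)")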

Forcella, Zaanen, Valentinis, and van der Marel considered the electromagnetic properties of viscous charged fluids, finding possible new signatures of the viscosity such as a negative refractive index, a frequency-dependent peak in the reflection coefficient, and a strong frequency dependence of the phase. However, they note that these effects may be difficult to observe for viscosities of the order of the quantum limit, n hbar.

Tomadin, Vignale, and Polini considered a two-dimensional electron fluid in a Corbino disk device in the presence of an oscillating magnetic flux. They showed that the viscosity could be determined from the dc potential difference that arises between the inner and the outer edge of the disk. In particular, for viscosities of the order of n hbar, the potential difference varied significantly for oscillation frequencies in the MHz range.

Levitov and Falkovich recently considered the flow of an electron fluid in a micrometer-scale channel in the hydrodynamic regime, where the electron-electron collision rate is much larger than the momentum relaxation rate. They found that when the viscosity-to-resistance ratio is sufficiently large, viscous flow occurs, producing vorticity and a negative nonlocal voltage. [See the figure below].
Spatially resolved measurements of the voltage allow determination of the magnitude of the viscosity.

Torre, Tomadin, Geim, and Polini  considered the electron liquid in graphene in the hydrodynamic regime and showed that the shear viscosity could be determined from measurements of non-local resistances in multi-terminal Hall bar devices.

Although these last three proposals are promising for the two-dimensional electron fluids in graphene and semiconductor heterostructures, fabrication of the relevant micron-scale devices will be particularly challenging for bad metals such as cuprates and organic charge transfer salts.

Thursday, August 13, 2015

Reflecting on student teaching evaluations

I recently received my student evaluations for teaching last semester. I was pleased to see that the scores were very high and students made many positive comments about my teaching and the course.
I would like to think this is due to my brilliant performance. But it is not.
This year's positive results are in contrast to several years ago, when students in the same course were so unhappy that they met with the head of department to complain about me and the course. That year many students failed. This year more than half the class got the highest grade possible.

What brought about this dramatic change?
What did I do?
Actually, virtually nothing! The course content and difficulty are the same. The assignments and exams are basically identical, as they have been for the past decade. I did minor fine-tuning to my lectures, as I always do, and to the assessment mix. Students also do a pre-test to check prior knowledge, and the tutorials are more student-led.

The really significant change is in the students. From year to year, a small class of 5 to 15 is prone to significant statistical fluctuations in student quality and attitude.
Furthermore, I believe that the ethos and atmosphere in such a small class can be significantly shaped by a few individuals with strong personalities.
Positive attitudes such as hard work, interest, politeness, curiosity, enthusiasm, humility, punctuality, diligence, ..... affect others.
Similarly, negative attitudes such as laziness, boredom, rudeness, arrogance, lateness, whining, a sense of entitlement, ...  can sour a class.

I think the main differences in the student evaluations and in the students' results reflect not my performance or the quality or difficulty of the course, but the composition of the class.
I post this because this dependence on student quality seems to be rarely considered when the teaching of faculty is evaluated, particularly by administrators.

Teaching and learning is a two way street.

Wednesday, August 12, 2015

Shear viscosity: from dilute gases to dense liquids

I have received a lot of helpful feedback on a recent paper about shear viscosity in strongly interacting quantum fermion fluids. As a result I have learnt some interesting things that I will post about. Here is the first one.

The shear viscosity can be written in terms of a Kubo formula, which involves an unequal-time correlation function of the stress tensor.
In a general fluid there are two terms in the stress tensor: one associated with the kinetic energy and the second with the interparticle interaction. In dense classical liquids the interaction term dominates the Kubo formula, and the viscosity is associated with the Stokes-Einstein relation, in which the viscosity is inversely proportional to the particle self-diffusion constant.
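For reference, the classical (Green-Kubo) version of this formula can be written as follows [my notation; the quantum Kubo formula has the same structure, with the classical time-correlation function replaced by the appropriate quantum one]:

\eta = \frac{1}{V k_B T} \int_0^{\infty} dt \, \left\langle \Pi^{xy}(t) \, \Pi^{xy}(0) \right\rangle ,
\qquad
\Pi^{xy} = \sum_i m \, v_i^x v_i^y \; + \; \sum_{i<j} x_{ij} F_{ij}^y ,

where \Pi^{xy} is the volume-integrated off-diagonal stress; the first term is the kinetic contribution and the second is the interaction (virial) contribution referred to above.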

In contrast, in dilute gases and fluids the kinetic term dominates and the shear viscosity scales with the diffusion constant and scattering time. The crossover from the dilute to the dense case in a classical fluid is discussed here.

The case of the dilute classical gas is of particular historical interest. The viscosity scales with the density and the mean-free path. In a dilute gas the mean free path is inversely proportional to the density and the molecular cross section. This means that the viscosity is independent of the density (and pressure at fixed temperature). When Maxwell obtained this theoretical result from kinetic theory he found it so surprising that he tested it experimentally. According to this site,
In the attic of his house in Kensington, with the help of his wife, he carried out experimental measurements of gas viscosities in order to confirm the conclusions he had drawn about the effects of pressure and temperature. Many of these experiments were made between 51 °F (10.6 °C) and 74 °F (23.3 °C), and it appears that these temperatures were obtained simply by changing the temperature of the attic! This was arranged by Mrs. Maxwell, who organized the appropriate stoking of the fire. Some work was also done at 185 °F (85 °C), and this temperature was achieved by a suitably directed current of steam.
The results are described in this 1866 paper.
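Here is a minimal sketch of the elementary kinetic-theory estimate behind Maxwell's result (a standard textbook argument, with illustrative numbers for a nitrogen-like gas):

import numpy as np

kB = 1.380649e-23   # J/K
m = 4.65e-26        # kg, mass of an N2 molecule (illustrative gas)
sigma = 4.3e-19     # m^2, an illustrative kinetic cross-section for N2
T = 300.0           # K

def eta_kinetic(n):
    """Elementary estimate eta ~ (1/3) n m vbar l, with mean free path l = 1/(n sigma).

    The density n cancels, which is Maxwell's surprising result.
    """
    vbar = np.sqrt(8 * kB * T / (np.pi * m))   # mean molecular speed
    mfp = 1.0 / (n * sigma)                    # mean free path
    return n * m * vbar * mfp / 3.0

for n in [1e25, 2.5e25, 1e26]:                 # number densities in 1/m^3
    print(f"n = {n:.1e} /m^3: eta ~ {eta_kinetic(n):.1e} Pa s")

All three densities give the same viscosity, of order 1e-5 Pa s, close to the measured value for nitrogen at room temperature.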

For a zero-range interaction, as in the unitary Fermi gas (and presumably the Hubbard model), it can be shown that the potential term does not contribute to the shear viscosity. For a succinct discussion of these issues and relevant references see the section of this paper that I reproduce below. I thank Thomas Schafer for pointing this out to me.

Monday, August 10, 2015

Climate change action at the grass roots

When I was recently visiting my mother-in-law in Anacortes, Washington, she took my wife and me to a meeting of the local chapter of Transition, an international grass roots movement responding to climate change.
First, an employee of a local not-for-profit Sustainable Connections spoke briefly about home energy efficiency audits that they organise.
Then there was an interesting talk from a local climate change researcher, Roger Fuller, that focussed on the potential impact of climate change on surrounding Skagit County. The county is somewhat unusual because much of the water flowing through it comes from glacial snowmelt in the nearby Cascade mountains. Increased temperatures will mean a greater rain/snow ratio, and greater river flow in the winter and less in the spring. This could have significant effects on the frequency of extreme flooding events.

I think these local initiatives are particularly important beyond the immediate concrete [but modest] energy savings and reduced CO2 emissions that they produce. Such initiatives provide models for wider, more ambitious programs and show politicians and policy makers that some people are concerned about climate change and willing to make lifestyle changes.

Besides the significant benefit of addressing climate change, this initiative has the additional benefit of building community in the face of rapidly declining social capital.

I also picked up a copy of the excellent free booklet Climate Change: Evidence, Impacts, and Choices, produced by the National Research Council for the general public.

Anacortes is one of 50 communities [with a population between 5,000 and 250,000] in the USA that are competing for the $5 million Georgetown University Energy Prize. Each community tries to cut its energy consumption by as much as possible in 2015.

Thursday, August 6, 2015

Research environment is over-rated

For assessing grant applications in Australia, and some other countries, one criterion is "research environment". This means different things to different people. Unfortunately, I too often see both applicants and assessors/referees using this criterion in an unhelpful and/or meaningless way.

When does the environment of the proposed research project matter?
Here are a few ways, listed in order of decreasing importance.

Access to crucial equipment, materials, and infrastructure.
For example, if the project involves femtosecond laser spectroscopy, then there is little point if the researchers do not have access to the relevant lasers, probably at their own institution.
Similarly, neutron scattering requires relevant beam time on a user facility. Experimental studies of strongly correlated electron materials require access to high quality single crystals of the relevant materials. Large scale computational chemistry requires access to the relevant supercomputing facilities.

Access to intellectual resources.
Colleagues with relevant technical expertise may enhance the project. Also, a theory (experimental) project can be enhanced by the local presence of an experimental (theory) group that is actively interested in similar problems and systems.

A lively and interactive department with a history of fostering new collaborations.

What are debatable measures of research environment?
The overall "ranking" of the institution that will host the grant. Previously, I posted about a study that investigated whether moving to a more highly ranked institution improved research quality [as measured by citations].
The "ranking" of the host department in some silly national "research quality assessment" exercise.
The geographic proximity of "high profile" research groups or "big fancy shiny buildings" with $M budgets in vaguely related research areas.

What do you think?
How does research environment help research quality?

Tuesday, August 4, 2015

Searching for conical intersections for singlet fission

Previously I have posted about the fascinating challenge of understanding singlet fission [and the inverse process of triplet-triplet annihilation] in large organic molecules.  A key feature to understand is how fission can occur in less than 100 femtoseconds, suggestive of a conical intersection between excited state potential energy surfaces.

In Telluride Nandini Ananth gave a nice talk about work described in the paper

The Low-Lying Electronic States of Pentacene and Their Roles in Singlet Fission 
Tao Zeng,  Roald Hoffmann , and Nandini Ananth

Diabatic states provide a natural and powerful approach to understanding what is going on.
The authors perform high-level quantum chemistry calculations to describe the relevant electronic excited states. They claim that for a pair of pentacene molecules one needs to include at least six diabatic states. The dominant electronic configuration of each is shown in the schematic below.
We find that only one of the two charge-transfer states, ac, is engaged in the SF [singlet fission] in pentacene; it is the low-lying charge-transfer state that gets closer to the multi- and single-exciton states. Moreover, the ac diabat can move into degeneracy with the single-exciton states, more effectively mediating the mixing of the bright single- to and dark multiexciton diabats. This finding is different from the basic assumption of high-lying charge-transfer states in the superexchange model, emphasizing the need to adapt the general SF model to specific cases.
Aside: I wonder if this is one of the few papers that Hoffmann has co-authored in which strong electron correlations are central.
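A minimal sketch of why the energy of the charge-transfer diabat matters (my own illustration of generic superexchange, not the model used in the paper): second-order perturbation theory gives an effective coupling between the single-exciton and multi-exciton diabats, mediated by the CT state, of order t1*t2/Delta, so lowering the CT energy Delta strengthens the mixing.

def effective_coupling(t1, t2, delta):
    """Superexchange estimate: S1 and TT diabats couple via a CT state lying delta higher.

    t1 : coupling S1 <-> CT, t2 : coupling CT <-> TT (illustrative values, in eV)
    Returns the second-order effective S1 <-> TT coupling, t1*t2/delta.
    """
    return t1 * t2 / delta

for delta in [1.0, 0.5, 0.2]:   # CT energy above the S1/TT manifold, in eV
    lam = effective_coupling(0.05, 0.05, delta)
    print(f"Delta_CT = {delta:.1f} eV -> lambda_eff ~ {1000 * lam:.1f} meV")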

In more recent work, the authors have tried to pin down the relevant nuclear co-ordinate [vibrational mode] associated with a conical intersection. It is not the intermolecular separation, but may instead be the relative orientation [twisting] of the two pentacene molecules. This has included some constructive interaction with the experimental group of Luis Campos.

From Leo Szilard to the Tasmanian wilderness

Richard Flanagan is an esteemed Australian writer. My son recently gave our family a copy of Flanagan's recent book, Question 7. It is...