Thursday, March 7, 2013

How does the spin-statistics theorem apply in condensed matter?

The spin-statistics theorem is an important result in quantum field theory. It shows that particles with integer spin must be bosons and particles with half-integer spin must be fermions.

Confusion then arises in the quantum many-body theory of condensed matter because there are theories [and materials!] which involve quasi-particles which appear to violate this theorem. Here are some examples:
  • Spinless fermions. These arise in one-dimensional models. For example, the transverse field Ising model.
  • Schwinger bosons which are spinors (bosonic spinons). These arise in Sp(N) representations of frustrated quantum antiferromagnets. They were introduced by Read and Sachdev.
  • Anyons. In two dimensions one can have quasi-particles which obey neither Bose nor Fermi statistics.
How is this possible? As with many apparent inconsistencies, the answer lies in the underlying assumptions required to prove the theorem. These include assuming:
  1. The theory has a Lorentz invariant Lagrangian.
  2. The vacuum is Lorentz invariant.
  3. The particle is a localized excitation. Microscopically, it is not attached to a string or domain wall.
  4. The particle is propagating, meaning that it has a finite, not infinite, mass.
  5. The particle is a real excitation, meaning that states containing this particle have a positive definite norm.
I suspect that three spatial dimensions [and a non-interacting, i.e. quadratic, Hamiltonian] may be further implicit assumptions.

In condensed matter, one or more of the above assumptions may not hold. For example, 
  • inclusion of a discrete lattice breaks Galilean (let alone Lorentz) invariance
  • spontaneous symmetry breaking  
  • topological order can lead to non-local excitations
  • in one dimension spinless fermions may be non-local [e.g. associated with a domain wall]
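The last bullet can be made concrete via the Jordan-Wigner transformation, which builds a spinless fermion out of spin-1/2 operators by attaching a non-local string of sigma^z operators to sigma^-. The toy numerical check below (a 4-site chain of my own construction, not taken from any particular paper) verifies that the string-attached operators anticommute like fermions, even though the underlying spins on different sites commute.

```python
import numpy as np
from functools import reduce

# Pauli matrices and identity for a single spin-1/2 site
sz = np.diag([1.0, -1.0])
sm = np.array([[0.0, 0.0], [1.0, 0.0]])  # sigma^-: flips up to down
I2 = np.eye(2)

def jw_fermion(j, n):
    """Jordan-Wigner fermion c_j on an n-site chain:
    a string of sigma^z on all sites k < j, attached to sigma^- at site j."""
    ops = [sz] * j + [sm] + [I2] * (n - j - 1)
    return reduce(np.kron, ops)

n = 4
c = [jw_fermion(j, n) for j in range(n)]

# Spins on different sites commute, yet the string makes these anticommute:
# {c_i, c_j} = 0 and {c_i, c_j^dagger} = delta_ij (matrices are real, so .T is the dagger)
assert np.allclose(c[0] @ c[2] + c[2] @ c[0], 0)
assert np.allclose(c[0] @ c[2].T + c[2].T @ c[0], 0)
assert np.allclose(c[1] @ c[1].T + c[1].T @ c[1], np.eye(2**n))
```

Remove the sigma^z string and the operators on different sites simply commute; the non-local string is exactly the kind of attachment that evades the locality assumption of the theorem.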
Thanks to Ben Powell for reminding me of this question.

Wednesday, March 6, 2013

Why do you keep publishing the same paper?

It is the easiest thing to do.
You get an interesting new result [a new technique or a new system] and then you publish a paper.
Now, there is lots of "low-lying fruit".
There are a few loose ends to tie up, and so you write another paper providing a bit more evidence that the first one was correct.
Or you apply a slightly different technique or probe to your new system.
Or you apply your new technique to a slightly different system.

Sometimes this may be reasonable, or even important.
But, other times this recycling may just reflect our laziness, lack of originality, or succumbing to the pressure to add more lines to our CV.

This issue was first brought to my attention when I was a postdoc. A research fellow suggested to me that each person in the group basically had a single paper they were "republishing". This shocked me. I am not sure this was fair but I have not forgotten the concern.

Later, a colleague was evaluating Professor X and told me he thought that "every paper X wrote was the same." On reflection, I think this was quite harsh. X had developed a powerful technique that they had applied to a range of systems. The technique was not easy to use but often produced definitive results. In contrast, other scientists X was being compared to might publish on a more diverse range of subjects, but not produce definitive results. Like Galileo, I think the former is more valuable.

We need to consider whether we are vulnerable to such criticism, even if it may be unfair. Unfortunately, perceptions do matter.

But, we should also ask whether it would be better if we moved on to something else, or at least diversified. Perhaps we should leave others to tie up the loose ends or take the next steps. I suspect that is what great scientists do.

I welcome suggestions of criteria to help decide when "enough is enough".

Tuesday, March 5, 2013

How many decades do you need for a power law?

Discovering power laws is an important thing in physics.
Often people claim they have evidence for one.
My question is:

Over how many orders of magnitude must the data follow the apparent power law for you to believe it?

Often I read papers or hear speakers showing just one decade (or less!).
Is this convincing? Is it important?

Personally, my prejudice is that I need to see at least 1.5 decades before I even take notice. Two decades is convincing, and three or more is impressive.
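A toy numerical illustration of why more decades make a fitted exponent more trustworthy (my own sketch; the noise level, sample size, and exponent are invented for illustration): fit the same noisy power law over one decade and over three, and compare the standard error of the fitted exponent.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_power_law(decades, n=50, alpha=2.0, noise=0.05):
    """Fit y = A * x**alpha to noisy synthetic data spanning `decades`
    orders of magnitude. Returns the fitted exponent and its standard
    error from a straight-line fit in log-log coordinates."""
    x = np.logspace(0.0, decades, n)
    y = x**alpha * np.exp(noise * rng.standard_normal(n))  # multiplicative noise
    logx, logy = np.log10(x), np.log10(y)
    slope, intercept = np.polyfit(logx, logy, 1)
    resid = logy - (slope * logx + intercept)
    se = np.sqrt(resid.var(ddof=2) / (n * logx.var()))  # standard error of slope
    return slope, se

for d in (1.0, 3.0):
    slope, se = fit_power_law(d)
    print(f"{d:.0f} decade(s): exponent = {slope:.3f} +/- {se:.3f}")
```

The standard error of the exponent scales inversely with the spread of log x, so at fixed noise and number of points, tripling the number of decades cuts the uncertainty roughly threefold.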

What do other people think?

Some of the most important power laws are those associated with critical phenomena (and scaling). The most impressive experiments observe thermodynamic quantities following a power law in the deviation from the critical temperature over many orders of magnitude. My favourite experiment involved superfluid helium on the space shuttle and observed scaling over 7 decades!

Distinguishing quantum and classical turbulence

Classical turbulence is hard enough to understand. How about turbulence in a quantum fluid such as superfluid helium?
Is there any difference?
There is a nice viewpoint Reconnecting to superfluid turbulence which is a commentary on the 2008 PRL Velocity Statistics Distinguish Quantum Turbulence from Classical Turbulence.
A key difference between the quantum and classical case concerns the reconnection of vortices.

Monday, March 4, 2013

Interplay of dynamical and spatial fluctuations near the Mott transition

There is a nice preprint The Crossover from a Bad Metal to a Frustrated Mott Insulator by Rajarshi Tiwari and Pinaki Majumdar.

They study my favourite Hubbard model: the half-filled model on an anisotropic triangular lattice, within a new approximation scheme.
Basically, they start with a functional integral representation and ignore the dynamical fluctuations in the local magnetisation. One is then left with calculating the electronic spectrum for an inhomogeneous spin distribution and then averaging over these distributions with the relevant Boltzmann weights. This has the significant computational/technical advantage that the calculation reduces to a classical Monte Carlo simulation.
Hence, it treats spatial spin fluctuations exactly while neglecting dynamical fluctuations. 
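A minimal sketch of this static-field-averaging idea (a generic toy of my own, not the authors' actual scheme: the frozen configurations here are sampled with equal weight rather than Boltzmann weights, and the chain size and field strength are invented): freeze a random classical Ising field on each site of a tight-binding chain, diagonalize the electrons in each frozen configuration, and average the density of states.

```python
import numpy as np

rng = np.random.default_rng(1)

def tb_hamiltonian(fields, t=1.0):
    """1D tight-binding chain with a frozen on-site field on each site."""
    n = len(fields)
    H = np.diag(fields)
    idx = np.arange(n - 1)
    H[idx, idx + 1] = H[idx + 1, idx] = -t  # nearest-neighbour hopping
    return H

def averaged_dos(n=40, samples=200, field=2.0):
    """Density of states averaged over random frozen +/- field configurations."""
    bins = np.linspace(-4.0, 4.0, 41)
    counts = np.zeros(len(bins) - 1)
    for _ in range(samples):
        fields = field * rng.choice([-1.0, 1.0], size=n)  # one frozen "spin" snapshot
        eigs = np.linalg.eigvalsh(tb_hamiltonian(fields))
        counts += np.histogram(eigs, bins=bins)[0]
    return bins, counts / (samples * n)

bins, dos = averaged_dos()
```

Each snapshot is treated exactly, so spatial fluctuations of the field are fully included; but because every snapshot is static, all dynamical fluctuations are lost — the trade-off being discussed here.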

This is a nice study because it complements the approximation of dynamical mean-field theory (DMFT), which ignores spatial fluctuations but treats dynamical fluctuations exactly.

The calculation captures some of the important properties of the model: the Mott transition, a bad metal, and a possible pseudogap phase.

This shows how an anisotropic pseudogap can arise in the model due to short-range antiferromagnetic spin fluctuations (clearly shown in Figure 5 of the paper, reproduced below). 

However, as I would expect, this approximation cannot capture some of the key physics that DMFT does: the co-existence of Hubbard bands and a Fermi liquid. This difference is clearly seen in the optical conductivity calculated by the two different methods.

There must be some connection with old studies [motivated by the cuprates] of Schmalian, Pines, and Stojkovic, of electrons coupled to static spin fluctuations with finite-range correlations [see e.g. this PRB].  

This combined importance of dynamical and spatial fluctuations highlights to me the value of a recent study by Jure Kokalj and me, which treated them on an equal footing using the finite-temperature Lanczos method on small lattices.

Saturday, March 2, 2013

Problems @ email.edu

Email continues to create problems for me and some of my colleagues. Here are a few things to consider and be diligent about.

Be circumspect about what you write. Assume any email you write may be forwarded, either intentionally or by mistake, to the "wrong" party.

Wait 24 hours. Don't hit the reply (or forward) key in a rush. This may lead to saying things you regret later.

Turn it off. Limit how many times a day you look at it. It can waste a lot of time and be a significant distraction. Do you really need email on your mobile phone?

Think about the informality of your style. Perhaps the formality of what you write should be in proportion to the seniority of (or your personal closeness to) the person you write to.

The amount of time you spend composing an email should be in proportion to its importance.

Don't use the reply option if the subject of your email is different to the message you are replying to. This is a lazy way to find someone's address, but just confuses or irritates the recipient.

Three years ago I wrote a similar post.

I welcome comments and war stories.


Friday, March 1, 2013

Mistakes happen

I was disappointed to find a mistake in one of the figures of my recent paper on hydrogen bonding. Fortunately, the mistake has no implications for the results in the paper. I just made a basic mistake when using Mathematica(!) to produce the figure. (Rather ironic and noteworthy given recent discussions on this blog about the dangers of Mathematica).

The upper two plots in Figure 2 of the paper should be replaced with those below.

The mistake was kindly pointed out to me by Sai Ramesh and Bijyalaxmi Athokpam during my recent visit to Bangalore. They found they could not reproduce the figure and were wondering why.

In hindsight, it is obvious that there is something wrong with the original plots. Consider the energy gap between the two adiabatic curves at the co-ordinate at which the diabatic curves cross. This energy is 2 times Delta, the off-diagonal matrix element which couples the two diabatic states. A major idea/assumption in the paper is that Delta decreases exponentially with increasing R, the donor-acceptor distance. However, in the original figure this gap does not vary with R! The new curves above clearly show this trend.
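The gap argument can be checked with a generic two-state model (the harmonic diabatic curves and parameter values below are my own toy choices, not the potentials from the paper): diagonalizing the 2x2 Hamiltonian with diabatic energies V1(q), V2(q) and coupling Delta(R) = Delta0 exp(-a R) gives an adiabatic splitting that equals exactly 2 Delta(R) at the crossing point, and so must shrink as R grows.

```python
import numpy as np

def adiabatic_gap(R, q=0.0, delta0=1.0, a=2.0):
    """Splitting between the adiabatic curves of a two-state model.

    V1, V2: toy harmonic diabatic curves crossing at q = 0.
    Delta(R) = delta0 * exp(-a * R): coupling assumed to decay
    exponentially with the donor-acceptor distance R.
    """
    V1 = 0.5 * (q + 1.0) ** 2
    V2 = 0.5 * (q - 1.0) ** 2
    Delta = delta0 * np.exp(-a * R)
    # eigenvalue splitting of the matrix [[V1, Delta], [Delta, V2]]
    return 2.0 * np.sqrt((0.5 * (V1 - V2)) ** 2 + Delta**2)
```

At q = 0 the diabatic energies are equal, the splitting reduces to 2*Delta(R), and any plot of the adiabatic curves at several R values must show the gap at the crossing shrinking with R.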

It is amazing to me that this basic point was missed by me, and by several colleagues and referees who read the paper. But I must take full responsibility.

What should we learn from this?

1. No matter how many times we check something, we can still make basic mistakes. The easiest person to fool is yourself.
Co-authors need to be particularly diligent in checking each other's work.
A fresh set of eyes may reveal problems.
Trying to reproduce results from scratch is often a good check.

2. Just because something is published does not mean it is correct!
When we find something in a published paper that does not seem quite right, we should not assume that it must be us who are mistaken.

Mistakes happen.
