Tuesday, February 28, 2012

A test of pre-requisite knowledge and skills

Today I gave my PHYS4030 Condensed Matter Physics students a pre-test, a 50-minute exam to test whether they have some of the background knowledge and skills I consider necessary for the course.

I am pretty happy with the test I came up with as it tested some basic skills as well as knowledge. But I am sure it can be improved.
I welcome comments on it and examples of alternatives.

I marked the exams and gave them back to the students. I did not keep a record of the marks, and they do not count at all towards the final grade.
Students who feel they did poorly can take the exam home, do some review, and then submit a new set of answers, which I will mark.

Most of the marks were in the 50-85% range, which suggests to me that the test was set at about the right level, although, of course, I really wish they were all scoring above 80%.

A few random thoughts:

1. I think this exercise also has value as a bit of a reality check and wake-up call at the beginning of the semester. It reminds students that they are going to have to sit an exam in the subject, and that they are going to need a lot of material they may have forgotten or never properly mastered. Normally, some students don't get the wake-up call until the mid-semester exam.

2. It helps students get familiar [in a non-threatening context] with the kind of exams I set and how I mark them.

3. As I have posted before, I still find it disturbing that there are final-year undergrads who struggle with basic things such as
  • working with physical units
  • sketching graphs
  • basic calculus and analysis
  • interpreting a graph of experimental data 
4. If students fail the test, the message should be clear: either drop the course or expect to do an extraordinary amount of work to catch up. Otherwise, you are going to fail. Don't say you were not warned.

Monday, February 27, 2012

Why is condensed matter physics so hard for undergraduates?

The new semester has started here and I am helping teach PHYS4030 Condensed Matter Physics. We basically cover some fraction of Ashcroft and Mermin. For some students, particularly those with weak backgrounds, this is a difficult course.
Some find it much harder than other courses. Why?
I can think of a few reasons why CMP can be more demanding than other courses.

It requires a working knowledge of basic thermodynamics, statistical mechanics, kinetic theory, electromagnetism, and quantum mechanics. Weak students do OK in some of these subjects but poorly in others; they also do not remember much from previous courses and struggle to apply what they did learn in new contexts. Hence, CMP really exposes these weaknesses.
Furthermore, one has to understand how to integrate all this knowledge.

There is an emphasis on
  • orders of magnitude estimates [an example is sketched after this list]
  • making approximations
  • relating theory and experiment
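To give a concrete feel for the first of these, here is a minimal sketch of the kind of order-of-magnitude estimate I have in mind; the electron density is the textbook value for copper, and this is just an illustration rather than a problem from the course.

```python
# A typical back-of-the-envelope estimate: the Fermi energy and Fermi
# temperature of a simple metal, using the free electron model.
# The electron density below is the standard value for copper.
from math import pi

hbar = 1.055e-34   # J s
m_e = 9.109e-31    # kg
k_B = 1.381e-23    # J/K
eV = 1.602e-19     # J
n = 8.5e28         # conduction electron density of copper (m^-3)

E_F = hbar**2 * (3 * pi**2 * n)**(2 / 3) / (2 * m_e)
print(f"E_F ~ {E_F / eV:.1f} eV")    # ~ 7 eV
print(f"T_F ~ {E_F / k_B:.1e} K")    # ~ 8 x 10^4 K
```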
On the other hand, for these same reasons, motivated and well-prepared students can find a condensed matter course very stimulating and interesting. Such a course also provides some skills (model building and testing, synthesis, estimates) which are useful in much broader contexts.

I welcome alternative thoughts.

Friday, February 24, 2012

What is the ground state of solid hydrogen?

The Journal of Chemical Physics website has a fascinating podcast with Roald Hoffmann, Neil Ashcroft, and Vanessa Labet talking about a series of four papers they have just published about "molecular" hydrogen under pressure. The papers illustrate some very rich and subtle physics and chemistry.

The discussion highlights the importance of both physical and chemical insight and of simple models, and how there are still old problems like this waiting to be solved.

Questionable paper titles

There is an interesting article by Ben Goldacre in the Guardian newspaper which nicely summarises research on the following questions:

Will asking a question in the title get your paper cited more?
No. But it will be downloaded more!

What is the evidence that having your paper mentioned in the New York Times will increase its citation rate?

Thursday, February 23, 2012

Overdoped cuprates are an anisotropic marginal Fermi liquid II

Jure Kokalj, Nigel Hussey, and I have just completed a paper, Transport properties of the metallic state of overdoped cuprate superconductors from an anisotropic marginal Fermi liquid model.

We show how a relatively simple model self-energy [considered earlier in this PRL] gives a nice quantitative description of a wide range of experimental results on Tl2201 including intra-layer resistivity, frequency-dependent conductivity, and the Hall resistance. No new parameters are introduced beyond those needed to describe angle-dependent magnetoresistance experiments from Nigel's group.

One thing I found striking was just how sensitive the Hall conductivity is to anisotropies in the Fermi surface and the scattering rate [a point emphasized by Ong with his beautiful geometric interpretation].
We also show that our model self-energy successfully describes both the resistivity (with a significant linear-in-temperature dependence) and the Hall angle (which varies as ~T^2) without invoking exotic new theories.
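For orientation, the generic (isotropic) marginal Fermi liquid self-energy has the familiar form

\Sigma(\omega, T) \simeq \lambda \left[ \omega \ln\frac{x}{\omega_c} - i \frac{\pi}{2} x \right], \qquad x = \max(|\omega|, k_B T),

where \omega_c is a high-energy cutoff; in the anisotropic version the coupling \lambda depends on the position on the Fermi surface. [This is only the generic form, included as a reminder; the precise parametrisation is spelt out in the paper.]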

A key outstanding challenge is to connect our model self-energy [which is valid in the overdoped region] to possible forms for the underdoped region where the pseudogap occurs.

We welcome comments.

Wednesday, February 22, 2012

Strongly correlated electron systems in high magnetic fields IV

The observed sensitivity of strongly correlated metals to laboratory magnetic fields of the order of 5-50 tesla presents a significant theoretical puzzle and challenge. There have been very few calculations on lattice models such as the Hubbard model in a magnetic field, and the few that have been done see only very small perturbative effects at the scale of laboratory fields; huge fields of the order of a thousand tesla are required for any significant effect, such as a change in the ground state.

In terms of the coupling of the magnetic field to the orbital degrees of freedom, most studies have been on ladder models (e.g. this PRB), at zero temperature, and see significant effects only at extremely high fields, of the order of thousands of tesla, when there is of order a quantum of magnetic flux through a single lattice plaquette. The much smaller field scale of the upper critical field for superconductivity corresponds to the much longer length scale of the superconducting coherence length; this longer length scale may only be accessible in calculations on sufficiently large square lattices.
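To put rough numbers on these two field scales, here is a minimal sketch; the lattice constant and coherence length are assumed values, chosen to be typical of cuprates rather than taken from any particular paper.

```python
# Rough comparison of the two orbital field scales mentioned above.
# The lattice constant and coherence length are assumed values,
# chosen to be typical of cuprates.
from math import pi

h = 6.626e-34    # Planck constant (J s)
e = 1.602e-19    # electron charge (C)
a = 0.4e-9       # lattice constant (m), assumed
xi = 2.0e-9      # superconducting coherence length (m), assumed

B_plaquette = (h / e) / a**2              # one flux quantum h/e per plaquette
B_c2 = (h / (2 * e)) / (2 * pi * xi**2)   # orbital upper critical field, flux quantum h/2e

print(f"One flux quantum per plaquette: B ~ {B_plaquette:.0f} T")   # ~ 26,000 T
print(f"Orbital upper critical field:   B ~ {B_c2:.0f} T")          # ~ 80 T
```

The ratio of the two scales is roughly 4 pi (xi/a)^2, i.e. a few hundred here, so a few thousand tesla already corresponds to a sizeable fraction of a flux quantum per plaquette.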

How many metrics do you need?

Different metrics claiming to measure research impact, such as the h-index, are receiving increasing prominence in grant and job applications. I have written before that I think they have some merit as a blunt instrument to filter applications, particularly for people at later career stages.
However, I am noticing an increasingly silly trend to cite a whole range of metrics, where I think one or two (probably the h-index and the m-index = h-index divided by the number of years since the Ph.D) would suffice. I have seen not just a paragraph but a whole page(!) of analysis citing all sorts of metrics [e.g. comparing an author's citation rate in a particular journal to that journal's impact factor, the number of papers with more than 50 citations, the citation rate relative to others in the field, ... the list goes on and on]. Don't people have better things to do with their time?
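For concreteness, here is a minimal sketch of the two metrics I would keep; the citation counts and career length below are made up.

```python
# The two metrics I would keep, for a hypothetical citation record.
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

citations = [120, 85, 60, 40, 33, 20, 12, 9, 5, 3, 1]   # made-up citation counts
years_since_phd = 10                                     # made-up career length

h = h_index(citations)
print("h-index =", h)                    # 8 for this list
print("m-index =", h / years_since_phd)  # 0.8
```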

In the end it becomes like university rankings. Every university seems to cite the one in which they rank the highest.

Monday, February 20, 2012

Another triangular lattice Heisenberg antiferromagnet

Following my recent post An end to a frustrating materials search, Radu Coldea brought to my attention a 2011 PRL which reports thermodynamic studies of the analogous Cu compound, Ba3CuSb2O9.

The authors suggest this material is also described by the antiferromagnetic Heisenberg model on the isotropic triangular lattice. They estimate the exchange interaction J ~ 32 K, but observe no magnetic ordering down to 0.2 K. This leads the authors to suggest the compound has a spin liquid ground state. It should be stressed that this absence of magnetic ordering is inconsistent with the material being described by a model with purely nearest neighbour coupling.

They observe a large Schottky anomaly in the specific heat and suggest that 5% of the Cu2+ ions are on Sb sites.
It is not clear to me
-what effect these large impurity contributions have on the subtractions which are performed to extract the pure lattice magnetic contribution to the susceptibility and specific heat.
-whether the disorder present would be sufficient to prevent magnetic ordering.

It would be nice to see comparisons of the experimental data with calculations of the temperature dependence of the specific heat for the Heisenberg model on the triangular lattice [see, for example, this PRB by Bernu and Misguich]. They show that the specific heat should have a maximum at a temperature of about 0.8 J. Experimentally, the maximum is at about 10 K, suggesting a value of J ~ 12 K, much smaller than that deduced from the experimental susceptibility.

Regardless of the above issues, the discovery of this new material is an exciting development.

Get expert feedback about your teaching

The Weekend Australian newspaper had an interesting article about the success of Asian high schools [particularly Shanghai, Hong Kong, Korea, and Singapore]. The article is largely based on a recent report from the Grattan Institute, an independent Australian think tank.

One thing the report emphasizes is the value of teachers getting feedback from their colleagues (and other expert classroom observers) about their teaching. At UQ we are increasingly being required/encouraged/expected to do this.

Saturday, February 18, 2012

An end to a frustrating materials search

The Heisenberg model for anti-ferromagnetically coupled spins on an isotropic triangular lattice is one of the most widely studied (and mentioned) lattice models in quantum many-body physics. This is in spite of the absence of any actual materials described by it! This incredible theoretical interest was stimulated by Anderson's 1987 Science paper on an RVB theory for cuprate superconductors. He invoked his earlier work with Fazekas suggesting that the model had a spin liquid ground state. This turned out to be incorrect: the model does exhibit magnetic ordering and spontaneously broken symmetry, just like conventional antiferromagnets.

However, the materials drought may be over. There is a recent PRL, Experimental Realization of a Spin-1/2 Triangular-Lattice Heisenberg Antiferromagnet, by Yutaka Shirata, Hidekazu Tanaka, Akira Matsuo, and Koichi Kindo.

They report magnetisation measurements on Ba3CoSb2O9 and argue that the magnetic Co2+ ion has an effective spin 1/2 [a Kramers doublet] arising from a combination of crystal field and spin-orbit coupling effects. The temperature and magnetic field dependence of the magnetisation are quantitatively consistent with a spin-isotropic Heisenberg model on the triangular lattice with J = 18 K and g = 3.8. Of particular note is that they observed a theoretically predicted plateau in the magnetisation, due to a new magnetically ordered phase which is unstable in the classical model.
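For reference, the model being fitted is, in one common convention,

\mathcal{H} = J \sum_{\langle i,j \rangle} \mathbf{S}_i \cdot \mathbf{S}_j \; - \; g \mu_B B \sum_i S_i^z ,

an antiferromagnetic (J > 0) Heisenberg model on the triangular lattice with the usual Zeeman coupling to the field, with J/k_B ≈ 18 K and g ≈ 3.8 from the fits.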

These experiments were performed on powders. Hopefully someone will be able to grow large single crystals suitable for inelastic neutron scattering. This would enable testing of theoretical predictions that, due to the interplay of frustration and quantum fluctuations, the spin excitation spectrum is quite anomalous. In particular, it should exhibit "rotons" which are remnants of RVB physics.


An important theoretical goal will be to derive the effective Hamiltonian and its parameter values from DFT-based electronic structure calculations, as was done recently [and rather nicely] for Cs2CuCl4 and Cs2CuBr4.

Thursday, February 16, 2012

Deconstructing the Nernst effect in electron doped cuprates

The graph below shows the temperature dependence of the Nernst signal measured in the normal metallic state of a family of electron doped cuprates Pr_{2-x}Ce_xCuO_4. It is taken from a 2007 PRB by Li and Greene.
A few noteworthy features
-the signal is proportional to B and so not due to superconducting fluctuations
-the signal is proportional to temperature at low temperatures but has a non-monotonic temperature dependence
-the magnitude of the linear temperature dependence is an order of magnitude smaller than predicted by the simple quasi-particle theory of Behnia [the relevant estimate is sketched below this list]
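For reference, the quasi-particle estimate in question is usually quoted, up to factors of order unity, as

\frac{\nu}{T} \simeq \frac{\pi^2}{3} \, \frac{k_B}{e} \, \frac{\mu}{T_F} ,

where \mu is the carrier mobility and T_F the Fermi temperature; the precise prefactor should be checked against Behnia's review.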

The authors consider how the data can be explained by a two-band model with both electrons and holes, but point out that such a model is inconsistent with the single hole Fermi surface seen in ARPES.

It would be interesting to reconsider these data in light of the recent experiments on these materials which showed a linear temperature dependence of the resistivity (and thus of the quasi-particle scattering rate) with a magnitude proportional to Tc [as in overdoped hole-doped cuprates].

Wednesday, February 15, 2012

Will a chemist ever win the Nobel Prize in Physics?

There is an interesting editorial by Roald Hoffmann, What, Another Nobel Prize in Chemistry to a Nonchemist?, in the latest issue of Angewandte Chemie International Edition.

Hoffmann thoughtfully argues that chemists should not be upset [some are] that the chemistry prize seems to be increasingly awarded to people from outside chemistry departments [especially biochemistry and molecular biology, but also physics and materials science].

He also asks the interesting question: will a chemist ever win the Nobel Prize in Physics? He argues that Bednorz and Müller, who discovered superconductivity in cuprate compounds, might be considered chemists. I don't buy that. Their education, employment, and publications were clearly in physics.

I welcome possible answers to Hoffmann's question.
My answer might be: in principle, yes; but in practice no. I think this may be partly because of the arrogant reductionism of influential parts of the physics community.
Possible areas that impact physics and to which chemists make important contributions include the synthesis of new materials with exotic ground states, single-molecule electronics, single-molecule spectroscopy, glasses, ....

I thank Seth Olsen for bringing the article to my attention.

Monday, February 13, 2012

Conflicted interests

Today's Australian newspaper has an article on page 3, `Campaign' targets depression guru Ian Hickie. Hickie is a psychiatrist who is the Australian government's Mental Health Commissioner and has influenced significant changes in government policy. My limited perspective is that many of these changes are positive. They have helped reduce the social stigmas associated with mental health problems and made treatments more accessible. However, a major beneficiary of these changes has been the drug companies. Furthermore, there are questions about whether anti-depressants are being over-prescribed and cognitive therapies under-valued, both to the detriment of patients' best interests.


Hickie is being criticised because he published a paper in The Lancet commending a drug produced by a company to which he had financial and grant ties. The methodology and conclusions of the paper are also being questioned.


I believe that when there are large amounts of money (company profits and/or grant funding) and power (prizes and careers) at stake it is hard for the beneficiaries to be objective and perform the best science. That is human nature! This is regardless of the sincerity and best intentions of investigators and of public declarations of possible conflicts of interest. Adding politics and government policy to the mix makes things even more problematic.

Saturday, February 11, 2012

Against joint theory-experiment papers

In an earlier post I argued against experimentalists feeling a compulsion to present a theoretical "explanation" for their data. In a similar vein, I should also have expressed concern about how many theory papers waffle on about how the calculations presented are relevant to specific experiments.

Reading through More and Different, I found an interesting Physics Today column that Phil Anderson wrote in 1990, Solid State Experimentalists: Theory should be on tap, not on top.
He argues for the reinstatement of Cornelius Gorter's rule that theory and experiment should be published in separate papers. After all, it is quite possible that they will not both be correct.
Much more serious is the distortion of priorities, of communication and of the refereeing process that occurs when excessive weight is given to theoretical interpretation. We don't want to lose sight of the fundamental fact that the most important experimental results are precisely those that do not have a theoretical interpretation; the least important are often those that confirm theory to many significant figures.
Although written in the early days of high-Tc superconductivity, Anderson's arguments and concerns seem just as relevant today.

Friday, February 10, 2012

Pauling points II

Bigger is not always better!

Previously I posted about Pauling points in quantum chemistry: as one increases the size or sophistication of a calculation [e.g. through a larger basis set, a higher order of perturbation theory, or a larger active space] one does not necessarily get closer to the correct answer. Sometimes the quality of the results degrades as the number of degrees of freedom increases.

My colleague Seth Olsen has just published a nice paper in J. Phys. Chem. A, A Four-Electron, Three-Orbital Model for the Low-Energy Electronic Structure of Cationic Diarylmethanes: Notes on a ‘Pauling Point’. For the specific case of the dye Michler's Hydrol Blue he systematically increases the size of the active space and observes the effect on the two lowest-lying excited singlet states. One has to increase the active space from 3 orbitals to 15 before one recovers the same transition difference densities. This means increasing the number of Slater determinants by 6 orders of magnitude!
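To get a feel for the combinatorics, here is a minimal sketch; the electron counts assigned to the larger active space are my assumption (roughly one electron per orbital), not numbers taken from the paper.

```python
# How quickly the number of Slater determinants grows with the active space.
# The electron count for the larger space is an assumption (roughly one
# electron per orbital); the exact counts in the paper may differ.
from math import comb

def n_determinants(n_electrons, n_orbitals):
    """Number of determinants in a CAS(n_electrons, n_orbitals) with S_z = 0."""
    n_alpha = n_electrons // 2
    n_beta = n_electrons - n_alpha
    return comb(n_orbitals, n_alpha) * comb(n_orbitals, n_beta)

small = n_determinants(4, 3)     # CAS(4,3): 9 determinants
large = n_determinants(16, 15)   # CAS(16,15): ~4 x 10^7 determinants
print(small, large, large / small)   # ratio ~5 x 10^6, i.e. 6-7 orders of magnitude
```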

Thursday, February 9, 2012

Taking control of publishing

The issue of scientists' disgruntlement with commercial publishers has made it into The Economist. The excellent article The Price of Information describes a campaign, started by mathematicians, to boycott Elsevier journals.

I thank Seth Olsen for bringing the article to my attention.

Is there a dynamical particle-hole asymmetry in the cuprates?

There is a very interesting preprint Dynamical particle-hole asymmetry in the cuprate superconductors by Sriram Shastry.
Most theories of the metallic state (including Fermi liquid and marginal Fermi liquid theories) assume/assert/embody that adding electrons or holes to the ground state requires the same amount of energy. In a similar vein, quasi-particle peaks in spectral functions should be symmetric.
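In other words, dynamical particle-hole symmetry requires the single-particle density of states to satisfy

N(\omega) = N(-\omega),

with \omega measured from the chemical potential; since the STM differential conductance is roughly dI/dV \propto N(eV), the tunneling background should then be symmetric about V = 0.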

There are two important exceptions, i.e. theories which do not assert dynamical particle-hole symmetry:

The hidden Fermi liquid theory of Anderson

The extremely correlated Fermi liquid theory of Shastry

Shastry points out that there is already some experimental evidence for asymmetry. The figure below [taken from a Science paper by Pasupathy et al. about a different issue] shows the differential conductance [roughly proportional to the local density of states] from an STM measurement on an overdoped sample of BSCCO. The relevant point is that the background has a significant slope. If there were particle-hole symmetry the background would be flat, as it is for simple metals.
This slope is much larger than the small asymmetry expected from the band structure, which arises due to the proximity of the Fermi energy to a van Hove singularity.
In light of the theoretical issues discussed above this seems to be an important result and needs to be checked.

There are a few puzzles.
1. Earlier tunneling experiments do not see such a large asymmetry.
2. As the doping decreases the asymmetry does not increase, contrary to what I would have expected, since the system is effectively becoming more correlated.
3. The above data show no sign of a van Hove singularity.

I thank Jure Kokalj for helpful discussions about this topic.

Wednesday, February 8, 2012

Who cares about crystal violet?

Physicists might think Crystal Violet is just some obscure dye, which has been used at times to treat sexually transmitted diseases. However, the dye is not only of industrial and historical significance; it still presents some fundamental scientific questions.

In vacuum the molecule has three possible Lewis structures (valence bond structures). One is shown above. The others involve the positive charge being on one of the two other nitrogen atoms. The molecule should have D_3h symmetry, and the ground state should be non-degenerate and the first excited state two-fold degenerate.
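A minimal way to see this [the matrix element here is purely illustrative]: represent the three equivalent valence bond structures by states |1>, |2>, |3> with a common energy E_0 and an equal coupling -t (t > 0) between each pair,

H = E_0 \, \mathbb{1} - t \left( |1\rangle\langle 2| + |2\rangle\langle 3| + |3\rangle\langle 1| + \mathrm{h.c.} \right).

The eigenvalues are E_0 - 2t (the totally symmetric combination, non-degenerate) and E_0 + t (doubly degenerate), consistent with a non-degenerate ground state and a two-fold degenerate first excited state in D_3h symmetry.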

However, in 1942 G.N. Lewis pointed out that the optical absorption spectrum showed a shoulder and suggested that at room temperature two structural isomers may be accessible.

A fascinating article, Crystal violet’s shoulder, by Scott Lovell, Brian J. Marquardt, and Bart Kahr, reviews how the question of whether the shoulder arose from two "ground" states or two closely spaced excited states was still not resolved in 1999.
The main possible source of symmetry breaking would be the local environment (solvent, counter-ions, neighbouring CV molecules, ...).
The excited state symmetry could be reduced by a Jahn-Teller effect.

As an aside I give two fascinating quotes from the article:
as the active ingredient in carbon paper it [Crystal Violet] has stained more hands than any other compound...
The determination of the X-ray structure of CV+Cl- was first attempted in 1943 by Krebs, a wartime worker at the I.G. Farben Oppau explosives factory which was eventually bombed by the Allied Forces. On 25 March 1945, British and American investigators organized bands of local Germans to dig through the factory rubble in order to retrieve scientific documents that might aid the Allies in the defeat of Japan. During this search they unearthed a report by Krebs describing his preliminary crystallographic studies, dated 15 June 1943. It was cataloged with the reports of the British Intelligence Objectives Sub-committee (BIOS), one of several special scientific intelligence units formed to collect technical information of military value from the Germans. BIOS microfilms can be obtained from the US Library of Congress.
A 2010 JACS paper claims to finally show that the shoulder is due to two excited states whose degeneracy is lifted by the solvent.

First Hyperpolarizability Dispersion of the Octupolar Molecule Crystal Violet: Multiple Resonances and Vibrational and Solvation Effects
Jochen Campo, Anna Painelli, Francesca Terenziani, Tanguy Van Regemorter, David Beljonne, Etienne Goovaerts and Wim Wenseleers

A key aspect of the article is the use of an effective Hamiltonian acting on the relevant valence bond structures...
but more on that later...

Tuesday, February 7, 2012

Seeing the outcome of an encounter with a conical intersection


There is a nice JACS paper Population Branching in the Conical Intersection of the Retinal Chromophore Revealed by Multipulse Ultrafast Optical Spectroscopy
by Goran Zgrablić, Anna Maria Novello, and Fulvio Parmigiani.

It looks at how the polarity of a solvent changes the outcome of the photo-isomerisation (trans-cis) reaction of a retinal molecule. A similar reaction is responsible for vision: it seems that the protein environment is key to the speed and selectivity of the reaction.

This experimental study finds that as one increases the polarity of the solvent (as measured by the static dielectric constant) the branching ratio for the blue to green/purple curves in the figure above increases.

This is argued to be broadly consistent with a theoretical study published earlier this year, Dynamical Friction Effects on the Photoisomerization of a Model Protonated Schiff Base in Solution, by João Pedro Malhado, Riccardo Spezia, and James T. Hynes.

A key insight seems to be the schematic figure below, which contrasts the cases of water (top) and acetonitrile (bottom). It shows the ground and excited state PESs as a function of the twist angle (torsion) associated with cis-trans isomerisation.
Water is claimed to have "fast" dynamics, which increases the probability of a direct transition to the lower surface. In acetonitrile the system can stay longer on the excited state surface and away from the conical intersection seam, allowing it to follow the dashed trajectory above, leading to greater formation of the cis isomer.

However, the authors point out that their study involves several key debatable assumptions, including
  • nuclear and solvent dynamics can be modelled by a Generalised Langevin Equation (GLE) [a generic form is sketched after this list]; this means the dynamics is classical (I am not sure to what extent it is overdamped)
  • non-adiabatic transitions between the excited and ground state potential energy surfaces (PESs) can be modelled by the surface hopping algorithm of Tully
  • a simple parametrisation of the PESs
  • rough estimates for the frictional parameters for the GLE
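For reference, a generic GLE for the torsional coordinate \theta has the form

I \, \ddot{\theta}(t) = -\frac{\partial V}{\partial \theta} - \int_0^t dt' \, \gamma(t-t') \, \dot{\theta}(t') + R(t), \qquad \langle R(t) R(t') \rangle = k_B T \, \gamma(t-t'),

where the second relation is the fluctuation-dissipation theorem connecting the random force R to the friction kernel \gamma. This is only the generic form; the specific potentials and friction kernels used by Malhado et al. are parametrised as described in their paper.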
To me, all this underscores that a quantitative description of the quantum dynamics near a conical intersection in the presence of a dissipative environment is a basic and important outstanding problem.

Monday, February 6, 2012

The grand challenge of science in the developing world

The two faces of chemistry in the developing world is an interesting Commentary in Nature Chemistry by the legendary C.N.R. Rao.
While acknowledging the great advances that India, China, and Brazil have made over the past few decades, he stresses the incredible challenges facing the least developed countries.

There is one statement in the article I disagree with. He states:
I do not know of a single institution in India that is comparable to any of the best institutions in the advanced countries. 
I think that, at least in condensed matter physics and chemistry (physical, theoretical, materials), I would rank the Indian Institute of Science (IISc) in Bangalore ahead of most Australian universities. Some of my Indian colleagues tell me this strength is largely a legacy of C.N.R. Rao. So perhaps he is being modest about his contributions.
Unfortunately, though, this favourable comparison also exposes the weakness of Australian universities....

Friday, February 3, 2012

Befuddled by planckian dissipation

I am rather confused by a Nature News and Views article, Why the temperature is so high?, by Jan Zaanen. A key claim in the article is that in quantum systems there is a fundamental lower limit to the "dissipation time" of
 h/(2 pi k_B T).
(An earlier post on this article gives more background).

Here are some of Zaanen's claims:
In fact, according to the laws of quantum physics, it is impossible for any form of matter to dissipate more than these metals do....
the laws of quantum physics forbid the dissipation time to be any shorter at a given temperature than it is in the high-temperature superconductors. If the timescale were shorter, the motions in the superfluid would become purely quantum mechanical, like motion at zero temperature, and energy could not be turned into heat. In analogy with gravity, this timescale could be called the 'Planck scale' of dissipation (or 'planckian dissipation'). 
It is not clear if these statements concern dissipation of the energy of quasi-particles and whether or not they are fermionic.
Furthermore, I can find no reference which supports this claim that "the laws of quantum mechanics" require such a fundamental limit.
It seems to me that, if this were true, one could not have an electron-phonon system with a dimensionless coupling constant lambda larger than one. [Above the Debye temperature the electron scattering rate (times hbar) is roughly lambda k_B T.]
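To make the comparison explicit (restoring k_B and ignoring numerical factors of order one): the claimed bound is

\frac{\hbar}{\tau} \lesssim k_B T ,

whereas the standard electron-phonon result above the Debye temperature is \hbar/\tau \simeq 2\pi \lambda k_B T [often quoted, dropping the 2\pi, as simply \lambda k_B T]. So any material with \lambda of order one or larger would appear to violate the claimed bound.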

I welcome clarifying comments.

Thursday, February 2, 2012

Teaching thermo without stat mech

For the past ten years I have been involved in teaching a course, "Thermodynamics and Condensed Matter Physics", to second-year physics majors. In the past I have posted some of the lectures covering the second half of the course, applying thermodynamics to condensed matter. I did not design the course but inherited it. [For departmental historical/political/personal reasons the condensed matter part was originally actually crystallography!]

I quite like the course because I believe (unlike many people) that students should learn and master macroscopic thermodynamics before they learn statistical mechanics. Entropy should be introduced as a measure of the relative accessibility of the equilibrium (macroscopic) states in an adiabatically isolated system (i.e. irreversibility) rather than as being related to the number of possible microstates of a system.

However, finding a textbook is not easy. Students find Callen, the one originally used for the course, too difficult, and it is very expensive. We have tried using the first few chapters of Atkins' Physical Chemistry. The publisher provides .ppt slides of all the figures, which is useful. However, it is a chemistry book, it is expensive and very heavy, and the Solutions Manual is full of errors [in spite of the book being in its ninth edition].

So here is our plan (due to my co-lecturer Joel Corney) for this year: to follow An Introduction to Thermal Physics by Schroeder, chapters 1-5, but skip the sections which give a microscopic treatment of entropy. That material will be replaced with some readings from Atkins.

The best part of Schroeder's book is the extensive set of problems. They are very interesting, covering a diverse range of topics from black holes to meteorology, and there is a great Solutions Manual! I have used the book before to teach statistical mechanics.

I welcome suggestions about other texts.

Wednesday, February 1, 2012

Have journals become redundant and counter productive?

"Is this [publication in high profile journals], then, an efficient way to marshal evidence in support of grant proposals, appointments, promotions or fellowships? Of course not - it is madness. The time is overdue to abolish journals and reorganise the way we do business"

David Mermin, Publishing in Computopia, Physics Today, May 1991

[This was written a few months before the arXiv started. See also the follow-up column, What's wrong in Computopia?, from April 1992, after the arXiv had started.]

Given the way the world is today, rather than one hundred years ago, if we wanted to design the most effective way to promote good science, would we come up with the current system of scientific journals?

Keep in mind the following:

Journals consume vast amounts of resources including
- as much as 60 per cent of the total budget of many university libraries
  [see Academic publishers make Murdoch look like a socialist].
- the time of referees and editorial board members

The "best" journals such as Science and Nature
-skew fields and papers towards `sexy' and grandiose claims
-fail to stop the publication of fraudulent science, such as that of Hendrik Schön, or of papers which fit 20-plus data points to a curve with 17 parameters

The arXiv has now been going for twenty years [see this commentary by the founder Paul Ginsparg in Nature last year]. Functionally, physicists almost exclusively use it to obtain and distribute scientific information independent of the peer review, ranking, and impact factors of journals. It is hard to make a case that this has led to a decline in scientific standards in the physics community. [Why is there no arXiv in chemistry? has no clear answer].

When using the arXiv, I anticipate people make their own judgements of the importance, validity, and significance of papers largely based on the actual scientific content of the paper. I suspect the reputation of the authors also comes into play at times. But I would contend that reputations are still largely based on actual scientific track record rather than on more debatable criteria such as numbers of Nature papers.

So why do journals still exist? Perhaps the two main reasons are:
- the vested interests from publishers
- the institutional inertia within funding agencies and employers who still consider them essential for ranking applicants for funding, jobs, or promotion. One might argue that this inertia is driven by intellectual laziness (i.e. bean counting).

In an ideal world, abolition of journals would free up significant resources that could be used instead to
-employ more people to actually do the science, including writing good review articles that meaningfully assess, filter, and critique the vast seas of scientific information
-provide time for people to make assessments of applicants based on their actual scientific achievements rather than their performance on metrics.

But, how do we get there? Will we ever?

From Leo Szilard to the Tasmanian wilderness

Richard Flanagan is an esteemed Australian writer. My son recently gave our family a copy of Flanagan's recent book, Question 7. It is...