Wednesday, December 28, 2011

Careers should be driven by scientific reality, not metrics

More and Different by Phil Anderson is stimulating and challenging holiday reading. Here is a paragraph from an essay, "Reflections on Twentieth Century Physics," which considers how research careers changed from the 1950s to the end of the century:
There was a very sharp change in the nature of a research career. The "promising" young scientist's publication rate grew by factors of five to ten; the number of applications a young researcher might make for post-doctoral work or an entry-level position rose from two or three to 50. Senior scientists were overwhelmed with receiving and sending reams of letters of recommendation, which thereupon became meaningless. The numbers of meetings ... grew by factors of ten or more... Most publications became tactical in this game of jockeying one's way to the top; publications in certain prestige journals were seen as essential entry tickets or score counters rather than as serious means of communication. Great numbers of these publications were about simulations of dubious realism or relevance. Essentially, in the early part of the post-war period the career was science-driven, motivated mostly by absorption with the great enterprise of discovery, and by genuine curiosity as to how nature operates. But by the last decade of the century far too many, especially of the young people, were seeing science as a competitive interpersonal game, in which the winner was not the one who was objectively right as to the nature of scientific reality but the one who was successful at getting grants, publishing in PRL, and being noticed in the news pages of Nature, Science, or Physics Today.
More and Different, page 100.

I consider this a painfully accurate description of the current reality. It is striking to see how few papers [one or two per year] the leaders of the field published back in the 1950s and 60s.

Is there any way out of this undesirable situation? We can't turn back the clock. However, we can all [both junior and senior scientists] exercise some critical judgement and self-control, and not be completely conformed to the system and to our colleagues.
Keep the science at the forefront of our minds and preoccupations; don't jump on the latest bandwagon or become obsessed with the latest "metric" of productivity.

Photo is Anderson and Richards looking at apparatus for experiments on superfluid 4He at Bell Labs (in the 1960s), taken from the AIP archives.


  1. How do you reconcile these statements with those made earlier on this blog regarding the use of H-indices when evaluating applications from junior researchers?

  2. I do not think the h-index is of much value for evaluating junior scientists. Sorry if I ever gave that impression.
    Hirsch designed it to evaluate "life-time" achievement. I do personally find it helpful as a filter for the pre-evaluation of large numbers of applications from senior and mid-career scientists. It gives me some idea of which applications I should consider more seriously and carefully than others.

    The original post is below

  3. My question is not why scientific careers are now like this, but why they were ever not. A theoretical physics professor could easily graduate 20 PhD students before retiring. This means that, on average, there is only one faculty opening available for every O(20) PhD students. Even if some PhD students don't plan on an academic career (though in my experience most physicists, unlike many engineers, do), and the field is growing, so that there are newly created positions as well as replacement openings, there still seems to be a huge excess of capable young scientists over jobs for them all. I think a tendency to hyperfocus on what is easily measured is inevitable in such a situation, to give oneself any chance of finding a job in a crowded market.

    Is this background situation going to change? Not in any hurry, I believe. Life as a PhD student is, overall, pleasant enough that there will be plenty of applicants to fill the positions offered. PhD students are valuable enough to universities that there will always be many positions available. So I think we'll be dealing with the current pigeonhole problem for many years to come.

    So, given the large number of applicants for academic positions, can we do better than encouraging everyone to compete against each other in certain metrics and hence devalue the metrics? I'm optimistic enough to think it's possible. Fields of science are small enough that a small group of committed people can make a change, if they work out what change is needed. Is it possible for faculty to make hiring decisions based on more substantial estimations of applicants' scientific ability than how many papers they've published in which journals? I would hope so!