He has an excellent piece, The High-Impact-Factor Syndrome, on The Back Page of APS News, the American Physical Society's news publication.
Before giving a response, I highlight some noteworthy sentences. I encourage reading the article in full, particularly the concrete recommendations at the end.
... if you think it's only a nightmare ... you need to wake up. Increasingly, scientists, especially junior scientists, are being evaluated in terms of the number of publications they have in HIF journals, a practice I call high-impact-factor syndrome (HIFS).
I’ve talked to enough people to learn that HIFS is less prevalent in physics and the other hard sciences than in biology and the biomedical sciences and also is less prevalent in North America than in Europe, East Asia, and Australia. For many readers, therefore, this article might be a wake-up call; if so, keep in mind that your colleagues elsewhere and in other disciplines might already have severe cases. Moreover, most physicists I talk to have at least a mild form of the disease.
Do you have HIFS? Here is a simple test. You are given a list of publications, rank-ordered by number of citations, for two physicists working in the same sub-discipline. All of the first physicist’s publications are in PRL and PRA, and all of the second’s are in Nature and Nature Physics. In terms of the citation numbers and publication dates, the two publication records are identical. You are asked which physicist has had more impact.
If you have even the slightest inclination to give the nod to the second physicist, you are suffering from HIFS.
In the case of junior scientists, the situation is more complicated [than for senior scientists]. Their publication records are thinner and more recent. The focus shifts from evaluating accomplishment to trying to extract from the record some measure of potential. ... even if you think publication in HIF journals is informative, it is not remotely as instructive as evaluation of the full record, which includes the actual research papers and the research they report, plus letters of recommendation, research presentations, and interviews. When HIFS intrudes into this evaluation, it amounts to devaluing a difficult, time-consuming, admittedly imperfect process in favor of an easy, marginally informative proxy whose only claim on our attention is that it is objective.
Relying on HIF leads to poor decisions, and the worse and more frequent such decisions are, the more they reinforce the HIFS-induced incentive structure. As physicists, we should know better. We know data must be treated with respect and not be pushed to disclose information it doesn’t have, and we know that just because a number is objective doesn’t mean it is meaningful or informative.
Even more pernicious than applying HIFS to individuals is the influence it exerts on the way we practice physics. Social scientists call this Campbell’s law: “The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.” This social-science law is nearly as ironclad as a physical law. In the case of HIFS, there will be gaming of the system. Moreover, our research agenda will change: If rewards flow to those who publish in HIF journals, we will move toward doing the research favored by those journals.
No matter how highly you think of the editors of the HIF journals, they are independent of and unaccountable to the research community, they do not represent the entire range of research in the sciences or in physics, and their decisions are inevitably colored by what sells their magazines.

I just list a few of the concrete and excellent recommendations Carl makes. We are not helpless.
What to do?
Educate administrators that the HIF shortcut, though not devoid of information, is only marginally useful. For any scientist, junior or senior, an evaluation of research potential and accomplishment requires a careful consideration of the scientist's entire record. A good administrator doesn't need to be taught this, so this might be a mechanism for identifying and weeding out defective administrators.
Take a look at the San Francisco Declaration on Research Assessment (DORA), which is aimed directly at combating HIFS. Consider adopting its principles and signing the declaration yourself. DORA comes out of the biosciences; signing might help bioscientists put out the fire that is raging through their disciplines and could help to prevent the smoldering in physics from bursting into flame.
Include in ads for positions at your institution a standard statement along the following lines: “Number of publications in high-impact-factor journals will not be a factor in assessing research accomplishments or potential.”
Adopting this final recommendation would send an unambiguous message to everybody concerned: applicants, letter writers, evaluators, and administrators. Making it a commonplace could, I believe, actually change things.

I agree with all of the above. However, I do differ with Carl on a few points. These disagreements are a matter of degree.
1. I groaned when I saw that he lists the impact factors of journals to four and five significant figures. Surely two or three is appropriate.
2. I am more skeptical than Carl about using citations to aid decisions. I think they are almost meaningless for young scientists. They can be of some use as a filter for senior people, but I also think they largely tell us things we already know.
3. Carl suggests we need to recommit to finding the best vehicle to communicate our science and to select journals accordingly. I think this is irrelevant. We know the answer: post the paper on the arXiv and possibly email a copy to 5-10 people who may be particularly interested. I argued previously that journals have become irrelevant to science communication. They continue to exist only because of institutional inertia and their role in career evaluation.
4. My biggest concern with HIFS is not its pernicious and disturbing impact on careers but its real negative impact on the quality of science.
This is the same concern as that of Randy Schekman.
Simply put, luxury journals encourage and reward crappy science. Specifically, they promote speculative and unjustified claims and hype. "Quantum biology" is a specific example; just a few instances are here, here, and here.
Why does this matter?
Science is about truth. It is about careful and systematic work that considers different possible explanations and does not claim more than the data actually show.
Once a bad paper gets published in a luxury journal, it attracts attention.
Some people just ignore it, with a smirk or grimace.
Others waste precious time, grant money, and student and postdoc lives showing the paper is wrong.
Others actually believe it uncritically. They jump on the fashionable bandwagon. They continue to promote the ideas until they move on to the next dubious fashion. They also waste precious resources.
The media believes the hype. You then get bizarre things like The Economist listing a book on quantum biology among its top five Science and Technology books of the year (groan...).