Wednesday, February 22, 2012

How many metrics do you need?

Different metrics claiming to measure research impact, such as the h-index, are receiving increasing prominence in grant and job applications. I have written before that I think they have some merit as a blunt instrument to filter applications, particularly for people at later career stages.
However, I am noticing an increasingly silly trend to cite a whole range of metrics, where I think one or two (probably the h-index and the m-index = h-index / number of years since Ph.D.) would suffice. I have seen not just a paragraph but a whole page! of analysis citing all sorts of metrics [e.g. comparing an author's citation rate for a particular journal to the journal impact factor, numbers of papers with more than 50 citations, citation rate relative to others in the field, ... the list goes on and on...]. Don't people have better things to do with their time?
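For concreteness, the two metrics I would keep are trivial to compute from a list of per-paper citation counts. A minimal sketch (the citation counts and years since Ph.D. below are made up for illustration):

```python
def h_index(citations):
    """Largest h such that the author has h papers with >= h citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i  # the i-th most-cited paper still has >= i citations
        else:
            break
    return h

# Hypothetical citation counts for one author's papers
citations = [42, 17, 10, 9, 6, 3, 1]
h = h_index(citations)           # h = 5: five papers with at least 5 citations
years_since_phd = 12             # hypothetical
m_index = h / years_since_phd    # roughly 0.42
```

The m-index simply normalises the h-index by career length, which is why the pair already covers both productivity and seniority.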

In the end it becomes like university rankings. Every university seems to cite the ranking in which it places highest.

1 comment:

  1. Here's an alternative proposal (tongue-in-cheek only for implementation reasons). All Ph.D. graduates are entitled to maintain a list of people they consider to be good scientists, preferably categorised (by subfield, career stage, geography, or anything else that matters). Then, just feed it into the algorithm Google uses. Assuming that good scientists are also good judges of scientists, this will come up with a suitable list, and one can rank scientists by the number and rating of their "incoming links"!

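The commenter's scheme is essentially PageRank run over an endorsement graph. A minimal sketch of that idea (the scientists and endorsements below are entirely hypothetical, and this is the textbook power-iteration form of PageRank, not anything Google actually exposes):

```python
def pagerank(endorsements, d=0.85, iterations=50):
    """Rank nodes of an endorsement graph by PageRank.

    endorsements: dict mapping each scientist to the list of
    scientists they endorse (their "outgoing links").
    """
    scientists = list(endorsements)
    n = len(scientists)
    rank = {s: 1.0 / n for s in scientists}
    for _ in range(iterations):
        new_rank = {s: (1 - d) / n for s in scientists}
        for s, endorsed in endorsements.items():
            if endorsed:
                share = d * rank[s] / len(endorsed)
                for t in endorsed:
                    new_rank[t] += share
            else:
                # Dangling node: spread its rank evenly over everyone
                for t in scientists:
                    new_rank[t] += d * rank[s] / n
        rank = new_rank
    return rank

# Toy endorsement graph (made up): C is widely endorsed
graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
ranks = pagerank(graph)
```

On this toy graph the widely endorsed scientist C ends up with the highest score, which is exactly the "number and rating of incoming links" behaviour the commenter describes.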
