Thursday, November 29, 2012

Impact factors have no impact on me

There seems to be a common view that on CVs (and grant applications) people should list the Impact Factor of each journal in which they have a paper.
To me this "information" is just noise and clutter.
I do not include it in my own CV or grant applications.
Why?

1. IFs just encode something I know already.
Nature > Science > PRL ~ JACS > Phys. Rev B ~ J. Chem. Phys. > Physica B ~ Int. J. Mod. Phys. B > Proceedings of the Royal Society of Queensland .....

2. There is a large random element in whether an individual paper succeeds or fails to get published in a high-profile journal, e.g., who the referees happen to be.

3. The average citation rate of a journal is not a good measure of the significance of a specific paper; the variance is large. What really matters is how much YOUR/MY specific paper in that journal is cited in the long term. Unfortunately, in most cases that is hard to know in less than 3-5 years.

4. Crap papers can get published in Nature and Science. Hendrik Schön published almost 20 papers in Nature and Science, which were subsequently retracted after his data fabrication was exposed. On the other hand, Nobel Prize winning papers are sometimes published in Phys. Rev. B (e.g., giant magnetoresistance).

5. I don't need to know the actual IF of a journal with an impact factor of one or less to know that it is a rubbish journal. I already know that because I virtually never read papers in such journals: they almost never contain anything significant, interesting, or valid. My "random" meanderings through the literature virtually never lead me there.

6. I remain to be convinced that reporting IFs to more than two significant figures, and without error bars, is meaningful. (The toy sketch after this list illustrates why.)
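
To make points 3 and 6 concrete, here is a toy simulation. It is my own illustration with made-up parameters, not real journal data: per-paper citation counts are highly skewed, so the journal mean (roughly what the IF reports) describes almost no individual paper, and its own statistical uncertainty makes the extra significant figures meaningless.

```python
# Toy model: a hypothetical journal whose per-paper citation counts are
# drawn from a skewed (lognormal) distribution. Parameters are invented.
import random
import statistics

random.seed(0)
papers = [random.lognormvariate(1.0, 1.2) for _ in range(500)]

mean = statistics.mean(papers)            # roughly what an IF reports
median = statistics.median(papers)        # the "typical" paper
stderr = statistics.stdev(papers) / len(papers) ** 0.5

print(f"mean (IF-like):   {mean:.2f} +/- {stderr:.2f}")
print(f"median paper:     {median:.2f}")
print(f"share below mean: {sum(c < mean for c in papers) / len(papers):.0%}")
```

On a run like this the mean sits well above the median and roughly 70% of papers fall below it, while the standard error alone is several tenths, swamping the third significant figure.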

I fail to see how alternative metrics such as the Eigenfactor resolve the above objections.

The only value I see in IFs is in helping librarians compile draft lists of journal subscriptions to cancel in order to save money.

I am skeptical that IFs are useful for comparing the research performance of people in different fields (e.g. biology vs. civil engineering vs. psychology vs. chemistry).

And in the end... what really matters is whether the paper contains interesting, significant, and valid results... Actually looking at some of an applicant's papers and critically evaluating them is the best "metric". But that requires effort and thought...

7 comments:

  1. This is helpful, Ross.

    But surely the average IFs of the good journals in different fields are helpful.

    Math is the standard example of extremely low average citations, and (for me) biology is the opposite.

    If you had to (for some reason) compare mathematicians and biologists, wouldn't this info be helpful?

    Otherwise I agree, especially given the huge random factor in getting accepted by a good journal. I hate that my career depends, in whatever small or large part, on having PRLs, and that people refer to it as a lottery. That's not right.

    Replies
    1. Occasionally one does have to compare mathematicians and biologists (e.g., if they both apply for a Research Fellowship open to all departments in the University). Perhaps knowing the relative IFs of the best maths and the best biology journals would help to renormalise candidates' citations for comparing them; a toy sketch of that idea follows below.
      However, a committee member can easily access this information. There is no need for applicants to clutter up their applications with all that noise.
      Furthermore, I would hope that in such a situation letters of reference would far outweigh such dubious comparisons of metrics.
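
      To be concrete, the renormalisation just mentioned would amount to something like the sketch below. This is my own toy illustration; the field baselines are invented numbers, not real IFs.

```python
# Toy sketch of cross-field renormalisation: divide raw citation counts
# by a field baseline (e.g. the IF of the field's best journal) before
# comparing candidates. Baseline values here are made up for illustration.
FIELD_BASELINE = {"mathematics": 3.0, "biology": 30.0}  # hypothetical

def normalised_citations(citations: float, field: str) -> float:
    """Return citations rescaled by the field's baseline."""
    return citations / FIELD_BASELINE[field]

# A mathematician with 150 citations then compares evenly with a
# biologist with 1500:
print(normalised_citations(150, "mathematics"))  # 50.0
print(normalised_citations(1500, "biology"))     # 50.0
```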

    2. Ross, I agree that IFs don't have a large amount of value. But having been asked to rank research proposals across science disciplines, I find them pretty useful as one point of comparison, especially if discipline context is provided. Also, even across physics, there are sub-disciplines where I have little idea about the relative ranking of journals.

      And, when I have to read twenty applications, I would much prefer to have such basic information as IFs and citations put in front of me in an applicant's list of publications, rather than having to resort to web searches. So personally I recommend people include such information.

  2. I think that the number of papers published and the total number of citations are useful metrics, but impact factors should be taken with a grain of salt, as you point out. If only we could convince funding agencies of this.

    Replies
    1. I think the ratio of the number of citations to the number of papers is more meaningful. If someone has published 200 papers and has 1000 citations (it does happen!) I am not very impressed.

      This is one of the reasons why Jorge Hirsch invented the h-index. I still think that, taken on an appropriate time scale (at least 10+ years) and with other caveats, the h-index is a meaningful coarse-grained measure of scientific contributions. A minimal sketch of the calculation is below.
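
      Here is that sketch, on my reading of Hirsch's definition (the largest h such that h papers each have at least h citations); the citation counts below are invented.

```python
# Minimal h-index sketch: the largest h such that h of the papers
# have at least h citations each. Citation counts are invented.
def h_index(citations):
    """Return the h-index for a list of per-paper citation counts."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

many_thin = [5] * 200                                   # 200 papers, 1000 citations
fewer_deep = [100, 80, 60, 40, 30, 25, 20, 15, 12, 10]  # 10 papers, 392 citations

print(h_index(many_thin))   # 5
print(h_index(fewer_deep))  # 10
```

      The 200-papers-with-1000-citations career above scores only h = 5, while ten well-cited papers score h = 10, which matches the intuition in the comment above.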

  3. I don't think universities are always inclined to hire the best scientists because of the way the funding system works. Instead, it's to their advantage to hire the candidate most likely to receive grant money, i.e. the one who looks best on paper. A candidate who scores well in the metrics that the funding agencies use is literally more valuable than one who doesn't, hence the pressure to highlight these items on the CV. Personally, I feel these metrics detract in the same way that fancy transitions take away from a scientific presentation - too much focus on flash - but maybe that's what the funding agencies respond to.

    So, it's a response to the system.

    Replies
    1. I agree with your concerns and that what is happening is a response to the system.

      But I stand by my main points.
      Furthermore, even if one is concerned with the number of citations of an individual's papers, the Impact Factor is not a helpful metric for predicting that.
