Tuesday, March 23, 2010

Error bars please

When we teach undergraduates laboratory subjects we are always on the case of the students about using the appropriate number of significant figures and always quoting error bars. However, it seems the scientific community does not apply the same criteria to itself when quoting and comparing quantities such as:
  • average scores on student evaluations of teaching
  • impact factors of journals
  • citation rates per paper
For example, consider the graph below of the impact factors of physical chemistry journals from 2001.


This site reports "large changes" in the impact factors of chemistry journals from 2007 to 2008. It reports that the impact factor of ChemPhysChem changed from 3.502 to 3.636, and that of J. Phys. Chem. B changed from 4.086 to 4.189.

This suggests to me that most of these comparisons are in the noise.
I would suggest that what we would tell our first year undergrads to do with data like this is something like:
"take the data from 5 years, average it and find the standard deviation. use the latter as your error."

This would lead to a conclusion something like, "The impact factor of J. Phys. Chem. B is 4.0 +/- 0.2 and that of ChemPhysChem is 3.7 +/- 0.3. This is consistent with the theory that these journals are of comparable quality...."
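The prescription above is easy to sketch in code. A minimal example follows; note that the five-year series below is hypothetical, chosen only to resemble the figures quoted in the post, not taken from any real impact-factor report.

```python
import statistics

def summarise(values):
    """Return (mean, sample standard deviation) for a series of yearly values."""
    return statistics.mean(values), statistics.stdev(values)

# Hypothetical five-year impact-factor series for a journal
yearly_impact_factors = [4.0, 4.1, 4.086, 4.189, 3.9]

mean, err = summarise(yearly_impact_factors)
print(f"impact factor = {mean:.1f} +/- {err:.1f}")
```

With noise quoted this way, two journals whose intervals overlap cannot honestly be ranked against each other.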

Furthermore, we would keep sending the students back to their lab books, or marking them down, until they did. So let's get rid of the double standard!

4 comments:

  1. When I was a postgrad and a tutor we had to do teaching evaluations for the tutorials we gave in the maths dept. The academic overseeing the tutors gave them back to us with a thorough error analysis. His conclusion was that the teaching evaluations were so noisy they were incapable of telling the difference between good and bad tutors! I remember this whenever I get bad evaluations. On the other hand, good evaluations are clearly the result of my hard work and excellence in teaching.

    ReplyDelete
  2. Could you recommend a good book or monograph that discusses this? I don't know about students at UQ but it seems that the last time I had a formal class on sig. figs. I was about 10 years old. After more than a decade exposed to quantum chemistry, I could probably use some remedial training...

    ReplyDelete
  3. I think one needs to remember that it's not the standard deviation that is the measure of error, but the standard error of the mean (s/sqrt(n), where s is the sample standard deviation). This will tell you how well we know the value of the mean.
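    The commenter's point can be illustrated numerically (again with hypothetical yearly values): the standard error of the mean is smaller than the standard deviation by a factor of sqrt(n), so it is the appropriate error bar on the five-year average itself.

    ```python
    import math
    import statistics

    # Hypothetical yearly impact factors
    values = [4.0, 4.1, 4.086, 4.189, 3.9]

    s = statistics.stdev(values)       # sample standard deviation
    sem = s / math.sqrt(len(values))   # standard error of the mean

    print(f"std = {s:.3f}, standard error of the mean = {sem:.3f}")
    ```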

    ReplyDelete
  4. I don't think in the case of impact factor it is correct to take your sample to be the impact factors of different years. The impact factor in each year is calculated from a dataset, and the error should be determined for each year from that dataset. In other words, impact factors in different years are not repeated measurements of the same quantity, so it doesn't make sense to average over them to get the error.

    ReplyDelete