This week I had an interesting experience. I was doing a calculation and comparing my result to experiment. The comparison was poor, with a discrepancy of about a factor of two. This was disappointing, but then I decided that the theory was just too simple and one should not expect anything better than qualitative agreement... I just had to accept this.
But then I found a mistake in my Mathematica code. I realised I had to check everything more carefully... One of my variables I had defined incorrectly. I redid the plot. The agreement of theory and experiment was excellent.
But now there is a real danger: I could stop checking for errors. After all, given that I already found a couple, there may be another one, which would lead to new discrepancies.
I will let you know if I find any. But I have to confess the motivation to find errors is less than it was...
I wonder how often this happens in science. I recall that there are some famous historical examples: over the years the measured values of the speed of light and the charge on the electron have drifted, yet at any particular time people's values have always been within a standard deviation of the latest measurements.
Just remember Feynman's warning: "The easiest person to fool is yourself."
I fear this confirmation bias more than any other error in the scientific method. Solid state physics generally doesn't have the statistical methods to overcome this problem that other fields have (e.g., well-designed medical or astronomical studies, where a specific hypothesis leads to a study design, which leads to statistical conclusions). When results agree with my biases, I don't look too hard for the errors. When they disagree, I find myself finding all the reasons the results may be flawed. I know this is human nature, but is it good science?
J, thanks for your comment and your honesty. Here is what Feynman said about the charge on the electron:
"We have learned a lot from experience about how to handle some of the ways we fool ourselves. One example: Millikan measured the charge on an electron by an experiment with falling oil drops, and got an answer which we now know not to be quite right. It's a little bit off, because he had the incorrect value for the viscosity of air. It's interesting to look at the history of measurements of the charge of the electron, after Millikan. If you plot them as a function of time, you find that one is a little bigger than Millikan's, and the next one's a little bit bigger than that, and the next one's a little bit bigger than that, until finally they settle down to a number which is higher.
Why didn't they discover that the new number was higher right away?
It's a thing that scientists are ashamed of--this history--because it's apparent that people did things like this: When they got a number that was too high above Millikan's, they thought something must be wrong--and they would look for and find a reason why something might be wrong. When they got a number closer to Millikan's value they didn't look so hard. And so they eliminated the numbers that were too far off, and did other things like that.
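The mechanism Feynman describes can be illustrated with a toy simulation (my own hedged sketch, not anything from Feynman or the actual measurement history): suppose each reported value is a weighted average of the raw measurement and the previous consensus, because results far from the consensus get scrutinised away. The reported values then creep slowly toward the true value instead of jumping there. The numbers and the `anchor_weight` parameter below are illustrative assumptions.

```python
import random

def biased_reports(true_value, sigma, n, anchor_weight, seed=0):
    """Simulate sequential measurements with anchoring bias.

    Each 'reported' value is a weighted average of the new raw
    measurement and the previously reported value, mimicking the
    tendency to distrust results far from the existing consensus.
    """
    rng = random.Random(seed)
    reports = []
    previous = true_value * 0.96  # first consensus starts low, like Millikan's value
    for _ in range(n):
        raw = rng.gauss(true_value, sigma)          # an unbiased measurement
        reported = anchor_weight * previous + (1 - anchor_weight) * raw
        reports.append(reported)
        previous = reported                          # new consensus
    return reports

reports = biased_reports(true_value=1.0, sigma=0.01, n=20, anchor_weight=0.8)
# The reports drift gradually upward toward 1.0 rather than scattering
# symmetrically around it from the start.
```

With `anchor_weight = 0` the bias disappears and each report is an honest, noisy measurement; the slow drift is entirely a consequence of weighting the consensus.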
We've learned those tricks nowadays, and now we don't have that kind of a disease."
I wonder if the last comment is facetious.