Sunday, January 16, 2011

Are your results reproducible?

Reproducibility is meant to be a basic tenet of science. Explicitly, a scientific paper is expected to report the results of experiments or calculations in enough detail that another scientist can reproduce them. This may sound very basic, but I fear it is beginning to be lost, particularly in papers that are computationally intensive and involve systems with many degrees of freedom.

A colleague was commenting to me recently that it has taken his group a very long time to reproduce the details of the calculations of another (prominent) group concerning a specific protein. The papers are sparse on details (such as basis sets, convergence criteria, starting geometries, ...). It turned out that the reason the results were hard to reproduce was that the calculations were actually not very well converged.
This should not be. It is no excuse to claim that such details will make the paper too long or will not interest the average reader. Such technical details do not have to be in the paper itself; they can be deposited as supplementary material on journal websites.

So when you are writing a paper, make sure that every parameter you use in any calculation is specified. The same applies to papers involving mostly analytical calculations: all parameters and equations used to produce the graphs need to be specified.
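
One low-tech habit that makes this easier (just a minimal sketch in Python; the file names, parameter names, and values below are purely illustrative, not taken from any particular paper or code) is to have the calculation script write out every input parameter alongside its results, so the record for the supplementary material is generated automatically:

    import json
    from datetime import datetime, timezone

    # Illustrative parameters only; in a real calculation these would be
    # whatever actually controls the run (basis set, convergence criteria,
    # starting geometry, code version, ...).
    parameters = {
        "basis_set": "cc-pVTZ",
        "scf_convergence": 1e-8,          # energy convergence criterion
        "geometry_file": "start_geometry.xyz",
        "code_version": "my_qc_package 3.2",
        "run_date": datetime.now(timezone.utc).isoformat(),
    }

    results = {"total_energy": -76.0570}  # placeholder result

    # Store parameters and results together, so anyone (including you, a
    # year later) can see exactly what produced the numbers in the paper.
    with open("calculation_record.json", "w") as f:
        json.dump({"parameters": parameters, "results": results}, f, indent=2)

Even a record this simple tells another group exactly what to put in their own input files.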

If you are refereeing a paper, make sure it contains enough detail for the results to be reproduced. Otherwise it should not be published.

It is interesting that this point is made by Fritz Schaefer in his requirements for first drafts of papers from his group.

I realise I need to be more diligent about this as well.

3 comments:

  1. I cannot fully agree with your statement. As long as the problem is well posed and the initial assumptions and approximations are given, one cannot require that the authors present all details of their solution, which is often impracticable. The basis sets, convergence criteria, etc. are not part of the problem statement, but only part of its solution. Using another basis set, for example, should not lead to a different answer, as that would mean the solution is simply wrong. And the example you gave seems to refer to such a case, which is an issue of correctness (the authors' mistake), not of reproducibility.

  2. I think the point here is that if the results are that unstable, then nothing can be claimed to have been learned from them (i.e., no science was actually done). In that case, there is no clear mandate to publish the work in a scientific journal in the first place. Furthermore, the instability ITSELF may be an important result, in which case this sloppiness actually hurts science by drawing attention away from a salient issue. Also, the presence of sloppiness in the literature is bad because it becomes self-justifying.

  3. Irresponsibility is much worse than irreproducibility, and the two should be clearly distinguished. Whenever the topic is not very "hot", the reality is that most results will never even be checked or reproduced by others. So any incorrect statement, once published, has a high risk of remaining the accepted truth for years. On the other hand, a missing detail in a correct paper is not such a big deal, as it does not affect the majority of readers and can be resolved simply by e-mailing the author.
