Wednesday, January 10, 2018

Should we be concerned about irreproducible results in condensed matter physics?

The problem of the irreproducibility of many results in psychology and medical research is getting a lot of attention. There is even a Wikipedia page about the Replication Crisis. In the USA the National Academies have just launched a study of the problem.

This naturally raises the question of how big the problem is in physics and chemistry.

One survey showed that many chemists and physicists could not reproduce the results of others.

My anecdotal experience is that, for both experiments and computer simulations, there is a serious problem. Colleagues will often tell me privately that they cannot reproduce the published results of others. Furthermore, this seems to be a particular problem for "high impact" results published in luxury journals. A concrete example is the case of USOs [Unidentified Superconducting Objects]. Here is just one specific case.

A recent paper looks at the problem for the case of a basic measurement in a very popular class of materials.

How Reproducible Are Isotherm Measurements in Metal–Organic Frameworks? 
 Jongwoo Park, Joshua D. Howe, and David S. Sholl
We show that for the well-studied case of CO2 adsorption there are only 15 of the thousands of known MOFs for which enough experiments have been reported to allow strong conclusions to be drawn about the reproducibility of these measurements.
Unlike most university press releases [which are too often full of misleading hype], the one from Georgia Tech associated with this paper is actually quite informative and worth reading.

A paper worth reading is that by John Ioannidis, "Why Most Published Research Findings Are False", as it contains some nice basic statistical arguments for why people should be publishing null results. He also makes the provocative statement:
The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true.
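The core of his argument is a simple application of Bayes' rule. If R is the pre-study odds that a tested relationship is true, alpha the false-positive rate, and (1 - beta) the statistical power, then the probability that a claimed positive finding is actually true (the positive predictive value, PPV) is (1 - beta)R / ((1 - beta)R + alpha). Here is a minimal sketch in Python; the parameter values are illustrative assumptions of mine, not numbers from the paper.

# Positive predictive value (PPV) from Ioannidis's argument.
# The alpha and power defaults below are conventional illustrative
# assumptions (5% significance, 80% power), not taken from the paper.
def ppv(R, alpha=0.05, power=0.8):
    """Probability that a claimed positive finding is actually true."""
    return (power * R) / (power * R + alpha)

# In a "hot" field, many teams chase speculative hypotheses, so the
# pre-study odds R are small and the PPV drops accordingly:
for R in [1.0, 0.5, 0.1, 0.01]:
    print(f"R = {R:5.2f}  ->  PPV = {ppv(R):.2f}")
# R =  1.00  ->  PPV = 0.94
# R =  0.50  ->  PPV = 0.89
# R =  0.10  ->  PPV = 0.62
# R =  0.01  ->  PPV = 0.14

Once R is small, most statistically significant findings are false even at conventional significance levels and decent power, which is why the statement above is less provocative than it first sounds.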
I thank Sheri Kim and David Sholl for stimulating this post.

How serious do you think this problem is? What are the best ways to address the problem?

3 comments:

  1. What makes science true?
    A video by the Stanford Meta-Research Institute. They call it "Research on Research".

    Prof. John Ioannidis, whom you have mentioned, is one of the members of this institute.

    https://www.youtube.com/watch?v=NGFO0kdbZmk

    The video at 7:06 to 7:08 shows a handwritten slide, "Evidence of reproducibility problem", with survey data for scientific fields. It's depressing: Chemistry (87%?) leads the pack, and all fields are above 50%. They should explain how they arrived at these percentages.

  2. Belated response: I mostly concur with clodovendro. I would like to add that I think the bigger problem is the number of explanations being put forward that do not withstand the test of time.

    In principle this is a normal thing in science, but I perceive that the fraction of hyped explanations is increasing, potentially related to impact factor hunting by both authors and glossy journals.

    And if explanations put forward with (otherwise proper) data are increasingly unreliable, the public will increasingly distrust science and scientists. An explanation is, after all, "what the scientist says" as distinct from "what he measured".
    If the data are not trusted, that is bad; but if (even for good data) the scientist cannot be trusted to draw the proper conclusions, then all is lost.

    I think that is a bigger problem than unreliable data, as it is harder (i.e., it takes longer) to refute an explanation than to do experiments showing that the data don't reproduce.

