Wednesday, November 11, 2015

The robustness of any materials computation needs to be tested

The easy and wide availability of powerful software for computational materials modelling is both a blessing and a curse: classical molecular dynamics, quantum chemistry, density functional theory based methods, …
Although the software is easy to use, interpreting the results and establishing their robustness and reliability can be subtle and challenging.

Any computation requires the user to make many choices from an alphabet zoo.
For example, for classical molecular dynamics simulations of water there is a multitude of force fields (TIP3P, SPC/E, TIP4P-D, …).
For density functional theory (DFT) one has to choose the level of approximation (LDA, GGA, hybrid) and the specific density functional (B3LYP, PBE, …).
For plane wave approaches one must choose the energy cutoff.
For quantum chemistry of molecules one has to choose the basis set (STO-3G, cc-pVDZ, …) and the level of theory for treating electron correlations (HF, MP2, CCSD, CAS-SCF, …).
If one does combined quantum-classical simulations (e.g. for a chromophore in a solvent or protein) one has to choose the quantum-classical boundary (i.e. how much of the system to treat fully quantum mechanically)…
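
At least some of these choices can be tested mechanically. For example, the plane-wave cutoff can simply be increased until the quantity of interest stops changing. Below is a minimal Python sketch of such a convergence test; `total_energy` is a hypothetical placeholder for a call into whatever DFT package is actually being used.

```python
# Minimal sketch of a plane-wave cutoff convergence test.
# total_energy() is a hypothetical placeholder for a call into
# whatever DFT package is actually being used.

def total_energy(ecut_eV):
    """Hypothetical: run a DFT calculation at plane-wave cutoff
    ecut_eV (eV) and return the total energy (eV)."""
    raise NotImplementedError("wire this up to your DFT code")

def converged_cutoff(cutoffs_eV, tol_eV=1e-3):
    """Raise the cutoff until successive total energies agree
    to within tol_eV, and return that cutoff."""
    previous = None
    for ecut in sorted(cutoffs_eV):
        energy = total_energy(ecut)
        if previous is not None and abs(energy - previous) < tol_eV:
            return ecut
        previous = energy
    raise RuntimeError("not converged over the cutoffs tested")

# Example usage: scan cutoffs from 200 to 800 eV in steps of 50 eV.
# ecut = converged_cutoff(range(200, 801, 50))
```

The same loop applies to any convergence parameter: the k-point mesh, the basis-set size, or the size of the quantum region in a quantum-classical simulation.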

I encounter two extremes that are disconcerting and unhelpful.

1. Someone has a favourite, specific choice and does all of their calculations on a given system with that single choice.
They may justify this by giving one or more references that they claim have systematically established this is the best choice.
The problem is that they may be sweeping under the rug the fact that different choices can give significantly different results, not just quantitatively but also qualitatively.

A dramatic example occurs in any system with moderate-strength hydrogen bonds. As discussed here, the energy barrier in the proton-transfer potential can vary dramatically with the level of theory or the density functional used. In general, the more interesting the system (usually meaning it is more complex, or hosts new and interesting physics and/or chemistry, e.g. because of the presence of strong correlations), the more likely it is that “standard” choices will be problematic.

Users really should make some alternative choices from the alphabet zoo to test and justify the reliability of their results, along the lines sketched below.
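
As a minimal sketch of what such a check might look like in practice (picking up the proton-transfer example above), assume a hypothetical `barrier_height` wrapper around the actual electronic structure code; the functional names are merely illustrative alternatives.

```python
# Sketch of a sensitivity check: recompute one key observable
# (here, a proton-transfer barrier) with several density functionals.
# barrier_height() is a hypothetical wrapper around the actual code.

def barrier_height(functional):
    """Hypothetical: return the proton-transfer energy barrier (eV)
    computed with the named density functional."""
    raise NotImplementedError("wire this up to your DFT code")

def functional_spread(functionals):
    """Compute the observable with each functional and return the
    per-functional results together with the overall spread."""
    results = {xc: barrier_height(xc) for xc in functionals}
    spread = max(results.values()) - min(results.values())
    return results, spread

# Example usage (the functional names are illustrative alternatives):
# results, spread = functional_spread(["PBE", "B3LYP", "M06-2X"])
# If the spread is comparable to the barrier itself, no qualitative
# conclusion should rest on a single functional.
```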

2. Someone does an exhaustive study using a plethora of methods and choices.
This is probably motivated by the dream of Jacob's ladder.
Their paper has lots of tables, results, and comparisons, but offers little insight into the relative merits of the different possible choices. Furthermore, it is sometimes claimed that one choice is better than the rest just because it gets one, or maybe a few, experimental numbers "correct." To me such agreement may just be a Pauling point.
It is also disturbing that I encounter papers that compare methods but contain no comparison with experiment, even when experimental data are available. (I gave a specific example here.)

Another basic point that I (and others) have made before is that a paper needs to give enough technical detail that readers can reproduce the calculations and results.
Too often people tell me that they have not been able to reproduce the results in someone else's paper. This does not inspire confidence. It is also worrying that some people deliberately omit special "tricks of the trade" so they can stay ahead of their "competitors".

4 comments:

  1. I completely agree.
    An addition: as an experimentalist (who often works with theorists), I see quite a few experimentalists doing their own DFT calculations to supplement their work.
    I often find that those results are not up to the standards of the theory community; calculations often stop when (fortuitous) agreement is reached, without regard for the validity of the assumptions or any investigation into whether one has only reached a local minimum.

    With the exception of a few highly skilled "dualists" who do both theory and experiment, I think this is a problem.
    After all, I would not let you do an experiment on my systems just because you had read the manual...

    1. Thanks for the helpful comment.
      You confirm my worst fears about some of the experimental papers I see containing DFT calculations.

    2. With the operative word in your response being "some" as in "not all"...

      I don't want to soil my "nest" (the experimental community); I do want to stress that I know a few experimental people whom I do trust to do DFT - but they are a small, small minority.

      (And my observations are a statistical sampling by a single person...)

      So, as always, papers should be evaluated on their contents, not on whether an experimentalist dabbles in some theory or not. But from your attitude on this blog, I think that's what you mean by "some experimental papers" (without theorist co-authors).

    3. Thanks for the clarifying point.
      I certainly agree with you.
