Wading through AI hype about materials discovery

Discovering new materials with functional properties is hard, very hard. We need all the tools we can get, from serendipity to high-performance computing to chemical intuition.

At the end of last year, two back-to-back papers appeared in the luxury journal Nature.

Scaling deep learning for materials discovery

All the authors are at Google. They claim that they have discovered more than two million new materials with stable crystal structures using DFT-based methods and AI.

On Doug Natelson's blog there are several insightful comments on the paper about why one should be skeptical of AI/DFT-based "discovery".

Here are a few of the reasons my immediate response to this paper is one of skepticism.

It is published in Nature. Almost every "ground-breaking" paper I force myself to read turns out to be disappointing when I read the fine print.

It concerns a very "hot" topic that is full of hype in both the science and business communities.

It is a long way from discovering a stable crystal to finding that it has interesting and useful properties.

Calculating the correct relative stability of different crystal structures of complex materials can be incredibly difficult (see the sketch after this list for what "stability" means in these high-throughput studies).

DFT-based methods fail spectacularly for the low-energy properties of quantum materials, such as cuprate superconductors. But, they do get the atomic structure and stability correct, which is the focus of this paper.

There is a big gap between discovering a material that has desirable technological properties and developing one that meets the demanding criteria for commercialisation.
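On that stability point: in high-throughput screening, a structure is usually declared "stable" when its energy sits on, or within some small tolerance of, the convex hull of competing phases in its chemical system. Here is a minimal sketch of that bookkeeping, my own illustration rather than anything from the papers, assuming pymatgen is installed and using invented energies for a toy Li-O system.

```python
# Minimal sketch (not from either paper): "stability" as energy above the
# convex hull. Assumes pymatgen is installed; all energies are invented.
from pymatgen.core import Composition
from pymatgen.analysis.phase_diagram import PDEntry, PhaseDiagram

# Hypothetical total energies (eV per formula unit) for a toy Li-O system.
entries = [
    PDEntry(Composition("Li"), 0.0),
    PDEntry(Composition("O2"), 0.0),
    PDEntry(Composition("Li2O"), -6.0),
    PDEntry(Composition("Li2O2"), -6.5),
]
hull = PhaseDiagram(entries)

# A hypothetical "new" phase: how far does it sit above the hull?
candidate = PDEntry(Composition("LiO2"), -3.0)
print(f"E above hull: {hull.get_e_above_hull(candidate):.3f} eV/atom")
```

The bookkeeping is the easy part; the difficulty is that competing polymorphs of complex materials are often separated by only tens of meV per atom, which is comparable to the errors of standard density functionals.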

The second paper combines AI-based predictions, similar to those in the paper above, with robots performing materials synthesis and characterisation.

An autonomous laboratory for the accelerated synthesis of novel materials

[we] realized 41 novel compounds from a set of 58 targets including a variety of oxides and phosphates that were identified using large-scale ab initio phase-stability data from the Materials Project and Google DeepMind

These claims have already been undermined by a preprint from the chemistry departments at Princeton and UCL.

Challenges in high-throughput inorganic material prediction and autonomous synthesis

We discuss all 43 synthetic products and point out four common shortfalls in the analysis. These errors unfortunately lead to the conclusion that no new materials have been discovered in that work. We conclude that there are two important points of improvement that require future work from the community: 
(i) automated Rietveld analysis of powder x-ray diffraction data is not yet reliable. Future improvement of such, and the development of a reliable artificial intelligence-based tool for Rietveld fitting, would be very helpful, not only to autonomous materials discovery, but also the community in general.
(ii) We find that disorder in materials is often neglected in predictions. The predicted compounds investigated herein have all their elemental components located on distinct crystallographic positions, but in reality, elements can share crystallographic sites, resulting in higher symmetry space groups and - very often - known alloys or solid solutions. 
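Point (ii) is worth making concrete. When a prediction places two elements on distinct crystallographic sites, the resulting model has lower symmetry than a real material in which those elements share a site. The toy comparison below is my own sketch (not the preprint's analysis), assuming pymatgen: an artificially ordered rock-salt fluoride versus the corresponding site-mixed solid solution.

```python
# Toy illustration (my sketch, not the preprint's workflow), assuming pymatgen:
# ordering Li and Na on distinct sites lowers the symmetry relative to the
# site-mixed rock-salt solid solution.
from pymatgen.core import Lattice, Structure
from pymatgen.symmetry.analyzer import SpacegroupAnalyzer

lattice = Lattice.cubic(4.6)
cations = [[0, 0, 0], [0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0.5]]
anions = [[0.5, 0, 0], [0, 0.5, 0], [0, 0, 0.5], [0.5, 0.5, 0.5]]

# "Predicted" compound: Li and Na each on their own crystallographic sites.
ordered = Structure(lattice, ["Li", "Li", "Na", "Na"] + ["F"] * 4,
                    cations + anions)

# What is often actually made: Li and Na sharing every cation site 50:50.
mixed = Structure(lattice, [{"Li": 0.5, "Na": 0.5}] * 4 + ["F"] * 4,
                  cations + anions)

print(SpacegroupAnalyzer(ordered).get_space_group_symbol())  # lower symmetry
print(SpacegroupAnalyzer(mixed).get_space_group_symbol())    # Fm-3m (rock salt)
```

Fitting diffraction data with only the ordered model can then turn a known high-symmetry solid solution into an apparently "new" low-symmetry compound, which is exactly the failure mode the preprint describes.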

Life is messy. Chemistry is messy. DFT-based calculations are messy. AI is messy. 

Given that most discoveries of interesting materials involve serendipity or a lot of trial and error, it is worth trying to do what the authors of these papers are doing. However, the field will only advance in a meaningful way when it is not distracted and diluted by hype, and when authors, editors, and referees demand transparency about the limitations of their work.


Comments

  1. I quite agree with your points - except for the cuprate remark. Yes, DFT methods fail spectacularly *for the low energy electronic structure*. The atomic structure is generally properly described, though. And that is what the focus was of these papers (structure and stability).

  2. Thanks for the comment. I agree. I have modified the post.

  3. Double-blind peer review is necessary for science journals, since a large amount of money is involved in publishing. One cannot understand why, when social science journals have double-blind peer review, it is not the practice in science journals. For example, the Nature group has these intermediate editors (a new creation for this century) who send papers out for peer review. As soon as they see the country and institution a paper is coming from (single blind), there will be a bias in whether the article is dispatched for peer review.
    The whole publishing regime is in flux, with retractions and the rise of science-integrity specialists. Science is evidence based and social science is opinion based (unless a Pierre Bourdieu comes through with good social theories). Social science journals seem to follow a better path for integrity. Evidence-based science journals should have double-blind peer review.

  4. Thanks for the comment. I agree that double blind peer review would help, but only in a small way. There are many other contributing factors to the problems with luxury journals. Thanks for introducing me to Pierre Bourdieu and his work. Sounds like a great scientist.

  5. Pierre Bourdieu is one of the many critical-thinking "social scientists" France has produced.


