Sunday, July 4, 2010

OPV cell efficiency is an emergent property

As discussed in a previous post, the efficiency of organic photovoltaic (OPV) cells appears to be largely determined by solid-state (and thus collective) effects such as aggregation, sample morphology, and disorder. A striking example of this is that the efficiency of a cell can be improved significantly by annealing the thin film (i.e., just taking the film and slowly heating and then cooling it). Hence, efficiency is an emergent property, and reductionist theoretical approaches that focus on the properties of isolated constituent molecules are of debatable value.

At the I2CAM workshop this past week the most disappointing presentation was that from the Harvard Clean Energy Project, led by Alán Aspuru-Guzik. This very ambitious project aims to use a worldwide grid of volunteer computers (including your own PC) to run quantum chemistry codes that calculate the properties of hundreds of thousands of molecules, in order to screen them as candidates for use in OPVs. However, it must be stressed that almost all of these calculations will be on small, single, isolated molecules in the gas phase.

It was claimed that one could screen for high charge-mobility materials by looking at the delocalisation of frontier orbitals and the reorganisation energy associated with ionisation. However, the particularly relevant quantity is the reorganisation energy of the environment of the molecule, which a gas-phase calculation cannot capture.
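
For concreteness, the quantity actually computed in such screens is typically the intramolecular (inner-sphere) reorganisation energy of Marcus theory, obtained from four single-point energies. A minimal sketch, with placeholder numbers rather than output from any real calculation:

```python
def reorganisation_energy(e_n_at_n, e_n_at_c, e_c_at_c, e_c_at_n):
    """Four-point scheme: lambda = [E+(R0) - E+(R+)] + [E0(R+) - E0(R0)].

    e_n_at_n : neutral molecule at its own optimised geometry R0
    e_n_at_c : neutral molecule at the cation geometry R+
    e_c_at_c : cation at its own optimised geometry R+
    e_c_at_n : cation at the neutral geometry R0
    (All four energies are assumed to come from some quantum chemistry code.)
    """
    return (e_c_at_n - e_c_at_c) + (e_n_at_c - e_n_at_n)

# Placeholder values in hartree; a good hole-transport material has a small lambda.
print(reorganisation_energy(-540.2101, -540.2055, -539.9803, -539.9751))
```

Note that this lambda contains only the intramolecular part; the reorganisation of the surrounding film, the quantity emphasised above, is absent from any gas-phase screen.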

The speaker claimed something like "we are our own harshest critics" and listed possible weaknesses of the project. These were mostly concerned with whether approaches based on density functional theory (DFT) are adequate for calculating the relevant properties of these molecules. (Many people would say they are not.) However, I contend that even if one could calculate exactly the properties of single molecules in the gas phase, one would be a long way from being able to determine which molecules will be the best candidates for OPVs.

I asked for a specific example of where such a computational approach has been successful in any area of science and technology. It was stated that drug companies do this all the time when screening. However, I contend that the physics and chemistry of that problem are much simpler and better defined. One knows a specific active site of a protein and wants to find a small molecule that binds to it and hinders the activity at that site. This is a ground-state and very local property. In contrast, photovoltaics involve excited states, dynamics, and collective effects, and the relevant large-scale structures are not well defined. Exactly how the properties of OPVs are related to the properties of the constituent molecules is so poorly understood that I am skeptical a brute-force computational approach is going to lead to much progress.

15 comments:

  1. I am not sure that the technique has been all that successful in drug research, though there may have been a few well-known successes. In this case, the enormous cost of pharmaceutical research means that you can afford to get an awful lot wrong and still save money on development. Judging by the enormous number of me-too drugs under development, I might suggest that such techniques are most useful only in a very small neighborhood of a reference structure that is already known to work.

  2. To add to the above, I guess what it means is that you can expect a lot of materials to be suggested that are similar to what is already used (i.e. P3HT:PCBM). The big problem that I heard between the lines at the conference was that nobody really knows why these are by far the best materials known even right now...

  3. Dear Ross,

    Let me try to clarify some of the statements you made in your post:

    The goal of the Clean Energy Project (CEP) is to identify molecular motifs, oligomer sequences, as well as structure-property relations which look promising for OPVs. Our work as quantum chemists is focused on this one aspect of the problem, while we defer other issues (e.g., nano-/microstructure, architecture) to the respective experts.

    You claim that there is no merit in the quantum chemical in-vacuum characterization of potential OPV molecules/oligomers, as the performance is determined by condensed-matter physics effects. While I appreciate your criticism and share your concerns to a certain degree, I feel that your judgment is neither measured nor balanced.

    Our perspective is not a reductionist one; rather, we see our investigation as a sensible STARTING POINT for the development of new materials: A suitable molecular electronic structure is a NECESSARY requirement for a good material - we certainly do not claim that it is a sufficient one! But you would not try to build an OPV based on just any organic molecule (say glucose or hexane), because from the molecular properties alone you already know that this is futile. At the same time, the space of molecular structures currently explored for OPVs is extremely limited, and we feel that there is a lot of room for improvement. Much remains to be learned, and this is what the CEP is systematically aimed at.

    Promising candidates can then be combined and tweaked with all the necessary tools that address the condensed phase aspects of a material to unlock its full potential (e.g., by adding side chains to achieve a desired packing).

    I am actually confused by the fact that you praise Seth’s presentation (I also liked it!) but do not see the contribution of the CEP - both projects are very much in the same spirit. As it is, a large part of the computational work done in the field of OPVs uses approaches similar to those found in the CEP, albeit less systematically, on a much smaller scale, and often at a lower level of theory. Prof. Reimers’ hands-on computational chemistry workshop at I2CAM is a nice example.

    At the end of the day, success or failure will show whether this project was worthwhile or not. For the time being, I am actually content with the fact that we receive great interest and positive feedback from experimentalists, which is more than many theoreticians can claim.

    Best wishes

    Johannes

    P.S. I think the QSAR community would disagree with your assessment that drug discovery is a rather trivial enterprise.

  4. Although the approaches Johannes and I described are aimed at the same goal, they are very different. One of the underlying points that I tried to make was that structure-property relationships that span analogous series of molecules correspond to distinct continuous solutions of an ultimately non-linear self-consistent field problem. What led me to make the connection between the resonance theoretical models of Platt and Brooker and the SA-CASSCF solution that I described was the identification of an analogy between the structure of the solution and the concepts represented in the parameters of the theory. I would never have identified the correspondence if I had taken a blind approach.

    In the case of the CEP, the sheer number of compounds seems to make it unlikely that such patterns could be identified, because it is not feasible to have a human "look at" a million calculations. The structure-property relationships, if they are to be found, must be of the most obvious sort, so much so as to leap directly out of the primary observables. It is likely that there are patterns that will be missed, because it is very, very hard to train a data-mining program to do the sort of pattern recognition that humans find trivial. Indeed, if you know what to train the data miner to look for, then why is a combinatorial approach needed in the first place?

    Some of the methods that were on the list to be implemented make little sense without adequate sampling of the possible solution space. Indeed, this is the whole POINT of Gill's Maximum Overlap Method (MOM). I cannot see how this method could be used productively without a very sophisticated technology to identify and categorize the distinct solutions. And yet, it was given a high profile (indeed, specifically mentioned) on the list of methods that will be implemented in the CEP approach...

    Then there's SA-CASSCF. The issue of multiple solutions is really, really in your face when you use this method - and this is actually GOOD. The power of SA-CASSCF lies in this feature. It is a great tool for identifying model spaces for multi-electronic-state problems. Ignoring this would be an abuse of the method. The parameters characterizing a single SA-CASSCF calculation are combinatorial themselves. Just think of all the possible ways of choosing active orbitals for all possible active space dimensions and all possible state averaging schemes... I have an extremely hard time believing that this method could be productively used in the described context. And yet, it too was on the list...

    I think I (and others) may find it useful to have a publicly accessible clearing house for TD-DFT calculations on several million randomly chosen organic molecules. However, probably the best use of this database will ultimately be to understand the pros and cons of TD-DFT, not to understand how to make better OPVs.

  5. BTW, "CEP"="Clean Energy Project"

  6. Dear Seth,

    I referred to your talk because it was also concerned with molecular calculations, which Ross considers useless in the context of OPVs.

    Your post was mostly concerned with HOW we do things, primarily with the employed methods and the large-scale approach.

    Your statement that structure-property relations are based on analogous wavefunctions is quite evident.
    I agree that DFT is not an ideal method for OPVs (in particular for excited states) but:
    1) It is the best first-principles method available for large-scale screening (even today many studies use much simpler methods).
    2) We treat promising candidates at a higher level of theory, including MR-PPP, CAS, and LCC. Again, these are not planned for all systems but only for the ones of highest interest (100s rather than millions).
    3) Your preferred CASSCF has its own distinct weaknesses: besides completely ignoring dynamic correlation, your active spaces can only cover a severely limited/incomplete part of the relevant valence space. By state-averaging you furthermore span qualitatively different states in the same one- and n-particle basis, which (in combination with the lacking dynamic correlation) can lead to a very unbalanced description of your states. In addition, it is not a scalable method, nor is it usually size-extensive.

    Again, this is a fair and important discussion, and we have to stay aware of its implications. But the bottom line is that there is no silver bullet. We all try to use the best methods available/affordable and validate against potential alternatives – that’s exactly what we do in the CEP.

    I disagree with your perspective that the CEP produces too much data to be useful. You can freely concentrate on any subset you may be interested in, and have the option to come back to the complete dataset to validate your findings.
    You are right that we don’t primarily analyze systems one-by-one, but rather use a data-mining/statistics package to make sense of all the data. We use a dual approach to correlate the data: on the one hand, we let the package search blindly for correlations; on the other, we, our collaborators, and eventually the public can specify where to look for them.
    I don’t see a reason why we should be restricted to only finding obvious patterns, but in any case, there is a difference between knowing what to look for and knowing a solidly verified, quantified, and refined answer.
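
    As a rough illustration of the blind pass (a toy sketch with invented descriptor names and random data - not our actual package or results):

    ```python
    import numpy as np

    # Hypothetical screening table: rows are candidate molecules, columns are
    # computed descriptors. Names and data are invented for illustration only.
    names = ["homo_lumo_gap", "oscillator_strength", "reorg_energy", "dipole"]
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 4))                        # descriptor matrix
    y = 0.8 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.3, size=1000)

    # "Blind" pass: rank every descriptor by its correlation with the target
    # property y; a targeted pass would instead test a user-specified hypothesis.
    for name, col in zip(names, X.T):
        r = np.corrcoef(col, y)[0, 1]
        print(f"{name:20s} r = {r:+.2f}")
    ```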

    I’d like to make one more comment: Our search is neither random nor blind! Our molecular library is combinatorially (=systematically) generated from molecular fragments deemed most promising by our experimental collaborators. Beyond this we use our gathered insights to design further candidates.

    Again, finding better molecular motifs is only one aspect of the development of better plastic solar cells. But we believe that we should optimize this starting point to really make all other efforts count. Our goal is to suggest such alternatives beyond the handful of systems we have been tinkering with for the last 10+ years.

    Best wishes

    Johannes

  7. Johannes,

    Actually, the excitation energies were calculated by multi-state MRPT2 on the SA-CASSCF reference, so "dynamical correlation" is covered (to the extent that static and dynamic correlation can be separated, which they cannot). The mixing matrix derived from diagonalizing the effective MS-MRPT2 Hamiltonian is basically the identity, and no level shift is required, indicating that the active space is a good reference space. This is NOT what happens when you increase the size of the active space, suggesting that the "relevant" part of the valence space is described.

    Of course, I'm a bit confused as to what is "relevant" anyway. We already discussed to some extent why large active spaces are bad. They arbitrarily extract a sub-ensemble from a family of roughly equivalent ensembles. I believe the amount of arbitrary bias induced by selecting a too-large active space is probably measurable by the von Neumann entropy of the one-body density matrix in some way, but this idea needs to be developed further.
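
    To make this concrete, here is a toy version of one such measure (my own guess at a convention, assuming the natural occupation numbers of the spin-summed one-body density matrix are available):

    ```python
    import numpy as np

    def orbital_entropy(natural_occupations):
        # Treat each spatial orbital as a two-level subsystem with occupation
        # probability p_i = n_i / 2 (natural occupations 0 <= n_i <= 2) and sum
        # the binary entropies. Zero for a single determinant (all n_i = 0 or 2);
        # grows as the active space mixes in more configurations.
        p = np.clip(np.asarray(natural_occupations, dtype=float) / 2.0,
                    1e-12, 1.0 - 1e-12)
        return float(np.sum(-p * np.log(p) - (1.0 - p) * np.log(1.0 - p)))

    print(orbital_entropy([2.0, 2.0, 1.9, 0.1, 0.0]))  # nearly single-reference
    ```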

    It is interesting to contemplate that one of the core assumptions of quantum chemistry - namely, that an exact solution of the Born-Oppenheimer electronic structure problem is desirable and will generate insight into chemical observables - is wrong. Although it may be a good approximation for well-behaved ground-state systems, it must fail when the relevant fluctuations in the parameters of the B.O. Hamiltonian are enough to mix the states. If there are any off-diagonal elements between the electronic state and the state of the environment (nuclei, solvent, whatever), then the Schmidt decomposition implies that the reduced electronic state will have nonzero entropy (I am sure you know this). In this case, there MUST be a point where the pursuit of the exact solution to the B.O. problem ceases to provide useful information. It is the exact solution for the wrong Hamiltonian, so at some point the correlations will reflect what it gets wrong more than what it gets right. In this light, I am not sure that I would agree with your conception of what the "relevant" space would be... It is quite possible that a pure state in a reduced set of observables may be a closer representation of the ENSEMBLE measured in any given set of experiments.
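
    For reference, the standard Schmidt-decomposition argument, written out:

    ```latex
    % Any pure state of electrons + environment admits a Schmidt decomposition
    |\Psi\rangle = \sum_i \sqrt{\lambda_i}\, |e_i\rangle \otimes |n_i\rangle ,
    \qquad \lambda_i \ge 0 , \quad \sum_i \lambda_i = 1 .
    % Tracing out the environment leaves a mixed electronic state
    \rho_{\mathrm{el}} = \mathrm{Tr}_{\mathrm{env}}\, |\Psi\rangle\langle\Psi|
                       = \sum_i \lambda_i\, |e_i\rangle\langle e_i| ,
    \qquad S = -\sum_i \lambda_i \ln \lambda_i .
    ```

    S is strictly positive whenever more than one \lambda_i is nonzero, i.e., whenever the electronic state is genuinely entangled with its environment.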

    I didn't mean to insinuate that the project will generate more information than will be useful; I was merely placing a wager that its value will lie more in generating insight into the behavior of the theoretical models than into the chemistry of OPV materials. Of course, I do not always win my wagers. Maybe in a decade or so we can settle up and one of us can buy the other a beer.

    What I would actually really LIKE to see is something like what you described, but dedicated to searching for all the solutions for a given set of representative compounds - all the UHF solutions, let's say. If this were the goal, then methods like MOM would actually be really useful.

    Of course, I realize that selling such a project to granting agencies might be more difficult.

    -Seth

  8. I guess I would add one more thing to this tirade, and that is that structure-property relationships (as they are used in chemistry) are by nature extracted from the synthesis of many different experiments. This is obvious from the commonly accepted definition of a "reaction series" as a series of reactions for which the relationship is meaningful.* Therefore, since the relationship is expected to hold for a family of similar but not identical Hamiltonians, it is nonsense to insist that the relationship should emerge from the exact solution of a Schrödinger equation.

    *To quote R.D. Levine: "This interpretation is saved from being a circular one (“a series of similar reactions is one for which the Brønsted relation is a useful measure”, “the Brønsted slope is a useful measure for a series of similar reactions”) by the empirical observation that there are indeed many known examples of the utility of the concept." (J. Phys. Chem. 1979, 83(1), pp. 159-170.)
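
    (For readers outside physical organic chemistry: the Brønsted relation invoked above is the canonical linear free-energy relationship,

    ```latex
    \log k = \alpha \log K_a + C ,
    ```

    relating rate constants k across a series of acids to their dissociation constants K_a; the slope \alpha is meaningful only when the series is "similar" in exactly the sense discussed above.)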

  9. It is very easy to get sucked into acrimonious debates in online forums.

    It must be stated openly that I am quite hostile to a movement that I perceive in theoretical chemistry that would seem to relegate the field to the position of a test bed for computational technologies. I agree with a position that Ross has voiced on this blog, that computation must not be an end in itself. I think that this has led to a rot in the field, and a lack of new and interesting conceptual advances. Where there is a lack of new concepts, there is an abundance of detailed algorithmic development and an undeserved longevity of stale ideas whose inertia is maintained by a large and growing investment in existing computational and algorithmic infrastructure. I believe that theoretical chemistry has put itself in the position of the proverbial hammer-wielder who sees only nails.

    It is possible that the Harvard Clean Energy Project never had a chance with me because of this. I perceived it as less about chemistry than about contriving a problem to which the World Community Grid can be applied.

    However, science needs funding, and that money flows to what is fashionable. It may not be me that laughs last on this one.

  10. Dear Seth,

    Yes, CAS-PT is probably the best available method (except for DMRG-CT ;)), but we cannot use it in place of DFT in the CEP due to its enormous cost, lack of scalability (exacerbated by the need for much larger basis sets), and the required manual selection of the active space. That’s why we only use it for selected cases.

    I actually think that static and dynamic correlation can be separated reasonably well in OPVs (but since this is a conceptual distinction, one can have different views), with the conjugated pi-backbone responsible for the former. This is what I would consider the relevant space for CAS - unfortunately too big for conventional algorithms.

    Large active spaces are not generally bad! This is clear from a variational perspective – one ultimately approaches the FCI solution and the exact answer. You are right that they CAN lead to technical problems in the PT part, and – through bad luck – gaps between states can get worse if the description of different states becomes more unbalanced (see e.g., http://jcp.aip.org/jcpsa6/v132/i2/p024105 and other papers by the Chan & Yanai groups). The selection of a limited active space is always tricky, and there is no good solution.

    The BO approximation is obviously not “wrong” (otherwise it wouldn’t be used ubiquitously)! There are merely exceptions (no doubt interesting cases of vibronic coupling, conical intersections, Peierls distortions/metal-insulator transitions, etc.), where the appropriate corrections have to be taken into consideration. To what extent these special cases dominate the systems studied in the CEP is up for debate.

    You are right that method validation is one important aspect of the CEP and this is where we can probably make contributions early on. In fact, we are working on a paper that introduces an OPV test set and discusses the performance of different methods for this problem.

    I think your 2nd post is based on a misunderstanding: I said that structure-property relations are a result of ANALOGOUS wavefunctions (like for a group in the periodic table), not identical ones (whatever that would mean). So we completely agree.

    You lost me on your last post. The CEP certainly has a methodological side to it, i.e., to open up the field of large-scale, grid-based quantum chemistry. And we chose to apply this approach to the open problem of OPVs for which we believe we can actually contribute something – by making progress in their conceptual understanding. Just like you chose to apply SA-CASPT methodology to certain structure-property relations.
    On a more abstract note: As someone who originally comes more from the development side, I don’t understand your criticism of method development. You should not forget that all the methods you use originate from people who spent their time and effort providing them. Theoretical chemistry has many facets, and they all have their rightful place.

    Finally, I don’t perceive this exchange as acrimonious and I am sorry that you see it that way. I merely tried to respond to Ross’ challenge that our work is useless by clarifying the ideas behind our project. As I said before – I like your project and think you are doing good work. I am always open for criticism – so no hard feelings from my side.

    Best wishes

    Johannes

  11. I'm not sure that I agree with some of your statements about the utility of the variational principle. The variational principle, when applied to a known Hamiltonian, leads to the best estimate of the ground state energy that can be obtained in the model Hilbert space.

    It is less clear that it will always lead to a better estimate of the state itself. This may depend on what you mean by "better" or "accurate".

    A good paper for a counter-argument is [Henri-Rousseau, J. Chem. Educ. 1988, 65(1), p. 9]. This paper shows, for a simple system - the H2+ ion - that variational wavefunctions may lower the energy at the cost of increasing the Hamiltonian dispersion. For an exact eigenstate, the dispersion should be zero. The dispersion is a useful independent measure of the accuracy of the state that does not depend on the energy expectation value.
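
    A toy illustration of the dispersion diagnostic (generic linear algebra, not the paper's H2+ example):

    ```python
    import numpy as np

    def energy_and_dispersion(H, psi):
        # <H> and sigma^2 = <H^2> - <H>^2 for a normalised trial state;
        # sigma^2 = 0 if and only if psi is an exact eigenstate of H.
        psi = psi / np.linalg.norm(psi)
        e = (psi.conj() @ H @ psi).real
        e2 = (psi.conj() @ H @ (H @ psi)).real
        return e, e2 - e**2

    # A 2x2 toy Hamiltonian: the dispersion tracks eigenstate quality
    # independently of how low the energy expectation value happens to be.
    H = np.array([[0.0, 0.3], [0.3, 1.0]])
    print(energy_and_dispersion(H, np.array([1.0, 0.0])))     # crude trial state
    print(energy_and_dispersion(H, np.array([0.96, -0.28])))  # better, sigma^2 still > 0
    ```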

    Of course, the self-consistent field is not the same as the variational principle, because it is not linear. The Hamiltonian and the state both emerge from the non-linear optimization process. In this case, it is even less clear that the variational energy is a good measure of accuracy.

    Some insight into this issue may be obtained by considering the self-consistent field as a problem in quantum statistical estimation. For example, see [Tishby & Levine, Chem. Phys. Lett. 1984, 104(1), pp. 4-8] and also [Balian & Veneroni, Ann. Phys. 1988, 187, pp. 29-78]. I think that there is much to be gained by exploring the link between commonly used quantum chemical variational problems and the statistical estimation implied in these and other works. This link, as far as I can tell, has been woefully under-explored.

    Also, even though it is true that expanding the active space will eventually converge to the full CI solution, it is not clear to me that it will do so monotonically. I am not aware of any result that proves this is so. By this line of argument, it may well be that a low-rank solution is a better approximation than a higher-rank one.

  12. Keep in mind, when reading the above, that I have spent most of my career thinking about multi-state problems, for which the variational principle is usually applied only to the average energy on a subspace. Some of my thoughts on the issue are definitely coloured by this. For these problems, the statistical approach may have more value than for regular ground state estimates, since I think it is more general.

  13. The problem that I have with defining "dynamic correlation" as e.g. sigma-pi correlation is that this attempts to put a "cause" where no "cause" can be assigned. Jaynes points out* that there is a principle of statistical complementarity in quantum mechanics that limits your ability to determine what the origin of mixing in a subspace actually IS. It makes much more sense to me to label correlation effects by what they DO rather than by what CAUSES them. For example, I usually think of "static correlation" as "correlation that cannot be described by an adiabatic connection", while dynamic correlation is "correlation that can be described by an adiabatic connection". This makes no reference to the ultimate cause, but clearly distinguishes effects - in this case, whether or not there is significant remixing in some reference Hilbert space.

    *Jaynes, Phys. Rev. 1957, 108(2), pp. 171-190.
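
    In symbols, one way to formalise this distinction (my own rough rendering, not a standard definition):

    ```latex
    \hat{H}(\lambda) = \hat{H}_0 + \lambda \left( \hat{H} - \hat{H}_0 \right) ,
    \qquad 0 \le \lambda \le 1 .
    ```

    Correlation recovered by following an eigenstate of \hat{H}_0 continuously along \lambda \to 1, without abrupt remixing of the reference space, is "dynamic"; correlation that forces such a remixing somewhere along the path is "static".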
