This is the provocative title of a very nice paper by Laughlin and Pines in PNAS back in 2000. They point out that, in principle, Schrödinger's equation from quantum mechanics together with Coulomb's law of electrostatics constitutes ‘The Theory of Everything’, since these equations determine all of chemistry and all the properties of all the matter we encounter every day. Yet, due to limited computational resources, even the most powerful supercomputer can only solve these equations and make predictions for systems containing at most about ten particles. A computer that could treat Avogadro's number (i.e., about 10^23) of particles would require more atoms than there are in the universe.
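To see why, consider the storage problem alone. For simplicity, take N spin-1/2 degrees of freedom rather than electrons moving in continuous space: the many-body wavefunction then has 2^N complex amplitudes. The little Python estimate below (the 16 bytes per amplitude and the particle numbers are illustrative assumptions, not figures from the paper) shows that by a few hundred particles you already need more bytes than there are atoms in the observable universe (roughly 10^80), let alone for 10^23 particles.

```python
# Back-of-the-envelope estimate: memory needed to store the wavefunction of
# N spin-1/2 particles.  The 16 bytes per complex amplitude and the chosen
# values of N are illustrative assumptions, not numbers from the paper.
ATOMS_IN_UNIVERSE = 1e80   # rough order-of-magnitude estimate

def wavefunction_bytes(n_spins, bytes_per_amplitude=16):
    """Memory (in bytes) for the 2**n_spins complex amplitudes."""
    return bytes_per_amplitude * 2.0 ** n_spins

for n in (10, 50, 100, 300):
    b = wavefunction_bytes(n)
    comparison = "more" if b > ATOMS_IN_UNIVERSE else "fewer"
    print(f"N = {n:3d}: ~{b:.2e} bytes "
          f"({comparison} bytes than atoms in the observable universe)")
```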
However, even if we could build such a computer, it would not help. First, doing the calculations would be just like doing an experiment: a ‘black box’ that would give little insight into the origin of the phenomena. Moreover, such calculations on finite systems cannot predict phenomena such as broken symmetry and the exact quantisation of quantities such as the quantum Hall resistance, the magnetic flux associated with a vortex in a type II superconductor, or the circulation associated with a vortex in superfluid helium. If we ‘know the answer’, i.e., expect broken symmetry, then we can ‘rig’ the equations so we get the answer out. But this is a posteriori, not a priori, reasoning.
Phenomena such as the quantisation of the magnetic flux of vortices in type II superconductors present a problem for methodological reductionism. Even though Ginzburg-Landau theory is only approximate and does not require a detailed knowledge of the underlying quantum dynamics of the constituent atoms and electrons in a superconducting metal, it predicts the exact value of the flux quantum, h/2e. This is because of the principle of broken symmetry.
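For the curious, here is a minimal sketch of the standard textbook argument (in SI units, with the usual Ginzburg-Landau notation; this is my shorthand, not the form in which the Laughlin-Pines paper presents it): single-valuedness of the order parameter's phase, together with the vanishing of the supercurrent deep inside the superconductor, fixes the flux threading a vortex to an integer multiple of h/2e, independent of any microscopic details.

```latex
% The Ginzburg-Landau order parameter is a complex field whose phase theta
% must be single-valued modulo 2*pi around any closed contour.
\begin{align*}
\Psi(\mathbf{r}) &= |\Psi(\mathbf{r})|\, e^{i\theta(\mathbf{r})}, \\
\mathbf{J}_s &= \frac{e^*}{m^*}\,|\Psi|^2
   \left(\hbar \nabla\theta - e^* \mathbf{A}\right),
   \qquad e^* = 2e \ \text{(Cooper pairs)}.
\end{align*}
% Deep inside the superconductor J_s = 0, so on a contour encircling the vortex:
\begin{equation*}
\oint \mathbf{A}\cdot d\boldsymbol{\ell}
  = \frac{\hbar}{2e}\oint \nabla\theta\cdot d\boldsymbol{\ell}
  = \frac{\hbar}{2e}\, 2\pi n
\quad\Longrightarrow\quad
\Phi = n\,\frac{h}{2e} \equiv n\,\Phi_0 .
\end{equation*}
```

Note that the microscopic details (the lattice, the band structure, the pairing mechanism) never enter; only the broken U(1) symmetry and the pair charge 2e do.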
Hi Ross,
This is a great blog, full of informative and thought-provoking posts. Sorry to be commenting on a slightly old post, but you update this much more frequently than I have time to visit!
I have a couple of comments on this post. Firstly, while I agree that computer simulations might not, of themselves, give much insight into the origin of a particular phenomenon, I think they can still be extremely useful. It seems to me that they ought to be of some use in uncovering phenomena such as broken symmetry - surely if a numerical simulation is ‘just like doing an experiment’, it can be no worse than an experiment in uncovering some observable phenomenon. (This leaves aside, for now, the question of how long the simulation takes to run, but see my comments further down on this.) In particular, experimentalists can, in a sense, observe a broken symmetry in an experiment on a finite-sized system. Arguably, all such experiments have been performed on finite systems, and yet the existence of broken-symmetry phases (or at least the validity of the concept as a useful limiting case) is not in dispute.
Furthermore, there are circumstances where a computer simulation might be more useful than a particular experiment in determining an explanation for a particular phenomenon. This is because modern digital computers hardly ever make mistakes - noise acting on the computer changes its state only very, very rarely. So, if a computer simulation gives an answer that disagrees with what is observed in experiment, this can only be for one of two reasons: either the algorithm we used has some inherent numerical instabilities, or the assumptions that we put into our physical model were incorrect. The former can often be ruled out by a kind of mathematical reasoning that is quite independent of the phenomenon under consideration. This means that computers can be very useful in testing our intuitions about what should go into the ‘essential physics’ of the phenomenon we are trying to understand.
I believe this is somewhat different from actual experiments - in a given experiment one can never be sure whether one is observing the ‘true’ effect, or some artifact of the particular setup (which could be due to the way noise is affecting the system). For this reason, it is useful to have multiple probes of a particular phenomenon.
To give an example, it seems to me (as a casual outside observer) that there has been significant debate among condensed matter physicists as to whether the essential physics of high temperature superconductors is contained within the 2D Hubbard model, or whether additional interactions are required. If we could simulate the Hubbard model on a computer, such questions could be settled immediately, and theorists could focus their efforts on explaining *how* superconductivity arises from such a model.
One could call this a (weak) form of reductionism, which does not ‘explain’ an emergent phenomenon, but does use a ‘lower stratum’ to glean some useful information about it.
The obvious issue with the above is that modern digital computers are not really up to the task of exactly simulating (for example) 2D Hubbard models with sufficient numbers of particles to see emergent phenomena like superconductivity. However, we know that, in principle, quantum computers can be made to share with classical digital computers the property of being robust to the noise acting on them (provided it is sufficiently weak, of course). Thus, if we had a QC, we could run simulations of model systems such as Hubbard models and be reasonably confident that the output of the simulation is not some artifact of the noise acting on the computer. To my mind, this is one of the most important motivations for developing quantum computers.
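To put a rough number on ‘not really up to the task’: the sketch below is a toy exact diagonalization of a tiny 2x2 Hubbard cluster in Python (the cluster size and the values of t and U are arbitrary illustrative choices, nothing to do with the cuprates). Even this toy problem lives in a 4^4 = 256-dimensional Hilbert space, and the dimension grows as 4^N with the number of sites N, which is why exact methods run out of steam at a few tens of sites - far short of anything that could settle the debate above.

```python
# Toy exact diagonalization of a small Hubbard cluster (a sketch only; the
# lattice size and the values of t and U are arbitrary illustrative choices).
import numpy as np
from itertools import product

Lx, Ly = 2, 2              # 2x2 open cluster: 4 sites, 8 spin-orbitals, 4**4 = 256 Fock states
N_SITES = Lx * Ly
t, U = 1.0, 4.0            # hopping amplitude and on-site repulsion

def bonds(Lx, Ly):
    """Nearest-neighbour bonds of an Lx-by-Ly cluster with open boundaries."""
    b = []
    for x, y in product(range(Lx), range(Ly)):
        i = x + Lx * y
        if x + 1 < Lx:
            b.append((i, i + 1))
        if y + 1 < Ly:
            b.append((i, i + Lx))
    return b

def parity(state, orb):
    """Fermionic sign: (-1)**(number of occupied orbitals with index below orb)."""
    return -1 if bin(state & ((1 << orb) - 1)).count("1") % 2 else 1

dim = 1 << (2 * N_SITES)   # up orbitals are bits 0..N-1, down orbitals bits N..2N-1
H = np.zeros((dim, dim))

for s in range(dim):
    # on-site repulsion: U * n_up * n_down on each site
    for i in range(N_SITES):
        if (s >> i) & 1 and (s >> (i + N_SITES)) & 1:
            H[s, s] += U
    # hopping: -t * (c^dag_i c_j + c^dag_j c_i) for each bond and each spin
    for i, j in bonds(Lx, Ly):
        for spin_offset in (0, N_SITES):
            a, b = i + spin_offset, j + spin_offset
            for src, dst in ((a, b), (b, a)):          # both hopping directions
                if (s >> src) & 1 and not (s >> dst) & 1:
                    s_mid = s & ~(1 << src)            # apply c_src (with its sign)
                    sign = parity(s, src) * parity(s_mid, dst)
                    H[s_mid | (1 << dst), s] += -t * sign   # then apply c^dag_dst

E0 = np.linalg.eigvalsh(H)[0]
print(f"2x2 Hubbard cluster (t={t}, U={U}): lowest energy over all fillings = {E0:.4f}")
```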
Another (non-argumentative) comment: I would be really interested to see in detail how the principle of broken symmetry leads to exact flux quantisation. Is there any chance you could write an accessible blog exposition of this?
Thanks,
Sean Barrett