Thursday, February 4, 2010

Nature publishes 17-parameter fits to 20-plus data points

Today's issue of Nature has a paper which features the Figure above. According to the Supplementary Material, the curves shown involve 17 free parameters. The authors showed me a draft of the paper and I pointed this out to them; it was this experience that inspired the "curve fitting" posts on this blog.


  1. Quite aside from the humour of this, surely there are a large number of fits that provide an equally good match, given so many free parameters? I haven't read the article though :)

  2. There is a problem with the way that science (and particularly physics) is taught these days. Something is being left out. That something is the dark art of statistical inference.

    There are many reasons why statistical inference should be taught to physicists. You have just highlighted one of these.

    The problem that you are referring to has a nice geometric interpretation. In a multidimensional space, distance behaves differently: in a seventeen-dimensional space you don't have to go far to accumulate a large distance. I suggest that the fit we see here in the figure, measured by an information distance, might actually not be so good.

    This also provides a quantitative version of what Joel is saying above. The question of "how many other parameter choices would fit just as well" becomes "what is the circumference of the circle at constant radius from the data set".
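    The worry about too many parameters is easy to demonstrate numerically. Below is a minimal sketch (with synthetic data, standing in for the paper's data, which I don't have): a 17-parameter model, here a degree-16 Chebyshev series rather than the paper's actual model, driven through 20 noisy points leaves residuals well below the noise level, i.e. it fits the noise itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# 20 synthetic "data points": a smooth signal plus noise of known size.
x = np.linspace(-1.0, 1.0, 20)
sigma = 0.1
y = np.sin(3.0 * x) + sigma * rng.normal(size=x.size)

# A 17-parameter fit: a degree-16 Chebyshev series (17 coefficients).
fit = np.polynomial.Chebyshev.fit(x, y, deg=16)

rms_residual = np.sqrt(np.mean((fit(x) - y) ** 2))
print(f"noise level:  {sigma:.3f}")
print(f"fit residual: {rms_residual:.3f}")
# With 17 free parameters and only 20 points, the residual falls well
# below the noise level -- the model is absorbing the noise, not just
# the signal, so many nearby parameter sets would fit comparably well.
```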

    A second reason to learn statistical inference (and particularly, geometric formulations of this), is that QUANTUM MECHANICS IS A GENERALIZATION OF STATISTICAL INFERENCE! This is pretty well recognized now in quantum informatics, as far as I can tell, but has yet to sink into other fields (in particular, quantum chemistry - a thorn in my side).

    If we don't wish this behavior to continue, then there really only is one way out - start teaching probability theory and statistical inference to your students!

    For more information on what I am talking about, here are some good references that changed my life:

    Jaynes, E. T., "Information Theory and Statistical Mechanics", Phys. Rev. 106, 620 (1957)
    Jaynes, E. T., "Information Theory and Statistical Mechanics. II", Phys. Rev. 108, 171 (1957)
    Streater, R. F., "Classical and Quantum Probability", J. Math. Phys. 41, 3556 (2000)
    Levine, R. D., "Geometry in Classical Statistical Thermodynamics", J. Chem. Phys. 84, 910 (1986)
    Bengtsson, I. and Życzkowski, K., "Geometry of Quantum States: An Introduction to Quantum Entanglement", Cambridge University Press (2006)

    There are more, but it would take too much time.


  3. I can only count 16 data points in each of the curves.

    I will point out that all B.Sc. students at UQ are now required to take a first-year course in statistics. However, I did hear comments from some physics students that they learnt more statistics in the lab component of the first-semester physics course than they did in the entire second-semester statistics course. I can only presume they were exaggerating.


  4. Sorry, this is yet another attempt to identify the meaning of QM that has gone wrong. QM is a domain in which you can perform statistical inference. You can even take some of the formal features of QM and relocate them within your inferential methods (e.g. noncommutative probability theory), though this formal trick explains nothing. But QM per se is not a "generalization of statistical inference". It's like saying farming is a generalization of arithmetic.

  5. It sounds like we should talk more, Mitchell. I'm keen to learn more.

    I think that what I am saying is not unreasonable. Others have said it before quite directly (see, for example, the second Jaynes paper, the Streater paper, or the Bengtsson & Życzkowski book).

    It seems like, since the invention of QM, people have been complaining that it does not mandate an objective view of "reality" in the way that classical physics apparently does. They are right, as far as I can tell. Therefore, I would ask: what is quantum mechanics other than the mathematical apparatus? If there is nothing else required, then equating quantum mechanics and statistical inference doesn't seem very outlandish.

    It is very hard for me to see what there really is in quantum mechanics besides the machinery (maybe h-bar?). If I say instead "quantum mechanics is a generalization of classical probability theory" does that make you feel better?

    As far as I can see, there are only two things that separate quantum mechanics from classical probability: 1) non-commutativity of observables and 2) h-bar. Note that it is the value of h-bar that is important, not its existence, because its existence is already implied by the non-commutativity of x and p (it had to be something nonzero, after all...).

  6. I agree there seems to be a paucity of data to make such a fit. Furthermore, there are no error bars on what appears to be very sparse experimental data!

    To be fair, the fit uses a four-frequency damped-oscillator model that one might expect to apply to these data, and other criteria, such as anti-correlations, are expected and observed. However, I would have liked to have seen the predictive power of their model tested on a data set extending over longer times. After all, it is published in Nature.
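    For concreteness, here is one plausible way a four-frequency damped-oscillator model could arrive at 17 parameters: each component carries an amplitude, frequency, damping rate, and phase, plus one overall offset. This is a guess at the bookkeeping, not the paper's actual parameterization.

```python
import numpy as np

def damped_oscillators(t, params):
    """Sum of four damped cosines plus a constant offset.

    params = [c, A1, f1, g1, phi1, ..., A4, f4, g4, phi4]
    (17 numbers: one offset plus 4 parameters per frequency component).
    """
    c = params[0]
    components = np.reshape(params[1:], (4, 4))
    out = np.full_like(np.asarray(t, dtype=float), c)
    for A, f, g, phi in components:
        out += A * np.exp(-g * t) * np.cos(2 * np.pi * f * t + phi)
    return out

# One offset + 4 components x (amplitude, frequency, damping, phase) = 17.
print(1 + 4 * 4)
```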

  7. Dr. McKenzie, do you know of a simpler functional form which would fit smoothly with those data points? Based on the number of peaks and inflections, 11-degree polynomials might work, but those would blow up at the ends, making them physically unrealistic (bad for extrapolation).

    Of course, the sinusoidal functions they used aren't likely to extrapolate well either, but they'd extrapolate much better than 11-degree polynomials.

    I don't think oscillations of cross-peak amplitudes would follow a very consistent pattern (easily modeled by a simple functional form) in vivo. The main point was that oscillations occur, and I think they did a good job of conveying that =)
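    The polynomial blow-up mentioned above is easy to see numerically. A sketch with synthetic oscillatory data (a stand-in, not the actual data): fit a degree-11 polynomial over the sampled window, then evaluate it just outside that window.

```python
import numpy as np

# Synthetic oscillatory "data" on [0, 1] -- a stand-in for the real data.
t = np.linspace(0.0, 1.0, 16)
y = np.cos(2 * np.pi * 3 * t) * np.exp(-t)

coeffs = np.polyfit(t, y, deg=11)  # a 12-parameter polynomial fit
poly = np.poly1d(coeffs)

inside = np.max(np.abs(poly(t) - y))  # misfit inside the fitted window
outside = abs(poly(1.5))              # value just past the window
print(f"max misfit inside window: {inside:.2e}")
print(f"|poly(1.5)| outside:      {outside:.2e}")
# The polynomial tracks the data inside [0, 1] but grows rapidly
# outside it, which is what makes polynomial extrapolation unphysical.
```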

  8. Seth, first let me agree that there is a lot of insight to be had in examining such topics. The problem lies entirely in the claim that here at last QM itself has been explained.

    I think I see two problems in what you said. The first is that QM is a hypothesis whereas statistical inference is a method of hypothesis generation. You can perform further inferences having adopted the quantum hypothesis, but assuming QM is not itself a method of inference. And conversely, we do not have a "quantum-neutral" method of inference which leads naturally to QM.

    The second problem is the tendency to regard QM as having been explained by transposing the quantumness to some non-physical part of the theory (logic, probability). The rhetorical manoeuvre here is to say: now we understand QM; all we have to do is to accept that nature works according to "quantum logic" or "quantum probability". But in fact these are just different perspectives on the thing that needs explaining. Classical logic and classical probability have comprehensible interpretations (e.g. frequentism and subjectivism for probability). There is no such interpretation available for a probability amplitude or a negative quasiprobability, or their more recondite counterparts (such as Streater's "generalization of probability" with "no sample space"). This is indeed QM as mathematical apparatus; a conceptual explanation beyond "do this and it makes correct predictions" is lacking.

    Occasionally you find someone like Jaynes claiming to have re-derived QM from a non-quantum probability theory. Maybe you have such derivations in mind as well. Hitherto they have all either been wrong or they have required something like Bohm. In principle I agree with this approach (derive QM from a classically probabilistic theory); I cannot make sense of quantum logic or quantum probability as anything but formal games; I think QM has to be explained by new ideas on the physical side, e.g. ideas about the Planck-scale causal structure of space-time. "Quantum probability theory" might be a perspective that eventually helps us figure things out, but it is not itself the answer.

  9. OK. It sounds like I should have a chat to you about this. I have been reading a lot over the past year or two, and many ideas have settled in my brain like last week's laundry on the floor. It sounds like you might be able to help me organize the heap a bit better.

    Some of the non-interpretability of concepts that you allude to sounds new to me. It is possible (nay, likely) that I missed a subtlety in one or more of the readings I have done. I do understand that negative quasi-probability (this is a big issue with Wigner functions, right?) conflicts with the axioms of probability as I understand them. Levine seems to say that classical amplitude-like quantities come out of geometric statistics even before the "really quantum" part (i.e. the complex structure) is invoked, but I only found that paper this Christmas, and I find Levine usually takes a bit of chewing to really extract the flavor.

    After sleeping on it (fitfully), I think my real point here was that teaching reasonably heavy-duty statistical inference concepts makes sense in a physics curriculum, because there is sufficient overlap between those concepts and quantum theory that learning them together can really help. It would also have flow-on benefits for developing the skills to understand, quantitatively, how, why, and when model fitting and parameter estimation can be trusted. The latter is clearly an important skill for scientists, but also for anyone who watches the evening news.

    So, are you here in Brisbane? How would I get in contact to bug you more?

  10. I think if the model is physically reasonable, and the predictions are strengthened by alternative analyses, this kind of fitting is fine. I believe the supplementary information makes much stronger assertions.