Tuesday, May 23, 2017

How should undergraduate quantum theory be taught?

Some of my colleagues and I have started an interesting discussion about how to teach quantum theory to undergraduates. We have courses in the second, third, and fourth years. The three courses have evolved independently, depending on who teaches each. Some material gets repeated and other "important" topics get left out. One concern is that students seem not to "learn" what is in the curriculum for the previous year. The goal is a cohesive curriculum. This might be facilitated by using the same text for both the second- and third-year courses.
This has stimulated me to raise some questions and give my tentative answers. I hope the post will stimulate lots of comments.

The problem that students don't seem to learn what they should have in prerequisite courses is true not just for quantum. I encounter second-year students who can't do calculus and fourth-year (honours) students who can't sketch a graph of a function or put numbers into a formula and get the correct answer with meaningful units. As I have argued before, basic skills are more important than detailed technical knowledge of specific subjects. Such skills include relating theory to experiment and making order of magnitude estimates.

Yet, given the following should we be surprised?
At UQ typical lecture attendance probably runs at 30-50 per cent for most courses. About five per cent watch the video. [University policy is that all lectures are automatically recorded]. The rest are going to watch it next week… Only about 25 per cent of the total enrolment in my second-year class are engaged enough to be using clickers in lectures. Exams are arguably relatively easy, similar to previous years, usually involve choosing questions/topics, and a mark of only 40-50 per cent is required to pass the course.
I do not think curriculum reform is going to solve this problem.

Having the same textbook for second and third year does have advantages. This is what we do for PHYS2020 Thermodynamics and PHYS3030 Statistical Mechanics, where the book is Introduction to Thermal Physics, by Schroeder. But some second-years do struggle with it... which is not necessarily a bad thing.

Another question is what approach you take to quantum: Schrodinger or Heisenberg, i.e. wave or matrix mechanics? The mathematics of the former is differential equations; that of the latter is linear algebra. Obviously, at some point you teach both, but which do you start with? It is interesting that the Feynman lectures really start with and develop the matrix approach, beating the two-level system to death...
At what point do you solve the harmonic oscillator with creation and annihilation operators?
When do you introduce Dirac notation?
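As an aside, the matrix-mechanics treatment of the harmonic oscillator is easy to demonstrate numerically. Here is a minimal sketch of my own (an illustration, not from any of the texts mentioned): truncate the ladder operators to N levels and diagonalise H = a†a + 1/2, in units of ħω.

```python
import numpy as np

# Truncated ladder operators for the harmonic oscillator (N levels).
N = 20
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation: a|n> = sqrt(n)|n-1>
adag = a.T                                   # creation operator

# H = a†a + 1/2 in units of ħω
H = adag @ a + 0.5 * np.eye(N)
energies = np.sort(np.linalg.eigvalsh(H))
print(energies[:5])   # → [0.5 1.5 2.5 3.5 4.5]
```

Diagonalising the truncated matrix recovers the exact spectrum E_n = (n + 1/2)ħω, which is one selling point of the algebraic approach: no differential equation is ever solved.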

I would be hesitant about using Dirac notation throughout the second-year course. I think this is too abstract for many of our current students. They also need to learn and master basic ideas and techniques of wave mechanics: the particle in a box, the hydrogen atom, atomic orbitals, … and connecting theory to experiment... and order of magnitude estimates for quantum phenomena.
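The particle in a box is also a natural place to connect wave mechanics to numerics. A finite-difference sketch (my own illustration, with ħ = m = L = 1) reproduces the analytic energies E_n = n²π²/2:

```python
import numpy as np

# Particle in a box by finite differences, ħ = m = L = 1.
# Analytic energies: E_n = n² π² / 2.
M = 500                            # interior grid points
dx = 1.0 / (M + 1)
diag = np.full(M, 1.0 / dx**2)     # -(1/2) d²/dx² with Dirichlet (hard wall) BCs
off  = np.full(M - 1, -0.5 / dx**2)
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
E = np.sort(np.linalg.eigvalsh(H))

for n in (1, 2, 3):
    print(E[n - 1], n**2 * np.pi**2 / 2)   # numerical vs analytic
```

The same three-line Hamiltonian construction works for any one-dimensional potential, which makes it a useful classroom bridge between the differential-equation and linear-algebra viewpoints.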

What might be a good text to use?

Twenty years ago (wow!) I taught second (?) year quantum at UNSW. The text I used, by Sara McMurry, is very well written. I would still recommend it, as it has a good mix of experiment and theory, old and new topics, and wave and matrix mechanics….
It also had some nice computer simulations. But it is out of print, which really surprises and disappoints me.

Related to this, there is a discussion on a Caltech blog about what topics should be in undergraduate courses on modern physics. Currently, most "modern" physics courses actually cover few discoveries beyond about 1930! Thus, what topics should be added? To do this one has to cut out some topics. People may find the discussion interesting (or frustrating…). I disagree with most of the discussion, and even find it a little bizarre. Many of the comments seem to be from people pushing their own current research topic. For example, I know it is Caltech, but including the density matrix renormalisation group (DMRG) does seem a little advanced and specialised...
There is no discussion of one of the great triumphs of "modern" physics, biophysics! I actually think every undergraduate should take a course in it.

What do you cut out?
I actually think the more you cut the better, if the result is covering a few topics in greater depth, in a way that develops skills, creates a greater understanding of foundations, and so leads to a greater love of the subject and a desire and ability to learn more.
In teaching fourth-year condensed matter [roughly Ashcroft and Mermin] it is always a struggle to cut stuff out. Sometimes we don't even talk about semiconductor devices. This year I cut out transport theory and the Boltzmann equation so we could have more time for superconductivity. This is all debatable... But I hope the students learned enough that, if they need to, they have the background to easily learn these topics.

A key issue that will divide people concerns the ultimate goal of a physics undergraduate education. Here are three extreme views.

A. It should prepare people to do a PhD with the instructor.
Thus all the background knowledge needed should be covered, including the relevant specialised and advanced topics.

B. It should prepare people to do a physics PhD (usually in theory) at one of the best institutions in the world.
Thus, everyone should have a curriculum like Caltech.

C. It should give a general education that students will enjoy and will develop skills and knowledge that may be helpful when they become high school teachers or software engineers.

What about Academic Freedom?
This means different things to different people. In some ways I think that the teacher should have a lot of freedom to add and subtract topics, to pitch the course at the level they want, and to choose the text. I don't think department chairs or colleagues should be telling them what they "have" to do. Obviously, teachers need to listen to others and take their views into account, particularly if they are more experienced. But people should be given the freedom to make mistakes. There are risks. But I think they are worth taking in order to maintain faculty morale, foster creativity, maintain standards, and honour the important tradition of academic freedom. Furthermore, it is very important that faculty are not told by administrators, parents, or politicians what they should or should not be doing. Here, we should spare a thought for our colleagues in the humanities and social sciences, particularly in the USA, who are under increasing pressure to act in certain ways.

I welcome comments on any of the above.
My colleagues would particularly like to hear any text recommendations. Books by Griffiths, Shankar, Sakurai, and Townsend have been mentioned as possibilities.

Tuesday, May 16, 2017

A radical procedure for evaluating applicants: read one of their papers!

Like most faculty, I have to evaluate the scientific "performance" and "potential" of applicants for jobs, promotion, prizes, and grants. This is a difficult task because we are often asked to evaluate people we don't know, whose work in our (somewhat related) field we are unfamiliar with, or who work in completely different fields we know nothing about. For example, I have been on a committee where I had the ridiculous task of evaluating people in fields such as veterinary medicine, geography, and agriculture! This is one reason why metrics are so seductive and deceptive.

Here I want to focus on evaluating applicants who work in an area that is close enough to my own expertise, according to the following criterion: I can read one of their papers and make a reasonably informed assessment of its value, significance, and validity. This is important because it is easy to lose sight of the fact that there is only ONE measure that really matters: the ability of a person to produce valuable scientific knowledge. All the metrics, invited talks, grants, hype, slickly presented grant applications, enthralling presentations, .... are not what really matters. They are not research accomplishments. This measure can only be assessed from the actual content of papers.

I am embarrassed to admit that I am only now trying to work harder at ``practising what I preach.'' When I need to assess a credible application, I try to identify just one paper that the applicant identifies as significant, or, for a grant application, a paper that is central to the proposed project. I then look at this paper before going back to the (copious) paperwork of the submitted application. You might think that this takes more time. But actually it may save time, because it can be so decisive for my view (positive, neutral, or negative) that I have to read and agonise less over my assessment.

I have always done this when assessing applicants for postdocs to work directly with me, but have done it much less regularly in other situations.

Ideally, this should not be necessary. Rather, a good applicant would be someone whose papers we have already read because we wanted to. However, that is not the world we live in. 

Is this a reasonable approach? Do you do something like this? Any other recommended approaches?

Friday, May 12, 2017

How should you engage with Trump and Brexit supporters?

Trump's election win surprised many, including me. Most people underestimated the level of resentment towards "elites" and the "establishment", particularly among working class white voters. This was also a factor in Brexit.

This post is about how scientists and university faculty might engage more effectively with such groups. Some of the concerns I have are similar to those I have about Marching for Science.

Over the past six months I have read some articles and had some interesting conversations with people in both the USA and Australia who either voted for Trump or would have. What surprised and shocked me, even among some ``well educated'' people, was the level of distrust and resentment towards the "elites". [``You should not believe anything in the New York Times... Trump is telling the truth about crime statistics... We aren't safe... All these liberals had it coming....'']
I also found helpful the book, Hillbilly Elegy.

My wife is a US citizen, and one of her relatives in the USA, who works at a state university, sent us a popular newspaper article, written by a local faculty member, that aims to open a dialogue with a particular demographic that voted heavily for Trump. I largely agree with the author and hold some similar political and religious views. I particularly liked the sincerity of the attempt to open a dialogue on several specific issues. However, there were several aspects of the article that I thought illustrate the problem we face, rather than moving us towards a real dialogue. I have made the debatable decision not to link to the actual article, because of the negative comments I make about the author, and because I am worried that some of the content and issues discussed may distract readers from my points, which I think are part of broader challenges.

We are the elite.
The author claims they are not elite because they have a working class background and never studied at, or worked at, an Ivy League university.
I disagree. The author is a full Professor and has an annual salary of US$100 K [At that state university the salary of all employees is public and can be looked up in minutes.] In addition, they and their family may receive significant health care benefits and college tuition subsidies. They will keep their job until they choose to retire in their sixties (or even later) on a generous pension.
A survey found that the average American thinks that $122K per year makes someone "rich".
I would think the author (like me) is in the global one per cent. Furthermore, they have a job where they have fun teaching and researching subjects [in the social sciences and humanities] that many (but not me) would consider are "soft" or "left wing", and of no "economic" benefit.
Let me be clear, I think the remuneration, job security, and academic freedom of faculty is generally appropriate and important. We may not be Wall Street investment bankers or highly paid political consultants. But, we should acknowledge that we are part of the elite and privileged, regardless of our backgrounds.

Why should people trust us?
The author says that Trump voters should trust and respect the expertise of faculty on scientific, social,  economic and policy matters. "You don't seem to understand that our work goes through the extremely rigorous process of peer review."
Seriously!
In physical sciences, a lot of nonsense still gets published, especially in ``high impact'' journals. The social sciences and humanities are arguably even worse.
Hype from scientists, soft action by universities on scientific fraud (this New York Times article includes a photo of the relevant professor with his private art collection!), and excessive university administrator salaries are not helping create trust and respect.

Don't talk down to people who are less educated.
Although I felt the author tried hard to engage Trump supporters, I could not help but feel (perhaps unfairly) that they were talking down to their audience. ``We know what is right and what is best for you. You really aren't smart enough to understand...''
This problem can also occur when scientists interact with groups such as climate change ``skeptics'' and young earth creationists.
I know that at times I am not as patient or as diplomatic as I could be.


What do you think? Are these general concerns part of the problem?

Tuesday, May 9, 2017

Is this a reasonable exam?

I struggle to set good exam questions. One wants to test knowledge and understanding in a way that is realistic within the constraints of students' abilities and backgrounds.

I do not have a well-defined philosophy or approach, except for often recycling my old questions...
I think I do have a prejudice towards two goals.

A. Testing higher level skills [e.g. relating theory to experiment, putting things in context, ...] as much as specific technical knowledge [e.g. state Bloch's theorem or solve the Schrodinger equation for a charged particle in constant magnetic field].

B. Testing general and useful knowledge. For basic undergraduate courses [e.g. years 1 to 3] the question should be one that another faculty member could do, even if they have not taught the course. Sometimes colleagues write questions that I cannot do: you have to have done the problem before, e.g. in a tutorial. We then seem to be testing whether someone has taken this particular course, not "essential" knowledge.

However, I am not sure I really go anywhere near reaching these goals.
Here is a recent mid-semester exam I set for my solid state class of fourth-year undergraduates.
Is it reasonable?

How do you set exam questions?
Do you have a particular approach?

Friday, May 5, 2017

Talk on "crackpot" theories

At UQ there is a great student physics club, PAIN. Today they are having a session on "crackpot" theories in science. Rather than picking on sincere but misguided amateurs I thought I would have a go at "mainstream" scientists who should know better. Here are my slides on quantum biology.

A more detailed and serious talk is a colloquium that I gave six years ago. I regret that the skepticism I expressed then seems to have been justified.

Postscript.
I really enjoyed this session with the students. Several gave interesting and stimulating talks, covering topics such as the flat earth, Last Thursdayism, and The Final Theory of gravity [objects don't fall to the earth but rather the earth rises up to them...]. There were good discussions about falsifiability, Occam's razor, Newton's flaming laser sword, ...
There was an interesting mixture of history, philosophy, humour, and real physics.

I always find it encouraging to encounter students who are so excited about physics that they want to do something like this on a Friday night.

Wednesday, May 3, 2017

Computational density functional theory (DFT) in a nutshell

My recent post, Computational Quantum Chemistry in a nutshell, was quite popular. There are two distinct computational approaches: those based on calculating the wavefunction, which I described in that post, and those based on calculating the local charge density [the diagonal of the one-particle density matrix of the many-body system]. Here I describe the latter, which is based on density functional theory (DFT). Here are the steps and choices one makes.

First, as for wave-function based methods, one assumes the Born-Oppenheimer approximation, where the atomic nuclei are treated classically and the electrons quantum mechanically.

Next, one makes use of the famous (and profound) Hohenberg-Kohn theorem, which says that the total energy of the ground state of a many-body system is a unique functional of the local electronic charge density, E[n(r)]. This means that if one can calculate the local density n(r), one can calculate the total energy of the ground state of the system. Although this is an exact result, the problem is that one needs to know the exchange-correlation part of the functional, and one does not. One has to approximate it.

The next step is to choose a particular exchange-correlation functional. The simplest is the local density approximation [LDA], where one writes E_xc[n] = ∫ n(r) ε_xc(n(r)) dr, where ε_xc(x) is the exchange-correlation energy per particle of a uniform electron gas with constant density x. Kohn and Sham showed that if one minimises the total energy as a functional of n(r), one ends up with a set of eigenvalue equations for some functions phi_i(r) which have the identical mathematical structure to the Schrodinger equation for the molecular orbitals that one calculates in a wave-function based approach with the Hartree-Fock approximation. However, it should be stressed that the phi_i(r) are just a mathematical convenience and are not wave functions. The similarity to the Hartree-Fock equations means the problem is not just computationally tractable but also relatively cheap.
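To make the structure of the self-consistent Kohn-Sham cycle concrete, here is a toy sketch of my own (not taken from any real DFT code): one "electron" on a 1D grid, where the invented effective potential V(x) = x²/2 + g·n(x) stands in for the Hartree and exchange-correlation terms of a real calculation.

```python
import numpy as np

# Toy Kohn-Sham-style self-consistent loop on a 1D grid.
# The density-dependent potential V = x²/2 + g*n is an invented stand-in
# for the real Hartree + exchange-correlation terms (e.g. LDA).
M, L, g = 200, 10.0, 1.0
x = np.linspace(-L / 2, L / 2, M)
dx = x[1] - x[0]
T = (np.diag(np.full(M, 1.0 / dx**2))
     + np.diag(np.full(M - 1, -0.5 / dx**2), 1)
     + np.diag(np.full(M - 1, -0.5 / dx**2), -1))   # kinetic energy operator

n = np.full(M, 1.0 / L)                  # initial density guess (uniform)
for it in range(100):
    V = 0.5 * x**2 + g * n               # effective potential depends on n
    eps, phi = np.linalg.eigh(T + np.diag(V))
    psi = phi[:, 0] / np.sqrt(dx)        # normalised lowest orbital
    n_new = psi**2                       # new density from the orbital
    if np.max(np.abs(n_new - n)) < 1e-8: # self-consistency reached
        break
    n = 0.5 * n + 0.5 * n_new            # linear mixing for stability

print(eps[0])    # lowest "Kohn-Sham" eigenvalue at self-consistency
```

The essential features carry over to real codes: an eigenvalue problem whose potential depends on the density it produces, solved by iteration with mixing; the choices that matter physically are the functional and the basis, discussed next.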

When one solves the Kohn-Sham equations on the computer one has to choose a finite basis set. Often they are similar to the atomic-centred basis sets used in wave-function based calculations. For crystals, one sometimes uses plane waves. Generally, the bigger and the more sophisticated and chemically appropriate the basis set, the better the results.

With the above uncontrolled approximations, one might not necessarily expect to get anything that approximates reality (i.e. experiment). Nevertheless, I would say the results are often surprisingly good. If you pick a random molecule, LDA can give a reasonable answer (say within 20 per cent) for the geometry, bond lengths, heats of formation, and vibrational frequencies... However, it does have spectacular failures, both qualitative and quantitative, for many systems, particularly those involving strong electron correlations.

Over the past two decades, there have been two significant improvements to LDA.
First, the generalised gradient approximation (GGA) which has an exchange-correlation functional that allows for the spatial variations in the density that are neglected in LDA.
Second, hybrid functionals (such as B3LYP) which contain a linear combination of the Hartree-Fock exchange functional and other functionals that have been parametrised to increase agreement with experimental properties.
It should be stressed that this means that the calculation is no longer ab initio, i.e. one where you start from just Schrodinger's equation and Coulomb's law and attempt to calculate properties.

It should be stressed that for interesting systems the results can depend significantly on the choice of exchange-correlation functional. Thus, it is important to calculate results for a range of functionals and basis sets, and not just report results that are close to experiment.

DFT-based calculations have the significant advantage over wave-function based approaches that they are computationally cheaper (and so are widely used). However, they cannot be systematically improved [the dream of Jacob's ladder is more like a nightmare], and they become problematic for charge-transfer states and the description of excited states.

Thursday, April 27, 2017

Is it an Unidentified Superconducting Object (USO)?

If you look on the arXiv and in Nature journals there is a continuing stream of people claiming to observe superconductivity in some new material.
There is a long history of this and it is worth considering the wise observations of Robert Cava, back in 1997, contained in a tutorial lecture.
It would have been useful indeed in the early days of the field [cuprate superconductors] to have set up a "commission" to set some minimum standard of data quality and reproducibility for reporting new superconductors. An almost countless number of "false alarms" have been reported in the past decade, some truly spectacular. Koichi Kitazawa from the University of Tokyo coined these reports "USOs", for Unidentified Superconducting Objects, in a clever cross-cultural double entendre likening them to UFOs (Unidentified Flying Objects, which certainly are their equivalent in many ways) and to "lies" in the Japanese translation of USO. 
These have caused great excitement on occasion, but more often distress. It is important, however, to keep in mind what a report of superconductivity at 130K in a ceramic material two decades ago might have looked like to rational people if it came out of the blue sky with no precedent. That having been said, it is true that all the reports of superconductivity in new materials which were later confirmed to be true did conform to some minimum standard of reproducibility and data quality. I have tried to keep up with which of the reports have turned out to be true and which haven't. 
There have been two common problems: 
1. Experimental error - due, generally, to inexperienced investigators unfamiliar with measurement methods or with what is required to show that a material is superconducting. This has become rarer as the field matures. 
[n.b. you really need to observe both zero resistivity and the Meissner effect].
2. "New" superconductors are claimed in chemical systems already known to have superconductors containing some subset of the components. This is common even now, and can be difficult for even experienced researchers to avoid. The previously known superconductor is present in small proportions, sometimes in lower Tc form due to impurities added by the experimentalist trying to make a new compound. In a particularly nasty variation on this, sometimes extra components not intentionally added are present - such as Al from crucibles or CO2 from exposure to air some time during the processing. I wish I had a dollar for every false report of superconductivity in a Nb containing oxide where the authors had unintentionally made NbN in small proportions.
There is also an interesting article about the Schon scandal, where Paul Grant claims
During my research career in the field of superconducting materials, I have documented many cases of an 'unidentified superconducting object' (USO), only one of which originated from an industrial laboratory, eventually landing in Physical Review Letters. But USOs have had origins in many universities and government laboratories. Given my rather strong view of the intrinsic checks and balances inherent in industrial research, the misconduct that managed to escape notice at Bell Labs is even more singular.