Showing posts with label colloquium. Show all posts

Tuesday, October 22, 2024

Colloquium on 2024 Nobel Prizes


This Friday I am giving a colloquium for the UQ Physics department.

2024 Nobel Prizes in Physics and Chemistry: from biological physics to artificial intelligence and back

The 2024 Nobel Prize in Physics was awarded to John Hopfield and Geoffrey Hinton “for foundational discoveries and inventions that enable machine learning with artificial neural networks.” Half of the 2024 Chemistry prize was awarded to Demis Hassabis and John Jumper for “protein structure prediction” using artificial intelligence. I will describe the physics background needed to appreciate the significance of the awardees' work.

Hopfield proposed a simple theoretical model for how networks of neurons in a brain can store and recall memories. He drew on his background in condensed matter physics and ideas from it, including the theory of spin glasses, the subject of the 2021 Physics Nobel Prize.
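For the curious, the essence of Hopfield's associative memory can be sketched in a few lines of Python. This is a minimal toy, not the original implementation: the patterns are random and made up, and the weights use the standard Hebbian outer-product rule.

```python
import numpy as np

# Minimal Hopfield network: store binary (+1/-1) patterns in a weight
# matrix via the Hebbian outer-product rule, then recall a stored
# memory from a corrupted version by repeated sign updates.

rng = np.random.default_rng(0)
N = 100                                        # number of "neurons"
patterns = rng.choice([-1, 1], size=(3, N))    # three random memories

# Hebbian learning: W_ij ~ sum over patterns of x_i x_j, no self-coupling
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)

def recall(state, steps=10):
    """Iterate s -> sign(W s); the dynamics descends an Ising-like
    energy E = -(1/2) s.W.s, so stored patterns act as attractors."""
    for _ in range(steps):
        state = np.sign(W @ state)
    return state

# Corrupt 10% of the first memory, then let the network relax.
noisy = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
noisy[flip] *= -1

recovered = recall(noisy)
overlap = np.mean(recovered == patterns[0])
print(overlap)   # fraction of bits recovered; close to 1 for few stored patterns
```

The connection to spin glasses is that the update rule minimises an Ising-model energy in which the stored memories are local minima.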

Hinton, a computer scientist, generalised Hopfield’s model, using ideas from statistical physics to propose a “Boltzmann machine” that used an artificial neural network to learn to identify patterns in data, by being trained on a finite set of examples. 

For fifty years scientists have struggled with the following challenge in biochemistry: given the unique sequence of amino acids that make up a particular protein, can the native structure of the protein be predicted? Hassabis, a computer scientist, and Jumper, a theoretical chemist, used AI methods to solve this problem, highlighting the power of AI in scientific research.

I will briefly consider some issues these awards raise, including the blurring of boundaries between scientific disciplines, tensions between public and corporate interests, research driven by curiosity versus technological advance, and the limits of AI in scientific research.

Here is my current draft of the slides.

Thursday, September 28, 2023

Gravitational waves and ultra-condensed matter physics

In 2016, when I saw the first results from the LIGO gravitational wave interferometer, my natural caution and skepticism kicked in. They had just observed one signal in an incredibly sensitive measurement. A lot of data analysis was required to extract the signal from the background noise. That signal was then fitted to the results of numerical simulations of the solutions to Einstein's gravitational field equations describing the merger of two black holes. Depending on how you count, about 15 parameters are required to specify the binary system [distance from Earth, masses, relative orientations of the orbits, ...]. The detection events involve displacements of the mirrors in the interferometer of less than a thousandth of the diameter of a proton!

What on earth could go wrong?!

After all, this was only two years after the BICEP2 fiasco, which claimed to have detected a polarization signature in the cosmic microwave background due to gravitational waves associated with cosmic inflation. The observed signal turned out to be just cosmic dust! It led to a book by the cosmologist Brian Keating, Losing the Nobel Prize: A Story of Cosmology, Ambition, and the Perils of Science's Highest Honor.

Well, I am happy to be wrong, if it is good for science. Now almost one hundred gravitational wave events have been observed, and one event, GW170817, has been correlated with electromagnetic observations, including gamma-ray and X-ray signals.

But detecting some gravitational waves is quite a long way from gravitational wave astronomy, i.e., using gravitational wave detectors as a telescope, in the same sense as the regular suite of optical, radio, X-ray, ... detectors. I was also skeptical about that. But it now does seem that gravitational wave detectors are providing a new window into the universe.

A few weeks ago I heard a very nice UQ colloquium by Paul Lasky, What's next in gravitational wave astronomy?

Paul gave a nice overview of the state of the field, both past and future. 

A key summary figure is below. It shows different possible futures when two neutron stars merge.

The figure is taken from the helpful review

The evolution of binary neutron star post-merger remnants: a review, Nikhil Sarin and Paul D. Lasky

Here are a few of the things that stood out to me.

1. One stunning piece of physics is that in the black hole mergers that have been observed, the mass of the resulting black hole is about three solar masses less than the total mass of the two separate black holes. The corresponding mass energy (E=mc^2) of three solar masses is converted into gravitational wave energy within a fraction of a second. During this time the peak radiant power was more than fifty times the power of all the stars in the observable universe combined!
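These numbers are worth checking on the back of an envelope. The following sketch uses assumed values (solar mass, a rough 0.2 s merger duration, and an order-of-magnitude guess for the total starlight of the observable universe), not figures from the talk:

```python
# Back-of-envelope check of the black hole merger energetics.
# All numbers are assumed for illustration.

M_sun = 1.989e30        # kg, solar mass
c = 2.998e8             # m/s, speed of light

E = 3 * M_sun * c**2    # energy radiated, E = mc^2, for 3 solar masses
print(f"Energy radiated: {E:.2e} J")           # ~5e47 J

duration = 0.2          # s, rough duration of the final merger burst
print(f"Average power:   {E / duration:.2e} W")  # ~3e48 W

# For comparison, a crude order-of-magnitude estimate of the combined
# luminosity of all stars in the observable universe is ~1e48 W
# (the value depends strongly on assumed star counts). The published
# *peak* gravitational-wave luminosity of GW150914 was higher still,
# so the merger briefly outshone, in gravitational waves, all the
# starlight in the observable universe.
```

Even the time-averaged power over the burst is comparable to all starlight combined, which makes the factor of fifty at peak plausible.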

I have fundamental questions about a clear physical description of this energy conversion process. First, defining "energy" in general relativity is a vexed and unresolved question with a long history. Second, is there any sense in which one needs to describe this in terms of a quantum field theory: specifically, the conversion of neutron matter into gravitons?

2. Probing nuclear astrophysics in neutron stars. It may be possible to test the equation of state (the relation between pressure and density) of nuclear matter. This determines the Tolman–Oppenheimer–Volkoff limit: the upper bound on the mass of cold, non-rotating neutron stars. According to Sarin and Lasky:

The supramassive neutron star observations again provide a tantalising way of developing our understanding of the dynamics of the nascent neutron star and the equation of state of nuclear matter (e.g., [37,121,127–131]). The procedure is straight forward: if we understand the progenitor mass distribution (which we do not), as well as the dominant spin down mechanism (we do not understand that either), and the spin-down rate/braking index (not really), then we can rearrange the set of equations governing the system’s evolution to find that the time of collapse is a function of the unknown maximum neutron star mass, which we can therefore infer. This procedure has been performed a number of times in different works, each arriving at different answers depending on the underlying assumptions at each of the step. The vanilla assumptions of dipole vacuum spin down of hadronic stars does not well fit the data [37,127], leading some authors to infer that quark stars, rather than hadronic stars, best explain the data (e.g., [129,130]), while others infer that gravitational radiation dominates the star’s angular momentum loss rather than magnetic dipole radiation (e.g [121,127]).

As the authors say, this is a "tantalising prospect" but there are many unknowns. I appreciate their honesty.

3. Probing the phase diagram of Quantum Chromodynamics (QCD)

This is one of my favourite phase diagrams and I used to love to show it to undergraduates.


Neutron stars are close to the first-order phase transition associated with quark deconfinement.

When the neutron stars merge it may be that the phase boundary is crossed.

Tuesday, April 24, 2018

What needs to be said about mental health issues in universities?

On Friday I am giving the UQ Physics Department colloquium on mental health issues for scientists. The talk may be similar to one I gave a few years ago.

I will update my talk incorporating some recent reading and the articles below.

A recent Editorial in Nature declared
Time to talk about why so many postgrads have poor mental health 
An outpouring on Twitter highlights the acute pressures on young scientists.

[I thank Tanglaw Roman for bringing the editorial to my attention. I never look at luxury journals unless someone refers me to a specific article.]

The Editorial was in response to the Twitter response to an article in a baby Nature
Evidence for a mental health crisis in graduate education

Poisonous science: the dark side of the lab 
The bullying and subsequent suicide of a talented Ivy League scientist exposes ugly truths about the cruelty and dysfunction at the heart of academic science

Mindfulness won't fix bad management
It also conveniently shifts the burden of wellbeing from the employer causing stress to the employee trying to deal with it. Worse, it allows what you might call "well-washing": employers who cloak themselves in a veneer of caring for their workers while hurting them with bad management practices. 
Five tips to get a good night's sleep

However, I would like some feedback and suggestions from readers.

What do you think needs to be said?

Update. The colloquium was postponed to avoid a scheduling conflict and to make it accessible to a broader audience. Thus, there is still time to send in your suggestions.

Thursday, July 6, 2017

Are theoretical physics and chemistry amenable to online collaboration?

Last week at UQ we had a very nice mathematics colloquium, "Crowdsourcing mathematics problems" by Timothy Gowers.
He first talked about the Polymath project, including its successes and marginal contributions.
He then talked about a specific example of a project currently underway on his own blog, concerning transitive dice. This was pretty accessible to the general audience.

In this approach an important, well-defined problem is posed on a blog, and then anyone is free to contribute to the discussion and solution. A strength of this approach is that it makes use of the complementary strengths, experience, and expertise of the participants. Specifically, solving problems involves:
  • selecting a problem that is important, interesting, and potentially ripe for solution
  • defining the problem clearly
  • breaking the problem down into smaller parts including conjectures
  • sketching a possible heuristic argument for the truth of the conjecture
  • giving a rigorous proof of the conjecture
  • finding possible counter-examples to the conjecture
  • connecting the problem to other areas of mathematics
This can be efficient because dead ends and mistakes are found more quickly than by someone working in isolation.
People are more motivated and engaged because they are excited to be working on something bigger than themselves and what they might normally tackle. And they enjoy the community.
What about assigning credit in such group work? There is a clear public record of who has contributed what. Obviously, this does not work for bean counters looking at traditional metrics.
This approach mostly attracts senior people who are secure in themselves and their career stage and more interested in solving problems than getting individual credit.

The cultural difference between pure mathematics and physics was striking. The talk was given on whiteboards and blackboards without notes. No PowerPoint! The choice of research problems was based purely on curiosity, not on any potential practical value or the latest fashion. It is fascinating and encouraging that the pure mathematics community is still "old school", with the focus on quality not quantity.

Aside: Gowers is also well known for initiating a boycott of Elsevier journals.

Now, my question. 
What is stopping theoretical chemistry and physics from such a "crowd sourcing" approach? 
Is it that the problems are not amenable? 
Or is it largely that we are too driven by a system that is fixated on individual credit?

Monday, March 7, 2016

Am I too black and white?

I don't like people who are very black and white about things. Life, science, teaching, and politics are complicated. There are shades of grey. Simplistic analysis leads to simplistic "solutions". Hopefully I often bring that out in this blog, in posts such as a political metaphor for the correlated electron community.

Yet there is an issue where I am quite black and white and averse to technicolour solutions (the rainbow coalition?!): PowerPoint slides.
Recently I sat through a talk where the speaker's slides had every dot point in a different colour.
I really find this hard to read and distracting. 
Yet this is not unusual.

What is wrong with plain old black and white and traditional fonts?
Below is a random choice of one of my slides. It is not very creative or glamorous but it is easy on the eyes and brain.
Am I alone in my aversion to fancy colours, fonts, and backgrounds?

Friday, November 27, 2015

I believe in irreproducible results

At UQ we just had an interesting colloquium from Signe Riemer-Sorensen about Dark matter emission - seeing the invisible. Central to the talk was the data below. Focus on the red data around 3.6 keV.


This has stimulated more than 100 theory papers!
This reminds me of the faster-than-light neutrinos, the 17 keV neutrino, the 500 GeV particles seen by the Fermi gamma-ray telescope, the BICEP2 "evidence" for cosmic inflation, ....

The above data is discussed in detail here.

I don't want to just pick on my astrophysics and high energy physics colleagues as this happens in condensed matter and chemistry too... remember cold fusion... think about periodic reports of room temperature superconductors!

The painful reality is that cutting edge science is hard. One can be incredibly careful about noise, subtracting background signals, statistical analysis, sample preparation, .... but in the end there is Murphy's law .... things do go wrong .... and crap happens...

Skepticism and caution should always be the default reaction; all the more so the greater the possible significance or surprise of the "observed" result.

I believe in irreproducible results.

Update (14 December).
Clifford Taubes brought to my attention two relevant papers on the possible 3.5 keV line. The first paper rules out a dark matter origin of the line and even mentions Occam's razor. The second has a mundane alternative explanation of the line in terms of charge exchange between hydrogen gas and sulfur ions.

Friday, March 20, 2015

Physicists are bosons; mathematicians are fermions

The first observation is that each mathematician is a special case, and in general mathematicians tend to behave like “fermions” i.e. avoid working in areas which are too trendy whereas physicists behave a lot more like “bosons” which coalesce in large packs and are often “over-selling” their doings, an attitude which mathematicians despise.
Alain Connes, Advice to beginning mathematicians

I learnt this quote today, courtesy of Elena Ostrovskaya, who gave today's Physics Colloquium at UQ.

Friday, May 9, 2014

Colloquium on Emergent states of quantum matter

Here are the slides for the talk I am giving today at the UQ Physics colloquium.
I will show the video Quantum levitation, and discuss what is and isn't quantum about it.

A good discussion of some of the issues raised is Laughlin and Pines article The Theory of Everything. A more extensive and introductory discussion by Pines is at Physics for the 21st Century.


Postscript.
Based on comments and questions afterwards, particularly from some undergraduates, there are a few things I would do slightly differently.

I should have said what a Hamiltonian is: a function that defines the energy of a system in terms of the system variables, e.g., the positions and momenta of all of the particles.
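A toy illustration may help: the Hamiltonian of a mass on a spring, written as a short Python function (the mass and spring constant are made-up numbers).

```python
# Toy illustration of a Hamiltonian: the energy of a mass on a spring
# as a function of the system variables, position x and momentum p.
# The parameter values are made up for illustration.

def hamiltonian(x, p, m=1.0, k=4.0):
    """H = kinetic + potential = p^2/(2m) + k x^2/2."""
    return p**2 / (2 * m) + k * x**2 / 2

# Hamilton's equations then give the dynamics:
#   dx/dt = dH/dp = p/m,    dp/dt = -dH/dx = -k x
print(hamiltonian(x=0.5, p=2.0))   # 2.0 (kinetic) + 0.5 (potential) = 2.5
```

The same idea scales up: for a material, the Hamiltonian is a function of the positions and momenta of all the electrons and nuclei.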

The stratification of reality shown by my boxes is a simplification for schematic purposes. There is no clearly defined boundary between strata. For example, at the boundary between chemistry and physics one has chemical physics and physical chemistry. The boundary between biology and biochemistry is blurred. On the other hand, anatomy is qualitatively different from enzyme mechanisms. Acid-base equilibria are chemistry, not physics.

Ben Powell emphasized to me that the claim that "superconductors exhibit broken U(1) gauge symmetry" is problematic and subtle. There is a long detailed paper, Superconductors are topologically ordered that I have read several times but don't really understand.

Wednesday, April 30, 2014

Draft of Colloquium on Emergent States of Quantum Matter

Next week I am giving the Physics Department Colloquium at UQ. I am working hard at trying to follow David Mermin's advice and make it appropriately basic and interesting. I am tired of listening to too many colloquia that are more like specialist research seminars.

I would welcome any feedback on what I have done so far. Here is the draft of the abstract.
Emergent states of quantum matter 
When a system is composed of many interacting components, new properties can emerge that are qualitatively different from the properties of the individual components. Such emergent phenomena lead to a stratification of reality and of scientific disciplines.
Emergence can be particularly striking and challenging to understand for quantum matter, which is composed of macroscopic numbers of particles that obey quantum statistics. Examples include superfluidity, superconductivity, and the fractional quantum Hall effect. I will introduce some of the organising principles for describing such phenomena: quasi-particles, spontaneously broken symmetry, and effective Hamiltonians. I will briefly describe how these ideas undergird some of my own research on complex molecular materials such as organic charge transfer salts, fluorescent proteins, and hydrogen bonded complexes. The interplay of emergence and reductionism raises issues in philosophy, as well as questions about the best scientific strategy for describing complex systems.

Here is a very preliminary version of the slides. [Latest version is here].

Let me know of any ways to make any of this clearer and more interesting.

Wednesday, June 5, 2013

The key ingredient of a good colloquium?

I still remember the main idea of David Mermin's What is wrong with those talks?
It was published as a Reference Frame in Physics Today in 1992. Mermin says:
Your only goal must be to furnish ordinary physicists with some modest glimpse of what sustains your own interest in your subject.
This past week this idea really shaped the preparation of my quantum science seminar. My main goal was to give the context for my latest paper, rather than talk about the contents of the paper.

I think Mermin's comments are particularly pertinent to colloquia. But I feel he is a bit too pessimistic about seminars and conference talks for experts.

If you read the article, let me know: do you think Mermin is too pessimistic, or is he realistic? Has PowerPoint made the problem greater or less than 20 years ago?

Thursday, April 11, 2013

Learning how bees navigate and make smooth landings

Last Friday I attended a very nice physics colloquium by Mandyam Srinivasan on "Honeybees as a Model for the Study of Visually Guided Flight, Navigation, and Biologically Inspired Robotics"

Here are two fascinating things that really stood out to me. The picture below [taken from this review] gives a schematic of the experiment showing that image flow is the key property that bees balance to navigate obstacles.
Building on this led to insights as to how bees make smooth landings.
Analysis of the landing trajectories revealed that the flight speed of the bee decreases steadily as she approaches the surface. In fact, both the forward speed and the descent speed are approximately proportional to the height above the surface (Fig. 8), indicating that the bee is holding the angular velocity of the image of the surface approximately constant as the surface is approached. This strategy automatically ensures that both the forward and the descent speeds are close to zero at touchdown. Thus a smooth landing is achieved by an elegant and surprisingly simple process that does not require explicit knowledge of the bee's instantaneous speed or height (250).
This analysis leads to two linear first-order differential equations which can be solved to predict that the bee's height decreases exponentially with time. This is indeed observed to be the case. [A colleague commented on how the agreement between experiment and theory was much more impressive than in average biophysics research].
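The landing model can be sketched numerically in a few lines. This is a minimal version with made-up constants: if the bee holds the angular velocity omega = v/h of the ground image constant, and its descent speed is proportional to its height, then the height obeys dh/dt = -r h and decays exponentially, so both speeds vanish smoothly at touchdown.

```python
import math

# Minimal sketch of the constant-image-angular-velocity landing model.
# The constants r, omega, and h0 are assumed, not from the talk.

r = 0.8        # 1/s, descent-rate constant (assumed)
omega = 2.0    # rad/s, image angular velocity held constant by the bee (assumed)
h0 = 1.0       # m, initial height

# Integrate dh/dt = -r*h with simple Euler steps.
dt = 1e-3
h, t = h0, 0.0
while t < 3.0:
    h += -r * h * dt
    t += dt

exact = h0 * math.exp(-r * 3.0)
print(h, exact)            # numerical solution tracks h0 * exp(-r*t)

v_forward = omega * h      # forward speed is proportional to height,
                           # so it also goes to zero at touchdown
```

The key point the code makes concrete is that the control law needs no explicit knowledge of speed or height, only the image flow.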

Srinivasan then went on to describe how some of these ideas are being implemented in algorithms for automated flight.
He stressed that in his view one should not follow a biomimetic strategy but rather a bio-principled one, i.e., finding what principles are used in nature [e.g., constant image flow] and using appropriately adapted implementations in artificial flight.

Saturday, September 8, 2012

Is this test of the Standard Model impressive?

A week ago we had an interesting physics colloquium, The Higgs Boson at the Large Hadron Collider, by Elisabetta Barberio, who works in the ATLAS detector collaboration.

It was nice to hear a talk about the Higgs boson which simply discussed the physics and the actual experimental results, without any hype.

To me one of the most interesting and impressive figures shown in the talk actually had nothing to do with the Higgs! I found it here on the CERN website.

It shows the measured cross sections for the production of different particle products (horizontal scale).
Note the vertical scale varies by four orders of magnitude.
Basically, it shows that the results from the ATLAS detector are consistent with the Standard Model.

One thing I would like to know is how many independent parameters (coupling constants) are involved in determining these cross sections from the Standard Model?
Is the agreement between experiment and theory impressive? Or are there so many parameters in the Standard Model that this plot is really just saying what those parameters are? But, all the cross sections can't be independent of one another.

Thursday, June 2, 2011

Colloquium on quantum biology

Here are my slides for the UQ Physics Colloquium. I have tried to build it around 5 key ideas:

1. Truly quantum dynamics requires phase coherence.

2. We can quantify quantum decoherence of excited states of optically active biomolecules. (Decoherence arises due to dielectric relaxation of the surrounding protein and water.)

3. Quantum dynamics is determined by competition between the system timescale [which creates the superposition state] and the timescales of the environment.

4. There are significant scientific reasons to be skeptical about these claims of “quantum biology”.

5. The real scientific challenges for understanding are defining and solving realistic effective Hamiltonians for specific functional processes.

The key reference is a 2008 Review article I wrote with Joel Gilmore.

Overall I feel there is too much material, some of it is too technical, ...
I welcome feedback.

Saturday, May 28, 2011

I cannot deny this

This week the UQ Physics Colloquium was given by John Cook on Communicating Climate Science and Countering Disinformation. He is a UQ physics graduate and now writes an influential blog, Skeptical Science, which aims to present peer-reviewed climate science in an accessible fashion that answers climate change skeptics. He has also developed a popular phone application which answers 10 commonly used arguments of climate change skeptics. He recently published a book, Climate Change Denial.

Here is my summary of some of the main points.

97 out of 100 climate scientists believe that humans are causing carbon dioxide levels to rise.
Why?  There are many different lines of evidence.
In contrast, only 58% of the general public believe this. This is because the mainstream media gives the impression of a 50/50 debate.

Several references were made to the book Merchants of Doubt by Naomi Oreskes and Erik Conway, which documents how vested financial interests have funded disinformation campaigns to undermine public debate about issues on which the scientific evidence is clear.

John summarised a paper, Denialism: what is it and how should scientists respond?,
which argued that there are 5 characteristics common to other forms of denialism as well, e.g., denial of the dangers of cigarette smoking, denial that HIV causes AIDS, and young earth creationism.
[I think this also applies to some proponents of quantum biology!]

1. Cherry picking of data
e.g., glacier mass balance: there are a few glaciers that are indeed growing, but the vast majority are shrinking.
Another example is the claim that human carbon dioxide emissions are only a small fraction of total emissions from the planet. This ignores the fact that natural processes balance out absorption and emission.

2. Promoting the views of fake experts
e.g. The petition project - 31,000 "scientists" have signed it.

3. Impossible expectations
Always demanding more evidence and complete certainty.

4. Logical fallacies
e.g., "climate change has happened before"

5. Conspiracy theories
e.g., Climategate. But eight independent investigations have found no evidence of conspiracy.

In contrast to denialism, genuine skepticism considers all the evidence and weighs it. I would add a sixth common feature of denialism: a lack of humility to acknowledge that their lack of relevant scientific expertise, experience, and knowledge may just possibly mean that their opinion is not valid.

John answered one question I have had for a while. Why do weather fluctuations increase with increasing global temperatures?
This is due to the water cycle: higher temperatures lead to more evaporation, more drought, and more water in the atmosphere.

There is one complex and subtle issue which was not addressed and I do not understand. That is the views of and role played by distinguished physicists such as Freeman Dyson, Bob Austin, William Happer, and Bob Laughlin. They are not climate scientists but on some level are "skeptics". Indeed, some of them unsuccessfully petitioned the American Physical Society to change its policy on climate change. Why do they believe what they do? Perhaps it is just a mixture of physics hubris and political sympathies...

Overall, I thought this was a great colloquium. It generated a lot of good questions and discussions.

Monday, May 23, 2011

Writing effective talk abstracts

This is not easy. It is worth the effort. You want to convince people to attend your talk: it will be interesting, important and understandable.

Things to do:
  • tailor your abstract to your audience (e.g. their backgrounds and interests)
  • explain why the topic is interesting
  • use words sparingly
  • find a balance between being generic and too technical
  • include some key scientific ideas you will discuss
  • include a brief statement of your main results
  • include a few relevant references the really keen may want to look at
  • re-write it several times (especially if you are inexperienced)
Things to avoid
  • superfluous phrases such as "In this talk I will..." and "I will end with some conclusions".
  • using acronyms: nLAs, DFT, LDA+DMFT, NMR, ENDOR etc. 
  • hype

But, it is always easier to tell people what to do than to actually do it yourself! So I offer up for critique my latest abstract: for the UQ Physics Colloquium next Friday.

Quantifying the limited role of quantum dynamics in biomolecular function

Quantum effects such as coherence, interference, tunnelling, and entanglement are well established at the level of atoms and small molecules in a vacuum. However, could quantum "weirdness" occur in large biomolecules which are in thermal equilibrium with water at room temperature? I will show how it is possible to give a quantitative description of quantum decoherence for the excited states of optically active biomolecules in a native environment [1].
A key to a quantum description of biomolecules is the multiple time, energy, and length scales associated with the dielectric relaxation of proteins and water. In most cases it appears that quantum coherence requires time scales less than 1 picosecond and distances less than 10 nm.
This work provides a framework to critically evaluate the claims of some physicists (and New Age pop psychology gurus) that quantum effects are important in biology. Such speculations have increased in the past few years, stimulated by some experimental results concerning photosynthetic systems.
I contend that most claims of "quantum biology" are based on wishful thinking and involve
(i) debatable data analysis with an excessive reliance on curve fitting, 
(ii) a misunderstanding of what biological evolution implies about efficiency, and/or 
(iii) speculations which go far beyond what the experimental data may imply.
Nevertheless, there is a lot of interesting physics involved in understanding the role of quantum decoherence in biomolecular function at sub-nanosecond timescales.
One key challenge is defining an effective Hamiltonian for a specific class of processes. This must strike a balance between the levels of detail normally used in physics, chemistry, and biology. I will briefly illustrate this with recent work on Green Fluorescent Proteins [2]. A second challenge is solving the quantum dynamics in realistic parameter regimes where there is no simple separation of time scales.

[1] J. Gilmore and R.H. McKenzie, J. Phys. Chem. A 112, 2162 (2008).
[2] S. Olsen and R.H. McKenzie, J. Chem. Phys. 130, 184302 (2009).

Wednesday, May 11, 2011

A string of anecdotes does not make an argument

Previously I have posted that one key to giving a good talk is to never offer undefendable ground. i.e., don't make dubious claims that will distract your audience from your main point and undermine your credibility.

On Friday at UQ, David Jamieson, Head of the School of Physics at the University of Melbourne gave a colloquium Physics, Power, and Climate change.

I found the colloquium rather disappointing and frustrating because he made a number of debatable claims. But, perhaps I mis-heard or mis-understood him. I welcome others to clarify or correct me.

1. He began by briefly promoting his own research saying "it is difficult to imagine future technologies if you are not working in our centre", referring to the ARC Centre for Quantum Computing and Communication Technologies, which works on the Quantum Internet.

2. Chemists may find strange the claim that quantum computing was essential to understanding how caffeine works.

3. People in the majority world need access to electricity so they can get on Facebook!
Actually, I thought more people in India have access to mobile phones than to clean water!

4. It was claimed that arguments for the widescale adoption of solar power were flawed "because they ignored the second law of thermodynamics", i.e. the problem of the low efficiency of solar cells. I felt this was a cheap shot, since I have never heard anyone who advocates photovoltaic cells claim they were anything more than 10-20% efficient. The internal combustion engine and coal-powered stations also suffer from the second law. I believe the efficiency of the latter is about 20-40%, but we are quite comfortable with that.

The climax of the talk was the idea that power from nuclear fission is the only viable option for responding to climate change.

5. He stressed how dangerous hydroelectric power is, citing an accident in Russia which killed 75 people. This was compared to the Three Mile Island nuclear accident, which killed no-one. Furthermore, it was claimed that there have been no major nuclear accidents since then.

The colloquium normally ends by 5pm but Jamieson was still speaking at 5:20pm and so I left. [Never go overtime. To me it just communicates either arrogance, rudeness, or dis-organisation]. But, perhaps I missed some important qualifiers in the conclusions and question time.

He began to discuss pathologies of argument concerning climate change and alternative energy. It seemed to me that Jamieson adopted some of these same pathologies himself, e.g., arguing by anecdote and by analogy, and gaining emotional sympathy by tapping into people's frustration about biased and unbalanced media coverage.

The colloquium was entertaining and thought provoking. I learnt a few things I did not know. But, to me the debatable claims above created too many credibility problems for me to be convinced of the conclusions.

Some might claim that I did not like the colloquium because I did not like its conclusions. However, I am actually quite open to the possibly painful conclusion that the future energy mix must include a nuclear fission component.

Much better talks on this subject are those given by David MacKay (Cambridge) and by Nate Lewis (Caltech) which you can watch online. To me they are thorough, balanced, and scholarly.

This Friday Paul Meredith from UQ will present a colloquium, Sun, Power, and Energy: Opportunities and Perspectives for a Solar Powered Australia. Unfortunately, I will miss it as I will be in Sydney.
