Consider scientific productivity as a problem in economics. One has a limited amount of resources (time, money, energy, political capital) and one wants to maximise scientific output. Here I want to stress that the real output is scientific understanding. This is not the same as the number of papers, grants, citations, conferences, newspaper articles, ...
The limited amount of resources is relevant at all scales: individual, research group, fields of research, departments, institutions, funding agencies, ...
As time passes one has to face the problem of diminishing returns: each additional unit of resources yields less additional understanding. Consider the following diverse set of situations.
Adding extra parameters to a theoretical model.
Continuing to work on developing a theory without advances.
Calculating higher order corrections to a theory in the hope of getting better agreement with experiment.
Applying for an extra grant.
Taking on another student.
In quantum chemistry using a larger basis set or a higher level of theory (i.e. more sophisticated treatment of correlations).
Developing new exchange-correlation functionals for density functional theory (DFT).
Trying to improve an experimental technique.
Repeating measurements or calculations in the hope of finding errors.
When one starts out it is never clear that these efforts will bear fruit. Sometimes they do. Sometimes they don't. But inevitably, I think one has to face the law of diminishing returns.
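To make the idea concrete, here is a minimal numerical sketch in Python. It assumes, purely for illustration (nothing above commits to this form), that scientific output grows like the square root of the resources invested; the marginal gain from each extra unit of resources then steadily shrinks.

    import math

    # Illustrative assumption: output grows like the square root of the
    # resources invested. Any concave function makes the same qualitative point.
    def output(resources):
        return math.sqrt(resources)

    previous = 0.0
    for r in range(1, 11):
        gain = output(r) - previous  # marginal return from the r-th unit of resources
        print(f"unit {r:2d}: marginal gain = {gain:.3f}")
        previous = output(r)

The printed gains fall from 1.000 for the first unit to about 0.16 for the tenth.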
These thoughts were stimulated by two events in the last week. One was reading Not Even Wrong: The Failure of String Theory and the Search for Unity in Physical Law by Peter Woit. The second was being part of a workshop on superconductivity that featured many discussions about the high-Tc cuprate superconductors.
The book chronicles how, in spite of thousands of papers over the past thirty years, high-energy theory has not really produced any ideas beyond the Standard Model that are relevant to experiment, or even a coherent theory.
I don't think the cuprate field is in such dire straits. There are real experiments and concrete theoretical calculations. But it may be debatable whether we are gaining significant new insights. This is a hard problem on which we have made some real progress, but will we make more?
Even when one is making advances one needs to consider the useful economic concept of opportunity cost: if the resources were directed elsewhere would one produce greater scientific gains? This again applies at all scales, from personal to funding agencies.
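As a hypothetical illustration of opportunity cost, the sketch below (same square-root toy model as above; all numbers are made up) compares the marginal gain from the next unit of effort on a heavily invested project with the gain from the first unit on a fresh one. In real research the payoff of the fresh direction is, of course, far more uncertain.

    import math

    # Same illustrative square-root model; the numbers are hypothetical.
    def output(resources):
        return math.sqrt(resources)

    # Next unit of effort on a mature project (20 units already invested)
    # versus the first unit on a new project.
    marginal_mature = output(21) - output(20)  # about 0.11
    marginal_new = output(1) - output(0)       # 1.00
    print(f"marginal gain, mature project: {marginal_mature:.2f}")
    print(f"marginal gain, new project:    {marginal_new:.2f}")
    # The opportunity cost of staying put is the larger gain forgone elsewhere.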
So how does one decide to move on? When is it time to quit?
I think there is a highly subjective and personal element to judging when one has reached the point of diminishing returns.
One also needs to be careful because there are plenty of cases in the history of science where individuals persevered for many years without progress, but eventually had a breakthrough.
e.g. Watson and Crick, John Kendrew and the first protein crystal structure, theory of superconductivity, ...
I welcome suggestions.
How do you decide when you are at the point of diminishing returns?
How do you decide when a research field or topic is at that point?
I really enjoyed this post, thanks for writing it. This is part of the reason why science needs to be diverse. Some people are very good at squeezing every last bit out of a theory or experimental technique. Others will want to explore new territory that may or (more likely) may not lead to a novel way of doing things. It is a matter of taste, but there needs to be a healthy balance on both sides. Finding this balance is difficult though, as you've pointed out.
Thanks for the helpful comment.
I agree this balance is important and is difficult to find. Unfortunately, I rarely find it today. On the one hand, there is a reluctance to make long-term investments in difficult problems. On the other hand, there are fields which are big enough to be self-sustaining but arguably should be downsized.
Ross, I don't have a good answer to your excellent question of when to drop a research topic. I will, however, suggest an approach that is not useful: checking to see if people are still publishing papers on the topic. In my experience as a reviewer and journal editor, people keep writing papers on any given topic even when it is far into the regime of diminishing returns.
I agree. Good point.
I agree with your post, but have some comments on "Taking on another student", "Trying to improve an experimental technique", and "Repeating measurements or calculations in the hope of finding errors".
Taking on another student should be seen in terms of the responsibility one has (at universities) to educate students. This does not only mean teaching courses. If the department has a large influx of graduate students, then it's the faculty's responsibility to "take on another student", even if he/she would work on another aspect of the same subject (and thus not have great returns in terms of advances in science).
Regarding efforts to improve an experimental technique: I disagree. Think back to ARPES 20 years ago and compare it to now. I would hazard that new physics is being observed. Same for interferometers (LIGO?).
In fact not trying to improve experimental techniques is almost akin to saying "physics is done". Instead, I believe that improving our eyes almost always results in (seeing) new physics.
Finally, and this is a pet peeve of mine, repeating experiments in the hope of finding errors is in fact a useful thing. I think I know what you are trying to say - probably related to a not-so-nice definition of "idiot" that includes the word "repeating" - but let me provide this nuance.
Measuring once and getting a result does not mean the result is correct. Apart from statistical fluctuations, there could be unrecognized influences (lab temperature, humidity, lighting (incandescent, fluorescent, LED, glowing filament, ...)) that are affecting the results. So repeating experiments, and checking whether you get the same outcome when everything you chose remains the same, is very useful. If the outcome does vary, analyze whether it varies by a magnitude (and frequency!) that is explainable by statistical variation, or whether there is a systematic influence you do not (yet) control.
Again, this is not criticism, just nuances to a few points you raise in a post that I generally agree with.
Thanks for the helpful comments. I agree with all of your clarifications.
I agree faculty have a responsibility to take on students if they are admitted to a program. I was looking at the issue from the point of view of a selfish faculty member who is taking on their eighth or twentieth student (say). Their group's productivity may not increase, and may even decrease.
I agree completely we need to keep working on improving experimental techniques and it is amazing what advances we are seeing. However, for any technique there is going to be some point at which more time, money, and effort will only produce marginal improvements. I am not an experimentalist, but I am sure there are many techniques that people don't use today because they could not be improved to be competitive.
I certainly agree that experiments need to be repeated, particularly by different groups and using different samples and apparatus. This is not done often enough. My point is just that there is some saturation point, where the law of diminishing returns sets in. No one would suggest repeating an experiment 100 or maybe even 10 times. Again, the question is how many repetitions are enough.
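One hedged way to see the saturation: if the scatter between repetitions were purely statistical, the standard error of the mean would shrink only like one over the square root of the number of repetitions N, so each extra repetition buys less. A small illustrative Python sketch (the single-measurement uncertainty is an arbitrary assumption):

    import math

    sigma = 1.0  # assumed uncertainty of a single measurement (arbitrary units)
    for n in (1, 2, 5, 10, 100):
        standard_error = sigma / math.sqrt(n)
        print(f"N = {n:3d} repetitions: standard error of the mean = {standard_error:.2f} sigma")
    # Going from 1 to 10 repetitions shrinks the error by about a factor of 3;
    # going from 10 to 100 costs ten times the effort for the same improvement.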
When it comes to managing a lab and experiments it is worth quoting from the Nobel lecture of the great Max Perutz (Francis Crick was his PhD student):
http://www.nobelprize.org/nobel_prizes/themes/medicine/perutz/index.html
Making People Talk and Listen to Each Other
Experience had taught me that laboratories often fail because their scientists never talk to each other. To stimulate the exchange of ideas, we built a canteen where people can chat at morning coffee, lunch and tea. It was managed for over twenty years by my wife, Gisela, who saw to it that the food was good and that it was a place where people would make friends. Scientific instruments were to be shared, rather than being jealously guarded as people's private property; this saved money and also forced people to talk to each other. When funds ran short during the building of the lab, I suggested that money could be saved by leaving all doors without locks to symbolise the absence of secrets.
Let me repeat one line from the quote above, just like repeating an experiment. Does this still happen now?
"Scientific instruments were to be shared, rather than being jealously guarded as people's private property; this saved money and also forced people to talk to each other."