Showing posts with label fraud. Show all posts

Friday, June 28, 2019

The bloody delusions of silicon valley medicine

On a recent flight, I watched the HBO documentary The Inventor: Out for Blood in Silicon Valley. It chronicles the dramatic rise and fall of Elizabeth Holmes, founder of a start-up, Theranos, that claimed to have revolutionised blood testing.



There is a good article in the New Republic
What the Theranos Documentary Misses
Instead of examining Elizabeth Holmes’s personality, look at the people and systems that aided the company’s rise.

In spite of the weaknesses described in that article, the documentary made me think about a range of issues at the interface of science, technology, philosophy, and social justice.

The story underscores Kauzmann's maxim: "people will often believe what they want to believe rather than what the evidence before them suggests they should believe."

Truth matters. Eventually, we all bump up against reality: scientific, technological, economic, legal, ... It does not matter how much hype and BS one can get away with; eventually, it will all come crashing down. It is just amazing that some people seem to get away with it for so long...
This is why transparency is so important. A bane of modern life is the proliferation of Non-Disclosure Agreements. Although I concede they have a limited role in certain commercial situations, they now seem to be used to avoid transparency and accountability for all sorts of dubious practices in diverse social contexts.

The transition from scientific knowledge to a new technology is far from simple. A new commercial device needs to be scalable, reliable, affordable, and safe. For medicine, the bar is a lot higher than a phone app! 

Theranos had a board featuring "big" names in politics, business, and the military, such as Henry Kissinger, George Shultz, and James Mattis. All these old men were besotted with Holmes and more than happy to accept generous compensation for sitting on the board. Chemistry, engineering, and medical expertise were sorely lacking. However, even the one old man with relevant knowledge, Channing Robertson, was a true believer until the very end.

Holmes styled herself on Steve Jobs and many wanted to believe that she would revolutionise blood testing. However, the analogy is flawed. Jobs basically took existing robust technology and repackaged and marketed it in clever ways. Holmes claimed to have invented a totally new technology. What she was trying to do was a bit like trying to build a Macintosh computer in the 1960s.

Friday, January 26, 2018

A spicy scientific scandal

I am often on the lookout for interesting molecules and solids which involve short hydrogen bonds, particularly biomolecules where this bond may play a key role in functionality. Such bonds are of interest from a physics point of view because then the quantum motion of the proton matters.
Consequently, the following paper (published in October 2016) caught my attention.

Proton Probability Distribution in the O···H···O Low-Barrier Hydrogen Bond: A Combined Solid-State NMR and Quantum Chemical Computational Study of Dibenzoylmethane and Curcumin
Xianqi Kong, Andreas Brinkmann, Victor Terskikh, Roderick E. Wasylishen, Guy M. Bernard, Zhuang Duan, Qichao Wu, and Gang Wu


The authors state their motivation.
Curcumin was selected in our study, in part because it is being touted as a wonder drug and is of intense interest to the pharmaceutical and medical community.31−33
This sounds quite exciting. Could low barrier hydrogen bonds be important in curing cancer?
Curcumin is a major ingredient of turmeric, the yellow spice that features heavily in Asian cooking.
This got me Googling and it turns out the claims of a "wonder drug" are dubious.

Experimental studies of curcumin turn out to be particularly problematic, as explained in a blog post
Curcumin will waste your time by Derek Lowe. It is worth reading because it highlights the need for replication studies and publication of null results.

But it gets worse. References 31 and 32 have the same last author, Bharat Aggarwal, who it turns out has been the major proponent of the "wonder drug". In 2015 he "retired" from the University of Texas, following allegations of scientific fraud. By August 2016, eighteen of his published papers had been withdrawn.

To illustrate the problem of metrics, in 2016 Aggarwal had an h-index of 160, and in 2015, Thomson Reuters (ISI Web of Science) listed him among the World's Most Influential Scientific Minds.

I should stress that none of this invalidates the results of the hydrogen bonding paper that got me on this trail.

Tuesday, July 29, 2014

Journals should publicise their retraction index

Here are several interesting and related things about retracted journal articles.

1. Some retracted articles continue to get cited!
For example, today I found an interesting reference to this Science paper from 2001, only to learn it had been retracted. Furthermore, Google Scholar shows the paper has been cited several times in the past 4 years. Indeed, some of the Schon-Batlogg papers are still cited, for scientific reasons, not just as examples of scientific fraud. (For example, this recent JACS).

2. The Careers section of Nature has an interesting article Retractions: A Clean Slate, which makes the case that if you make an "innocent" mistake the best thing you can do is promptly make a retraction. But, there are some pitfalls. One thing that is still not clear to me is how in some cases one decides between complete retraction, partial retraction, and an erratum.

3. There is a correlation between journal impact factor and the frequency of retractions.
Somehow I did not find the graph below surprising.
This is described in an interesting editorial Retracted Science and the Retraction Index in the journal Infection and Immunity.
We defined a “retraction index” for each journal as the number of retractions in the time interval from 2001 to 2010, multiplied by 1,000, and divided by the number of published articles with abstracts. 
 ... the disproportionally high payoff associated with publishing in higher-impact journals could encourage risk-taking behavior by authors in study design, data presentation, data analysis, and interpretation that subsequently leads to the retraction of the work. 
Another possibility is that the desire of high-impact journals for clear and definitive reports may encourage authors to manipulate their data to meet this expectation. In contradistinction to the crisp, orderly results of a typical manuscript in a high-impact journal, the reality of everyday science is often a messy affair littered with nonreproducible experiments, outlier data points, unexplained results, and observations that fail to fit into a neat story. In such situations, desperate authors may be enticed to take short cuts, withhold data from the review process, overinterpret results, manipulate images, and engage in behavior ranging from questionable practices to outright fraud (26). 
Alternatively, publications in high-impact journals have increased visibility and may accordingly attract greater scrutiny that results in the discovery of problems eventually leading to retraction. It is possible that each of these explanations contributes to the correlation between retraction index and impact factor.
I look forward to the day when journals publicise their retraction index and university managers discourage their staff from publishing in certain journals because their retraction index is too high.
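The retraction index defined in the quoted editorial is trivial to compute. Here is a minimal sketch; the numbers in the example are made up purely for illustration, not taken from any journal:

```python
def retraction_index(retractions_2001_2010, articles_with_abstracts):
    """Retraction index as defined in the Infection and Immunity editorial:
    the number of retractions over 2001-2010, multiplied by 1000, divided
    by the number of published articles with abstracts in that period."""
    return 1000 * retractions_2001_2010 / articles_with_abstracts

# Hypothetical journal: 10 retractions out of 25,000 articles.
print(retraction_index(10, 25000))  # 0.4
```

The factor of 1000 just rescales the ratio so that typical values land near 1 rather than 0.001.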

Friday, March 7, 2014

Scientific fraud on prime time TV

A few times I have posted scenes from the TV show The Big Bang Theory that involved actual science, such as topological insulators, spin ice, graphene, debunking quantum biology, or the Born-Oppenheimer approximation.

Unfortunately, I think that in the past few years the show has degenerated into a typical Hollywood sitcom, focusing on "who is dating whom", titillation, and inane crude humour.

However, I saw a great recent episode where Sheldon proposes the existence of a new superheavy element, which is subsequently "discovered" by a Chinese research group. It turns out he made a simple error in the units he used in his calculations, and the Chinese group fabricated their results...



This is actually reminiscent of a real fraud committed at Berkeley and Darmstadt by Viktor Ninov, who fabricated data and claimed the discovery of new elements.

A recent case of scientific fraud at UQ made it onto the local TV news. Unfortunately, the video link has expired. I thought it was pretty interesting when I first saw it. It is also interesting that the university seems to have wiped the researchers' electronic history at the university; one was a Head of School for a decade.

Monday, March 3, 2014

Is it possible to publish gibberish in a peer-reviewed journal?

Unfortunately, I would say yes. I find it discouraging that some papers contain rambling speculative sections, particularly about connecting theory and experiment.

However, what I did not know until today is that it is even possible to publish literal computer generated gibberish. This was described in Nature last week

Publishers withdraw more than 120 gibberish papers
"Conference proceedings removed from subscription databases after scientist reveals that they were computer-generated."

The relevant papers were not just in some unheard-of "spam" "journal" or "conference", but in ones published by Springer and IEEE.

The fraud was exposed by a computer scientist, Cyril Labbé.
Labbé is no stranger to fake studies. In April 2010, he used SCIgen to generate 102 fake papers by a fictional author called Ike Antkare. Labbé showed how easy it was to add these fake papers to the Google Scholar database, boosting Ike Antkare’s h-index, a measure of published output, to 94 — at the time, making Antkare the world's 21st most highly cited scientist. Last year, researchers at the University of Granada, Spain, added to Labbé’s work, boosting their own citation scores in Google Scholar by uploading six fake papers with long lists of citations to their own previous work.
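For readers unfamiliar with the metric being gamed here, the h-index in the quote is easy to state and compute: it is the largest h such that the author has h papers each cited at least h times. A minimal sketch (not any official implementation):

```python
def h_index(citation_counts):
    """Largest h such that h papers each have at least h citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
```

This also makes clear why Labbé's trick works: uploading fake papers that cite each other directly inflates the citation counts that feed this calculation.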
My thanks to Anthony Jacko and Ben Powell for bringing this to my attention.

Tuesday, November 5, 2013

Bad taste and the sins of academia

There is a very thoughtful article in Angewandte Chemie
The Seven Sins in Academic Behavior in the Natural Sciences
by Wilfred F. van Gunsteren

It is worth reading slowly and in full. He highlights the negative influence of "high impact" journals and discusses many of the same issues as the recent cover story in the Economist. He has some nice examples of each of seven sins.

But, there was one paragraph that really stood out.
Administrative officials at universities and other academic institutions should refrain from issuing detailed regulations that may stifle the creativity and adventurism on which research depends. They should rather foster discussion about basic principles and appropriate behavior, and judge their staff and applicants for jobs based on their curiosity-driven urge to do research, understand, and share their knowledge rather than on superficial aspects of academic research such as counting papers or citations or considering a person’s grant income or h-index or whatever ranking, which generally only reflect quantity and barely quality. If the curriculum vitae of an applicant lists the number of citations or an h-index value or the amount of grant money gathered, one should regard this as a sign of superficiality and misunderstanding of the academic research endeavor, a basic flaw in academic attitude, or at best as a sign of bad taste.
Wow! This is so unlike the standard (and unquestioned) mode of operation in Australia and many other countries.

I guess ETH-Zurich [from which van Gunsteren just retired] operates in a different manner. Previously, I posted about the criteria that Stanford uses for tenure. So universities that want to be "world class" might want to "follow best practice".

Saturday, April 27, 2013

When a Dean fakes data

The Sunday New York Times magazine has a fascinating and disturbing article The Mind of a Con Man about Diederik Stapel, former Dean of Behavioural and Social Sciences, at Tilburg University in the Netherlands. He had a stellar academic career which was based on fabricating experimental data.

The article is rather long but worth reading. Here are a few of the extracts I found particularly pertinent:
Stapel did not deny that his deceit was driven by ambition. But it was more complicated than that, he told me. He insisted that he loved social psychology but had been frustrated by the messiness of experimental data, which rarely led to clear conclusions. His lifelong obsession with elegance and order, he said, led him to concoct sexy results that journals found attractive. 
In his early years of research — when he supposedly collected real experimental data — Stapel wrote papers laying out complicated and messy relationships between multiple variables. He soon realized that journal editors preferred simplicity. 
What the public didn’t realize, he said, was that academic science, too, was becoming a business. “There are scarce resources, you need grants, you need money, there is competition,” he said. “Normal people go to the edge to get that money. Science is of course about discovery, about digging to discover the truth. But it is also communication, persuasion, marketing. I am a salesman. I am on the road. People are on the road with their talk. With the same talk. It’s like a circus.”  
Stapel’s atypical practice of collecting data for his graduate students wasn’t questioned,  [How many Deans do that ?] 
[The official report from the University stated] The field of psychology was indicted, too, with a finding that Stapel’s fraud went undetected for so long because of “a general culture of careless, selective and uncritical handling of research and data.” If Stapel was solely to blame for making stuff up, the report stated, his peers, journal editors and reviewers of the field’s top journals were to blame for letting him get away with it. The committees identified several practices as “sloppy science” — misuse of statistics, ignoring of data that do not conform to a desired hypothesis and the pursuit of a compelling story no matter how scientifically unsupported it may be.
It may be tempting for physicists and chemists to look down our noses at the social scientists, but I think these issues are just as pertinent for us. Don't forget Hendrik Schon!

As Kauzmann said: we tend to believe what we want rather than what the data tells us we should believe. Often the data is messy and inconclusive.

Wednesday, April 10, 2013

The dark side of open access

The last post was about spam.
Unfortunately, this one is too.
It turns out there is a dark side to open access journals.
This is covered in a fascinating and disturbing New York Times article Scientific articles accepted (Personal checks too).
Think twice when you receive an invitation to submit an article, speak at a conference, or serve on an editorial board from an organisation you have not had prior dealings with.
I get a lot of this spam and it almost all automatically goes to my Junk mail folder. But, I did not realise just how bad the problem is, particularly for those who are duped.

Thursday, November 29, 2012

Impact factors have no impact on me

There seems to be a common view that on CVs (and grant applications) people should list the Impact Factors for each journal in which they have a paper.
To me this "information" is just noise and clutter.
I do not include it in my own CV or grant applications.
Why?

1. IFs just encode something I know already.
Nature > Science > PRL ~ JACS > Phys. Rev B ~ J. Chem. Phys. > Physica B ~ Int. J. Mod. Phys. B > Proceedings of the Royal Society of Queensland .....

2. There is a large random element in success or failure to get an individual paper published in a high profile journal. e.g., who the referees are.

3. The average citations of a journal is not a good measure of the significance of a specific paper. There is a large variance. What really matters is how much YOUR/MY specific paper in that journal is cited in the long term. Unfortunately, in most cases it is hard to know in less than 3-5 years.

4. Crap papers can get published in Nature and Science. Hendrik Schon published almost 20 papers in Nature and Science. On the other hand, Nobel Prize winning papers are sometimes published in Phys. Rev. B (e.g. giant magnetoresistance).

5. I don't need to know the actual IF of a journal with an impact factor of one or less in order to know that it is a rubbish journal. I already know that because I virtually never read papers in such journals simply because they virtually never contain anything that is significant, interesting, or valid. My "random" meanderings through the literature virtually never lead me there.

6. I remain to be convinced that reporting IFs to more than 2 significant figures and without error bars is meaningful.

I fail to see that alternative metrics such as the Eigenfactor resolve the above objections.

The only value I see in IFs is helping librarians compile draft lists of journals to cancel subscriptions to in order to save money.

I am skeptical that IFs are useful for comparing the research performance of people in different fields (e.g. biology vs. civil engineering vs. psychology vs. chemistry).

And in the end... what really matters is whether the paper contains interesting, significant, and valid results... Actually looking as some of an applicant's papers and critically evaluating them is the best "metric". But that requires effort and thought...

Thursday, July 26, 2012

A sad tale of publishing gone mad

I found this sad story interesting and disturbing because of what it reveals about journal impact factors, university rankings, self-citations, Elsevier, scientific crack pots, lawsuits....
More of the weird history is here and here.

Saturday, March 31, 2012

Whose fault is plagiarism?

The controversy about the plagiarised Ph.D of the President of Hungary, Pal Schmitt, is making for "interesting" reading. In 1992 he received a Ph.D for a 200 page thesis that contains 17 pages directly translated from a German book. The rest seems largely to be a translation of work by a Bulgarian sports writer. A committee from the university reviewed the case, wrote an 1100+ page report (!), and concluded that he should keep his degree. The supervisors and examiners were to blame! However, following widespread criticism, the university has just announced that it will revoke the degree.

Following the resignation of the German Defence minister for another plagiarised doctorate it seems that the academic backgrounds of prominent politicians are getting more scrutiny. This raises an interesting question. Which of the following is more likely to be true?
  • there has been a lot of plagiarism in the social sciences and humanities but it is only being detected in the case of these politicians because of the increased scrutiny
  • leading politicians are often ambitious individuals who "cut corners" and so are more prone to commit plagiarism
Unfortunately, the leadership of Australian universities is not immune from this problem. A decade ago the Vice Chancellor of Monash University, David Robinson, was forced to resign because of plagiarism.

Wednesday, November 9, 2011

When the data is "too good to be true"

Remember Hendrik Schon! A decade ago he published a string of very impressive Nature and Science papers that eventually turned out to be "too good to be true". It seems a similar thing has been happening in the field of social psychology. The AP reports
 three graduate students grew suspicious of the data Stapel had supplied them without allowing them to participate in the actual research. When they ran statistical tests on it themselves they found it too perfect to be true and went to the university's dean with their suspicions.
In the future, the university plans to require raw data from studies to be preserved and made available to other researchers on request - a practice already common in most disciplines.
Nature News reports
The commission found that co-authors of Stapel's papers seem to have been unaware of the fraud, naively trusting in Stapel's reputation and fooled by elaborate preparations for tests that were never actually carried out..... Stapel and a colleague or student came up with a hypothesis, and then designed an experiment to test it. Stapel took responsibility for collecting data through what he said was a network of contacts at other institutions, and several weeks later produced a fictitious data file for his colleague to write up into a paper. On other occasions, Stapel received co-authorship after producing data he claimed to have collected previously that exactly matched the needs of a colleague working on a particular study.....
The data were also suspicious, the report says: effects were large; missing data and outliers were rare; and hypotheses were rarely refuted. Journals publishing Stapel's papers did not question the omission of details about where the data came from. 
This is part of a Nature News piece which has the misleading title "Report finds massive fraud at Dutch universities". A more responsible and accurate title would be "Report finds massive fraud by one Dutch professor of social psychology". In  the comments section several Dutch researchers rightly object to the title.

Saturday, October 22, 2011

Should you use Turnitin?

Turnitin is commercial software that detects student plagiarism by comparing submitted assignments to everything on the web and a vast database of other assignments.
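Turnitin's actual matching algorithms are proprietary, but the basic idea of flagging overlapping text can be illustrated with a toy word n-gram comparison. This is only a sketch of the principle; real tools also normalise punctuation, handle paraphrase, and search enormous document databases:

```python
def ngrams(text, n=5):
    """Set of overlapping word n-grams in a text (case-insensitive)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission, source, n=5):
    """Fraction of the submission's n-grams that also appear in the source.
    1.0 means every 5-word phrase was found verbatim; 0.0 means none."""
    sub = ngrams(submission, n)
    if not sub:
        return 0.0
    return len(sub & ngrams(source, n)) / len(sub)
```

A score near 1.0 against any single source is a strong sign of copying, though a human still has to judge whether the overlap is a quotation, a common phrase, or plagiarism.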

A few years ago I would have thought this might be relevant only to people teaching large undergraduate courses in the humanities. However, I was wrong. Unfortunately, experience has shown that plagiarism does occur, even in physics courses, and at the graduate level. There are cases of students submitting Ph.D proposals and literature reviews that involve cutting and pasting text from papers and the internet. Although it is certainly not confined to them, this can be more of a problem with students from non-Western countries. There do seem to be some "cultural" differences as to what is acceptable practice and what is not. This does not excuse it, but it does mean that sometimes first-time offenders need to be gently cautioned and educated.

A range of offences can occur, from sloppy referencing to blatantly copying large swathes of text and presenting them as one's own.

So if you don't use Turnitin (or something equivalent) try it. You won't know if there really is a problem until you check.

If you do detect plagiarism, make sure you report it to the relevant academic authority. Do not just give the student a private warning. It is important that someone keeps track. Otherwise repeat offenders may never appreciate the severity of their offence or be appropriately disciplined.

Another issue, which is harder to detect, is that of ghost writing.

Friday, January 14, 2011

The flood at UQ


The flood waters are beginning to subside in Brisbane. Above is a picture of the north side of the UQ campus. Fortunately, no major buildings were affected; mostly sporting facilities and car parks were inundated.
Thanks again to colleagues from around the world who enquired about my well being. My house was fine. Mind you, the water got to within 100 metres. The route I normally walk with my son to school was underwater.

It is disturbing to read in The Australian, Alarming report on Brisbane River risks covered up, about how a 1999 report to the city council was kept secret until it was leaked years later. Why the secrecy? Too many developers and real estate agents wanted to make money by building on, and selling, low-lying land...

Any lessons for scientists? Yes. I believe that when there are large amounts of money (funding) and power (prizes and careers) at stake it is hard for the potential beneficiaries to be objective about the truth. They will also be reluctant to want all the uncertainties about what is known or not known to come to light.

Sunday, December 19, 2010

Seeing what you want to see

There is a good Opinion piece in the November Scientific American, Fudge Factor: a look at the Harvard science fraud case, by Scott O. Lilienfeld. He discusses the problem of distinguishing intentional scientific fraud from confirmation bias: the tendency we have as scientists to selectively interpret data in order to confirm our own theories.

This is a good reminder that the easiest person to fool is yourself.

Monday, September 20, 2010

Who should be a co-author?

Everyone should make sure they are familiar with the American Physical Society's guidelines:
Authorship should be limited to those who have made a significant contribution to the concept, design, execution or interpretation of the research study. All those who have made significant contributions should be offered the opportunity to be listed as authors. Other individuals who have contributed to the study should be acknowledged, but not identified as authors. 
Some reasons why someone should not be a co-author:
  • All they did was obtain the funding for the project.
  • Their only contribution is that they are the official supervisor of a student who is a co-author.
  • They have not read the paper!
  • Their only contribution is that their status in the field may help get the paper published.
  • They are not confident the results are valid. Co-authors cannot reserve the future option of saying, "Well, I didn't work on that part. I always had my doubts about that part..."
The issue of co-authors taking credit but not responsibility was particularly highlighted by the Schon scandal and two separate cases involving postdocs in the laboratory of biologist David Baltimore (a Nobel laureate and former president of Caltech and Rockefeller University).

This article on physics postdocs perceptions of the issue is worth reading.

When in doubt, discuss these issues with your supervisor.

Tuesday, July 14, 2009

Scientific fame (or notoriety) without responsibility

The latest issue of Nature Physics has a review by Mike Norman of the book, Plastic Fantastic: How the Biggest Fraud in Physics Shook the Scientific World By Eugenie Samuel Reich. The book chronicles the antics of Hendrik Schon earlier this decade. Norman points out,
"many readers will take exception to how leniently Schon's senior coauthors were dealt with in the book, perhaps because they were willing to be interviewed (those who refused were treated more harshly). In fact, a serious discourse of the responsibilities of senior authors, management, journal editors and referees in the scientific process would have been a welcome addition."
