Neophilia

Alok Jha, writing in The Guardian, has an extended article on the growing problem of bad science, with particular attention to psychology and medicine. One bit of worrisome news:

There are indications that bad practice – particularly at the less serious end of the scale – is rife. In 2009, Daniele Fanelli of the University of Edinburgh carried out a meta-analysis that pooled the results of 21 surveys of researchers who were asked whether they or their colleagues had fabricated or falsified research.

Publishing his results in the journal PLoS One, he found that an average of 1.97% of scientists admitted to having “fabricated, falsified or modified data or results at least once – a serious form of misconduct by any standard – and up to 33.7% admitted other questionable research practices. In surveys asking about the behaviour of colleagues, admission rates were 14.12% for falsification, and up to 72% for other questionable research practices.”

A 2006 analysis of the images published in the Journal of Cell Biology found that about 1% had been deliberately falsified.

Perhaps the good news is that only about 2% of scientists admit to having fabricated results and only about 1% of images in a major biology journal were found to be falsified. The bad news is that many admit to questionable research practices, and even more suspect their colleagues of bad behavior. Jha cites another study that finds a 12x increase in retracted papers in scientific journals over the past decade. These are big numbers.

Misconduct is only part of the problem. There is also the failure of peer review to identify errors before publication. Having served as a sometime reviewer for conference papers, I can say it is virtually impossible to check results merely by reading a technical paper filled with claims. One would have to have access to the data sets and analysis tools, have a clear account of the methodology, and know what data have been omitted from consideration. One can make guesses and ask for clarification, but really, journal peer review is not about replication of results. One reads for the presentation of evidence, the structure of the argument, the methods of analysis, and whether the data presented support the claims. But what about the data that are not presented? And how were the experiments conducted or the measurements taken? And were there any bugs in that custom analysis software? Hard to say, looking at a manuscript.

A third problem is what Chris Chambers calls “neophilia”–journals are primarily interested in publishing novel results, not reports of failure to reproduce the work of others or negative results. Perhaps that is also a result of the increasing consolidation of scientific journals with for-profit publishers, which may not have the strongest of commitments to serve science so much as serve it up. I expect editors would deny this sort of thinking, but then where are the papers that report negative results?  Yeah, reports of negative results don’t sell subscriptions–and they make the peer review apparatus and editorial judgment look a bit bad, too.

Why bring this up? Because the integrity of science is fundamental to technology transfer and university licensing programs. If data are being fudged, methods are sloppy, results cannot be replicated, and publications are selected for the drama of the new, then where does that leave the folks who put technology on offer for license, the folks who might want to acquire that technology, or the policymakers who think they see an important new direction for research and public policy to follow? In a lot of doubt, that’s where.

For technology transfer practices, this means there is more work to do. Merely filing a patent application does not mean that a technology actually works as claimed. Peer review certainly is no help, nor are the preferences of journal editors for neophilic reports. Technology transfer also appears to suffer from neophilia. One sees “success stories” and invention summaries making claims and citing potential for commercial products, but there are no statements about patents that failed to issue, subsequent data that did not support the claimed operation of an invention, or companies that reviewed discoveries for possible licensing and could not replicate the results or make the device work as advertised.

Such a thing might be understandable where the patent owner is a company and the deal is a transaction buried in the transactions of an industry. But where the patent owner is a university, technology transfer is trading on the reputation of the university for integrity in the management of its research programs. By “integrity” one means having a degree of candor not only to present the positive results and interpretations, but also to provide sufficient context, tools, and qualifications that others can make a reasonable decision regarding acquisition and investment. The standard, in short, is and should be higher for universities when it comes to technology transfer. Neophilia would appear, however, to be endemic in university technology transfer–favoring the positive, suppressing the adverse and contrary, providing just enough context to nudge a transaction or a positive view of a licensing program. Yes, put one’s best foot forward–but remember, this is a university, and if the other seven feet are limping along behind, they need to be trotted out as well.

It would appear that dealing with bad science is a huge emerging problem for the integrity of university technology licensing programs. Neophilia is just one of the ways that bad science is allowed to persist. If technology transfer exists to promote the use of university discoveries and inventions, then it is in the experience of that use that we find out what is really worthwhile and what does not withstand scrutiny. If a university holds patent rights, and as part of “marketing” those rights for investment makes claims about the utility and value of inventions, then it cannot afford to take an “it will blow over” attitude when the science doesn’t hold up. There should be a prompt, institutionally initiated public accounting. Is there liability in doing so? Absolutely. Is there risk to the reputation of the university and the licensing program? Of course there is. But the liability and risk to technology transfer generally are much greater. If companies, investors, entrepreneurs, and the public cannot rely on universities to be straight on the facts, whatever they turn out to be, then we have the potential for creating a deep distrust that is not easily repaired.
