Replication: The Technology Transfer Problem

Steve Fiori on the SCISP listserv called the list’s attention to a blog post by David Funder, a research psychologist at UC Riverside. Funder’s post discusses a recent NSF workshop that took up the issue of replication of research results. This issue goes to the heart of a claim about research science–that published findings are generally reliable. But as is becoming clear, peer review is not doing an adequate job of screening publications. And, really, why should it do more than catch problems with the argument? How can a reviewer check the raw data, the analysis tools, the actual conditions of the experiment, or the data from other experiments and analyses that go unreported?

One can also ask researchers to be “ethical” and “diligent” and “smart.” Like Feynman, we can ask researchers not to fool themselves, and then to be conventionally honest with the rest of us. But that, too, doesn’t appear to be working. Perhaps it is too easy for us to fool ourselves that we have not fooled ourselves. Or perhaps it is too easy to justify publishing–no matter what–as essential to not perishing. As David Funder makes clear, however, replication of results is not something science publishers have supported.

Daniel Kahneman has called out problems with replication in psychology research. For Kahneman, the issue is not that the original research is necessarily flawed–it may well be that the follow-on efforts to replicate are flawed. The article reporting Kahneman’s concerns also reports on the Reproducibility Project–an effort to reproduce the findings published by three major psychology journals in a single year. But when Amgen tried to replicate 53 landmark cancer studies and could not do so for 89% of them (47 papers), something is wrong beyond the competence of the replicators.

You might see where this line of concern leads for technology transfer. If universities are jumping on the most novel of research claims–the extreme finding, the way-out-there result, the transformative discovery–and filing patent applications on these, then they are also raising the profile of these claims before anyone has validated the studies. Further, by putting a patent barrier around each new, potentially really new, thing, university administrators are also potentially reducing the interest others might have in replicating published studies.

First, it may be difficult to get the full data and experimental setup from folks hot to obtain patents, or told that they must cooperate with the university’s efforts to obtain patents. Second, think about it: if university A has a patent on a finding, then why would someone at university B be the patsy and do the work of validating university A’s results–when A is clearly prepared to clean up on licensing and B gets nothing for the effort? Third, and worse, the thinking may well be just like that for other labs at university A, too–once there is a patent application, with named inventors who might get a share of royalties and everyone else who won’t, why help? I know–it’s science and there is a pleasure in finding things out and all, but face it, once there’s an ownership claim on a discovery and a university has taken over management of the discovery to make money (“new sources of revenue,” “economic development,” “entrepreneurial activity,” “return on the public’s investment,” “we must get fair value”–whatever the rationale for making money), others drift away. If you are going to own it, then do the work yourselves. A university patent application can be like a huge bottle of mosquitoes released in a room–research folks move off to other places. No reason to stay.

Thus, even as universities advertise their technology transfer programs, they may well be creating an administrative environment that works against replication of research results. The process by which every invention is claimed, reviewed, and patented may well also be the process by which the incentive to replicate research is suppressed–not that one should expect an examination of the issue to show up in any university licensing office’s annual report.

There is yet another problem. Even replicated research appears to have a use-by date. For a number of findings, the effect established in the literature appears to diminish over time. Jonah Lehrer wrote about the “Decline Effect” in a 2010 New Yorker article. Drugs that were approved for use appear to lose their effectiveness over time. Other areas are seeing the effect as well:

But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology. In the field of medicine, the phenomenon seems extremely widespread, affecting not only antipsychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants: Davis has a forthcoming analysis demonstrating that the efficacy of antidepressants has gone down as much as threefold in recent decades.

There is something heartening in all this: that the world perhaps is much more full of mystery than the scientific community realizes. If the claimed facts wane, then perhaps the narrative threads that stitch together bits of finding into plausibilities are wearing out, and there is room for new narratives. Less heartening, however, is the prospect that all those university research labs chasing drug discovery might be “inventing” findings for new compounds that, perhaps for reasons having to do with the narratives that hold the objects of study in place, have a shelf life related not to the compounds themselves but to how the data are read. As studies move from carefully controlled experiments to statistical models to use in the general population, it may well be that the statistical claims break down. That is, the confidence attached to a significant result depends on applying an appropriate statistical test. If the test is wrongly applied, it does not matter how sophisticated one is with the math.
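The decline effect has at least one mundane statistical face, and a small simulation makes it visible. This is a sketch of my own, with assumed numbers, not anything reported in Lehrer’s article: if original studies get reported mainly when they cross a significance threshold, the reported effect sizes run high, and straightforward replications then come in lower, not because anyone cheated but because the selection filter itself shapes what gets read as the effect.

```python
# Minimal simulation sketch (illustrative assumptions throughout): a small true
# effect, many noisy studies, and a publication filter that keeps only results
# with p < 0.05. The "published" effect is inflated; unfiltered replications
# drift back toward the true value, which reads as an effect wearing off.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

true_effect = 0.2      # assumed true effect, in standard-deviation units
n_per_study = 30       # assumed sample size per study
n_studies = 10_000     # number of simulated original studies

published = []
for _ in range(n_studies):
    sample = rng.normal(true_effect, 1.0, n_per_study)
    result = stats.ttest_1samp(sample, 0.0)
    if result.pvalue < 0.05 and sample.mean() > 0:   # only "significant" positives get published
        published.append(sample.mean())

# Replications of the published results, with no selection filter applied
replications = rng.normal(true_effect, 1.0, (len(published), n_per_study)).mean(axis=1)

print(f"True effect:                 {true_effect:.2f}")
print(f"Mean published effect:       {np.mean(published):.2f}")   # noticeably inflated
print(f"Mean effect on replication:  {replications.mean():.2f}")  # back near the truth
```

None of this requires the compound or the phenomenon to change; only the reading of the data changes once the filter comes off.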

A few months back, Bart Kosko wrote a response to John Brockman’s question for 2013–What *should* we be worried about? In “Lamplight Probabilities” Kosko points out that scientists rely primarily on five models of probability, when there are actually a great many. While a given probability distribution might fit a data set across some range of often-gathered values, does that mean the mathematical fit holds out toward the edges, where the rare events are? Out at the edges, if the “tail is fat” rather than “thin,” then an event that the math says is one chance in ten million might be just one in ten thousand. For rare events, how can one tell whether the likelihood is off by three or more orders of magnitude? Nassim Taleb works this same territory in The Black Swan. That is, the differences between two mathematical models may lie critically in what isn’t happening (much) rather than in what is happening (apparently) all the time. If an event that is observed and reported happens to be rare, there may be no way to tell how rare. There may also, then, be no way to apply a mathematical model properly–anything imposed is just complicated math fitting to patterns for which there’s nothing more than a rule of thumb, and maybe not a very good thumb.
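To put rough numbers on the fat-tail point, here is a minimal sketch of my own (the choice of distributions and the five-unit threshold are illustrative assumptions, not Kosko’s examples). Two standard models can look broadly similar for ordinary observations near the center of the data and still disagree by several orders of magnitude about how often an extreme event turns up.

```python
# Minimal sketch: a thin-tailed model (normal) versus a fat-tailed model
# (Student's t with 3 degrees of freedom), asked how often an event five
# standard units from center should occur. Both model choices are illustrative.
from scipy import stats

threshold = 5.0  # a "five sigma" event

p_thin = stats.norm.sf(threshold)     # ~2.9e-07, about 1 in 3.5 million
p_fat = stats.t.sf(threshold, df=3)   # ~7.7e-03, about 1 in 130

print(f"Thin-tailed (normal) estimate:  {p_thin:.1e}")
print(f"Fat-tailed (t, df=3) estimate:  {p_fat:.1e}")
print(f"Disagreement: roughly {p_fat / p_thin:,.0f}x")  # four-plus orders of magnitude
```

Near the middle, where nearly all the observations sit, the two models are hard to tell apart; the disagreement lives out in the tail, exactly where the data run out.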

All this means that if administrators have incentives to try to make money from the most wonderful of research findings, then they had better be clear about the narrative space that makes a narrative about research work wonderful. It is not that a finding is judged to be potentially patentable, or even that a patent issues. It is not that a finding is publishable in top academic journals. It is not that a finding has hit all the major news outlets and its researchers are invited to appear on The Late Show. It is not that the finding has a “huge” market, or that investors want to pour money into it. All these things are nice, but they are not where the wonderfulness is. To be sure, the niceness of narratives built around these things may well generate money–a license gets flipped, someone pays money for it, and inventors and administrators and patent attorneys split the loot. But in a real sense, that is all it is–loot. Something taken in exchange for constructing a story that may well wear down in time, after a drug company, say, spends hundreds of millions to do the research and get approvals, only to find that the effect oddly wears off until the compound underperforms legacy drugs.

When folks do research, they are taking actions and telling stories about their actions. The stories may be embedded in uncertain applications of statistical tests, but they are stories nonetheless. What if the stories that mattered were along the lines of a capable graduate student who tracked down a flaw in the design, or a research scientist who saw something odd in the data set? What if the wonderful story was that work was published with all the data and others set to work improving on it? What if the stories that mattered were not about how important a finding was to the university that hosted the work, or even to those who made the report, but rather about how much effort folks expended to find others who would benefit from working with the findings?

For technology transfer, the problem of replication is critical. To file a patent application, or even to get a patent to issue, all one has to do is make an argument–insist something works, present a bunch of data, hire a good attorney. To show that something discovered in a mess of statistics is in fact significant, not simply statistically significant–that may take more than a one-off experimental effort, more even than a replication effort. If so much of what is in the scientific literature is simply wrong, as John Ioannidis has argued, then perhaps the information age is not so much one of evidence as one of assertions, and the assertions may not be much to go on. If so, then university technology transfer, too, is dealing in illusion by holding rights and not allowing widespread use–or any use, for that matter, until someone pays up.
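Ioannidis’s argument is, at bottom, arithmetic, and a small sketch shows why the assertion problem runs so deep (the prior, power, and threshold below are assumptions for illustration, not his figures): when only a modest fraction of tested hypotheses are actually true, even studies run at conventional significance levels and respectable power produce a literature in which a large share of “positive” findings are false.

```python
# Minimal sketch of the positive-predictive-value arithmetic behind the claim
# that much of the "significant" literature is wrong. All inputs below are
# illustrative assumptions; the simplification also ignores bias and multiple
# testing, which only make the picture worse.
def share_of_real_findings(prior: float, power: float, alpha: float) -> float:
    """Fraction of statistically significant results that reflect a real effect."""
    true_positives = prior * power          # real effects that test as significant
    false_positives = (1 - prior) * alpha   # null effects that test as significant anyway
    return true_positives / (true_positives + false_positives)

# Suppose 1 in 10 tested hypotheses is actually true, power is 80%, alpha is 0.05
print(f"{share_of_real_findings(prior=0.10, power=0.80, alpha=0.05):.0%}")  # about 64%

# In a more exploratory field: 1 in 50 hypotheses true, 40% power
print(f"{share_of_real_findings(prior=0.02, power=0.40, alpha=0.05):.0%}")  # about 14%
```

Under the second set of assumptions, a “significant” finding is wrong roughly six times out of seven before anyone even attempts a replication.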

While there may be times when a patent right, established on some special finding, is just the right touch of excellence to spring a development effort, perhaps the soil in which those times grow is made of something other than administrative processes that claim and exploit each such finding, just in case it is the right one. Perhaps that sort of mindset–the more it aims to take, the broader its claims, the more assertive it becomes–does damage to the soil in which good ideas form, are cultivated, and grow to become something. The more university administrators assert they have rights to research results–no matter the rationale–the more difficult and expensive it becomes for them to gather anything that matters, the more difficult and expensive it becomes to finance further work, the more difficult and expensive it becomes to build collaborations. Maybe the expansion of technology transfer programs is not a metric of the “success” of the Bayh-Dole Act, but rather of the “failure” of licensing offices to recognize that the more they claim, the less effective they become. Yes, there will still be deals, and money. But those deals come at a considerable expense to the public nature of the science that motivates the work in the first place.

The Reproducibility Project asks, “Do normative scientific practices and incentive structures produce a biased body of research evidence?” We might ask in technology transfer a similar question: “Do normative administrative practices and incentive structures produce a biased body of claims regarding research impact?” Think about it: if technology transfer deals can’t be replicated, then what is the point of the normative processes that administrators have put in place to harvest faculty, staff, and student inventions (and anything that remotely resembles an invention, or which administrators decide should be handled as if it were an invention, even if it is not)? Perhaps all the systems and processes that have been constructed to explain what technology transfer is going to do, or is doing, are like a form of poorly considered math–complicated once one accepts the application of the process, but fundamentally and simply flawed at the point of application. That would not be a disaster, nor would it spell the end of university support for new ideas–but it would open up some mystery in the world, and perhaps allow us to frame narratives that more nearly guide us to take actions that do, indeed, help research have an impact on our lives in the form of innovation that benefits many of us, and not just administrators, attorneys, and from time to time, speculative investors.

 
