Do Results Really Matter?

Steve Fiore has started an interesting discussion over at Science of Science Policy’s distribution list.  One article he cites shows that negative results are disappearing from scientific journals.  Another shows that a reported effect appears to decline over time as people attempt to replicate results.

To this list I add two more.  First, John Ioannidis has shown that a large percentage of articles published in major journals turn out to be wrong.  This is not just scientific progress overtaking early findings; the findings themselves are simply in error: misreported, poorly analyzed.

We also have to contend with the idea of “post-normal” science.  Consider:

In the sorts of issue-driven science relating to the protection of health and the environment, typically facts are uncertain, values in dispute, stakes high, and decisions urgent. The traditional distinction between ‘hard’, objective scientific facts and ‘soft’, subjective value-judgements is now inverted. All too often, we must make hard policy decisions where our only scientific inputs are irremediably soft. The requirement for the “sound science” that is frequently invoked as necessary for rational policy decisions may effectively conceal value-loadings that determine research conclusions and policy recommendations. In these new circumstances, invoking ‘truth’ as the goal of science is a distraction, or even a diversion from real tasks. A more relevant and robust guiding principle is quality, understood as a contextual property of scientific information.

What do we take from these kinds of reports and arguments?  That there’s a lot of spin out there, that published work is often wrong even in elite journals, and that for some folks involved in “issue-driven science” the spin is more important than the facts, because, well, the facts are messy and politics is, er, dunno, something about “real tasks” being something other than “truth”.

From all this, one wonders if the remarkable lack of productivity from university technology transfer is a function not only of the limitations of a compulsory ownership + linear model of commercialization but also of something deeper in the conduct of university research: folks are not only withholding adverse findings but also getting a lot of their positive findings wrong.  And when it comes to public policy, folks are openly saying it doesn’t matter, that a concern for the “truth” is just cover for loaded political leanings, and that what really matters is using the cover of science to lend authority to advice based on one’s beliefs or political leanings, because “truth” is just a loaded term for beliefs and political leanings that one doesn’t accept, or something like that.

Whatever businesses might do with their own marketing spin, when it comes to adopting new technology, they are overwhelmingly on the side of truth.  They don’t say, “Hey, just make it look good, make us feel good, no matter if it doesn’t quite work as well as you lead us to believe, just get us out there on a limb, we’ll do fine.”  No, what they want as much as anything is *candor*.  They want the good, the bad, the iffy, the roadmap, the doubts, the regrets, the roads not taken, the alternatives.  And *candor* is one thing that gets left home when the university family goes for a little outing in the patent license mobile.  Then it’s about what’s in the contract, and what’s there is just enough to warrant compensation, and nothing that would create an obligation for the university to come forward with all it knows, or to warrant that what it has put forward is reliable.  There are disclaimers for that.  Everything is at the licensee’s risk, including dealing with a university at all.  Given the problems in the academic literature, one might think university licensing operations would make a virtue of candor, would construct adoption relationships that allowed for testing and evaluation *before* any contractual arrangements were firmed up involving diligence or disclaimers.  Wouldn’t that be something?

This is one thing that’s made possible by a Make-Use Commons, since it creates a set of expectations without creating a contract with a bunch of artifact terms and conditions in it to increase the uncertainty beyond the uncertainty already in the scholarship itself. In many ways, a university license contract is a bunch of requirements to protect the university from the fundamental knowledge that the technology itself is questionable stuff. Get the questions answered first, and then there is time for building a more substantial relationship.

Is it any wonder, then, in the present university IP climate, that companies might take a long pause before jumping after some new published finding in an academic journal or on a “tech available for licensing” list?

The challenge for technology transfer, then, is not to “protect” research events but to support the circulation of research practice.  That is, the emphasis in technology transfer would be on providing instruction, and resources, that would allow others to practice what has been found.  In this approach, technology transfer would be a form of extension, and the questions would be:  is this something worth teaching to others?  If so, what’s the minimum set of resources needed to do that really well?  And who wants this instruction?

This entry was posted in Bad Science, Technology Transfer.
