Vistas of Potential and Speculative University Inventions

Today’s Wall Street Journal has a cover story on reproducing the results of medical research.  It’s behind a subscription paywall online.  (fwiw, I used some of my expiring frequent flyer miles to subscribe to keep the rest active).  “This is one of medicine’s dirty secrets:  Most results, including those that appear in top-flight peer-reviewed journals, can’t be reproduced.”  John Ioannidis has been arguing this point for years.  Now Science magazine has an issue devoted to the problem, including a new study by Ioannidis.

The WSJ article identifies possible reasons:  confirmation bias for positive results, variation in experimental setup and equipment, and such pressure to publish that data are invented, falsified, cherry-picked, and experiments rigged.  The effects are far-reaching–companies waste millions trying to replicate academic studies, doctors prescribe medications without access to some 1/4 of all clinical trial data, and, the article reports:  “Venture capital firms say they, too, are increasingly encountering cases of nonrepeatable studies, and cite it as a key reason why they are less willing to finance early-stage projects.”  The article cites one VC firm that reported failing to reproduce data in half of its reviews.

Now, other things could be going on.  There could be problems in the reviews rather than in the original studies.  But there’s also this big asymmetry:  the original findings get a big positive splash, and reporting a negative result can be read to mean that one isn’t doing things right as much as that the original study is wrong.  It could also mean that the original study is correct, but the paper reporting it is inaccurate or incomplete.  It could mean that the data that are released to back the study are the wrong data, or incomplete, or that the data are not properly documented.  The data could be raw rather than adjusted, or vice versa.  It could be that not all the metadata, such as provenance or calibration, are made available.

This problem–which is more than a problem, but rather a crisis–points to a significant limitation of peer review as presently practiced.  A reviewer of a paper can do a lot–check for consistency, evaluate methods, compare with other findings–but a reviewer is not in the lab watching workers, is not doing the same experiment, is not necessarily using the same equipment and reagents, and isn’t necessarily studying, even, the same problem.  At best, peer review of this sort is a partial review, an editor’s review, a review of a proposed paper against expectations.  Just as in the research itself, there’s a huge opportunity for confirmation bias, built into the way peer review is structured.

As the Climategate 1 and 2 releases show, once a group of scientists think they have an inside track to publication, then it is more important to stay with the group than to challenge its claimed findings.  In the climate science situation, the inside group even went after getting adverse reviewers and journal editors fired rather than dealing with adverse reviews and articles within the literature, by debate, experimentation, and presentation of all the data.  Oddly, even in the issue of Science cited above, we find another article by members of “the team” of climate scientists, still working their system.  At stake:  hundreds of millions in research dollars, academic status, and political status.  I wonder how widespread the practice is, of gaming the system rather than working within it.

For theoretical papers, where there is no particular need for data, just equations and models and arguments, peer review has a stronger position:  check the logic, check the assumptions, examine the proposed results with regard to theory and observations.  Even here, with some mathematical proofs, for instance, the logic is so complex that it’s hard even for the best practitioners to know whether they have navigated it all properly.

The point is, it isn’t science because it’s claimed, but because it is reproducible, reliable, credible.

For university technology transfer, this is also a huge problem, if not a crisis.  Whether one is sharing or proprietary, if the science is no good–experiments flawed, data withheld or incompletely reported, analysis tools withheld or circulated only to friendly scientists, reports cherry-picking findings, failing to report problems or limitations–then the tech transfer is no good.  We are not talking a paper or two here and there–we are talking about over half of all papers published in elite journals, often with authors from elite schools with “best practices” technology transfer programs and policies, all carefully reflecting consensus views.  The nature of the feeling of discovery appears to be such that we often see what we want to see, and disregard the rest.  It’s just that what we want to see turns out not to have a foundation in the kind of stuff that can be used by others, reliably, functioning as described.  But it does allow folks to frame up a value proposition, using the reputation of the university, patenting, and a vista of potential.  It takes an act of courage to challenge this potent combination, to play the skeptic to one’s own propensities for pattern recognition.

The vista of potential that comes from seeing what we want to see is a kind of fiction.  Study after study points out the potential of the research for public benefit, for industrial uses, for investment opportunities.  Universities and pundits alike repeat the claim in general terms, almost as a given, that university research is essential to innovation, economic vitality, national competitiveness–whatever it is that might induce politicians to keep the money flowing through the research system.  Research as an industry, as it is coming to be called.  University PR offices often follow up such studies with press releases of their own.  They don’t fact check beyond what is in the “peer-reviewed literature”.  That is, they don’t fact check the science.  And thus they find themselves at liberty to hype the potential–from this finding to a wide open range of commercial products providing new capability, from invisibility cloaks to microrobots running through the bloodstream.  It’s all possible, of course.  Science fiction writers have been producing such stuff for decades, and it serves a valuable purpose, if for no other reason than to explore the moral implications of doing things differently, and so to see the moral implications of doing things the way we presently do them.  Pushing the limits of potential also explores the space of what nature offers up, that we can use as tools, and for what purposes–to extend life, to repair, to build, to understand.  All good stuff.

The vista of potential is for technology transfer folks a matter of whether an invention has “commercial value”.  And an invention has commercial value when someone believes a “vista of potential” argument.  And that argument may exist regardless of the data, just as it does in science fiction.   The invention, and the data, only have to be good enough to snooker the patent examiner and any potential investor.  Snooker is the key word here.  It doesn’t mean that the inventors and the university tech transfer folks know something is wrong and try to get a patent issued anyway, but rather that they don’t know if something is wrong, don’t work to find out if anything is wrong, and apply for a patent anyway.   In some programs, where they have a lot of money for this sort of thing, they file first and worry about commercial value later, and with first to file coming, this is only going to increase as a practice.  Investors, they expect, should be coming by in droves.

What does this do?  First, it’s a horseshoe play–they only have to be close.  Imagine a university “invention” with “data” that is the subject of a patent application–specification and claims, and the claims are directed at everything in the vista of potential that the inventors and the university folks can imagine.  The patent issues, but it turns out that the original experiments are not replicable.   This does not invalidate the patent.  If there are other experiments that are close, that end up working in the same space, with the same functionality, why, then, they may well come within the scope of the claims, even though the data on which the claims were built can’t be reproduced.  No one is saying that the data are wrong or falsified–just that they aren’t reflecting something that recurs.  The invention can, one might say, jump around within the claims.  The invention that created the claims was, essentially, fictional, but it was close enough that the claims can hang around and trip up anyone who finally does get it right.  All the better if the claims are broad, the research “basic”, the technology “early stage”.   Then it is as speculative as one can get it.

The rush to patenting early and often from university technology transfer programs means that we have two sorts of inventions emerging, one from conventional inventive activity and one as an artifact of university policies on inventions and technology transfer.  The conventional sort is based on replicable science, sound data, shared tools and resources, with observation (open lab) of the experimental practices, and with observation as well of the efforts to replicate.  The other sort is based on a claim plus the vista of potential, trading off the institution’s reputation rather than the individual’s.  As long as one gets close to the truth, the idea is to get the patent ahead of those who actually firm it all up.

The first approach is historically where academic patent work started, with Cottrell and Research Corporation, with WARF, with the research foundations that were formed in the 1930s and 40s to support faculty inventors.  The aim was to produce a good thing, and share in the benefit for doing so as a partner in an industry process.

The second approach is a product of university capture of the idea.  I use capture the way Josh Lerner uses it in Boulevard of Broken Dreams–that “entities, whether part of government or industry, will organize to capture the direct and indirect subsidies that the public sector hands out” (80-81).   Bayh-Dole can be construed as such a subsidy, and the university administrative response, over the past 30 years, has been to work to capture it for the exclusive benefit of each institution that participates, all the while reciting public benefit and the vista of potential.

Dave Henderson, a baseball player, is said to have quipped, “Potential means you haven’t done anything yet.”  Instead of developing a technology to the point that it is stable, universities rush to obtain patents on unreproduced data, and then hang around to see if the net of claims can catch someone working a real variation on the theme.  It’s a form of trolling, based on getting close.  It’s not technology transfer at all.  It’s just a scheme for getting broad patent claim coverage, trading on the institutional reputation of academic scientists, sufficient to stand down patent examiners with “ordinary skill in the art” and entice unwary investors.

The proliferation of “inventions” at universities thus would appear to be a conflation of a set of legitimate inventions, sound science, and reproducible data with a potentially much larger set of artificial inventions–call them speculative inventions–that are sufficient to support a patent application, that provide for claims that cover as much of the vista of potential as possible.   These spec inventions serve, in the effort to make money, to close off pathways of independent development.   The effect of such patents is not to promote investment in commercial development, where the costs of development are high and the costs of subsequent imitation are low (though that is a typical claimed intent); rather, the effect is to foreclose development work that would come from a different base (a legitimate one), which now has to pass through a patent claim structure created by university folks looking for a payoff.  Folks aren’t trying to own an invention–they are trying to own a potential path of development, using an invention as the plausible seed event.

The universities would never do this if they could not pocket the money they made.   Were it not for the technology transfer offices trading off the reputations of their universities and reciting “public mission”, more folks would be in an uproar about it all.  As it is, it’s just too hard for the public to imagine that universities could be so bad.  It is a sweet spot if one is a clever fox, but it is also something that federal policy ought to address, if universities can’t muster the courage to do it themselves.  The capture of invention rights has led to a rapid expansion from legitimate inventions to spec inventions, and that in turn is clogging the pathways to development with patents that have no other purpose than speculating on who might finally figure things out in an economically effective way.  Some innovation system the universities have figured out over the past 30 years.  Some way for AUTM to end up, a coalition of shills and trolls trading on academic reputation until that, too, is crusty.  All the worse that they protest it’s public service, and their hearts are, somehow, in the right place.

As for scholarship, we have a deeper problem for technology transfer.  It is not that the individual inventor should be distrusted.  It is that the inventor ought not to be summoned to the tech transfer office every time there’s the prospect of an invention.  It is the effort to turn out new inventions based on sketchy science that produces these spec inventions.  It is a failure of patent policy statements to distinguish legitimate inventions from speculative ones.  The university claims all of them, often even if “not patentable”.  The science has to be done before the invention is worth considering, even if that means the university–or even the inventor–must wait for a shot at making a lot of money from licensing.

The desire to make money is not the problem.  But that desire changes substantially when it moves from an individual who has invented–where it is balanced by many other interests and concerns–to an institution that hires people with the professional purpose to exploit inventions to make money (“and for the public benefit”).   For them, the royalty sharing policy is a formal way of including individual inventors in this institutionally sanctioned effort to make money.   The institution’s conflict of interest policies are then invoked to do just the opposite of what one might expect–inventors are not permitted to make decisions regarding their inventions, because, the policy is imputed to say, they are conflicted between their public duties and making money for themselves.  The conflict is actually between the university’s efforts to make money in a professionally sanctioned way, and the individual’s possible efforts to make money, the latter of which would be bad (it is claimed) because then the institution would not make money (“and the public would be harmed”).  You have to love the logic:  the university argues that if the individual inventor advocated for some outcome *other than making money* this would also be a conflict of interest, as it would be an attempt to influence the university’s making of contracts for a private benefit (even if not for the money).  Some few institutions, like Stanford, bless their institutional souls, allow inventors to place their inventions in the public domain.  That’s a start–or actually, that’s a remnant of what’s left of what was once a highly productive research / invention ecosystem.

Once the fox is in the hen house, then all that fencing is a good thing, as it keeps the hens in, where they can be tormented and eventually eaten.

At the point that a university professionalizes its money interest–as evidenced by making the reporting and assignment of inventions compulsory, drawing a broad definition for scope of interest, and drawing a broad definition for what constitutes an invention–the institution has a conflict of interest that’s huge compared to the issues of money and science that bang around in an individual’s head.  For the individual, there is a question of whether the science is any good, whether relationships with a sponsor matter, or with industry, or whether there might be jobs for students, or someone who would benefit from taking the next steps with the work.  One’s personal reputation is on the line.   If it weren’t for the university’s professional effort to make money from their work, faculty would have *much reduced* conflict of interest issues.

We have a failure here to understand the richness and ambiguity of motives.  A university conflict of interest policy, in the presence of a professional money-making operation like a technology transfer office, argues that giving away an “invention” means that the faculty inventor was deliberately (or naively) spiting the university’s efforts to “benefit the public” in favor of some personal reward that comes from such an unauthorized give-away.  Furthermore, such giving away of inventions shows disrespect for authority, is a breach of obligations to the university, and is further evidence to present to the public of how selfish and dangerous faculty can be.  It is absurd.  Yet who calls the “ethics” drafting people to account for getting caught up in a stupid handling of inventions, around a documented, systemic failure of scholarship, tied to a dangerous ideology of “conflict of interest” that has itself gone rogue within universities?

Why should industry read academic papers reporting new technology if many of them are wrong?  Why should anyone in government build policy around such papers?  Why should industry come shopping for such technology to professional money-making outfits like tech transfer offices if they are in it for the speculative inventions and not for the legitimate ones, and in any event aren’t checking the data, asking for independent reproduction, and open sharing of data, tools, procedures, and facilities?  Why should venture investors think that a university’s involvement reduces uncertainties when it appears that scholarship–whether for grants, publications, or patents–is loaded with confirmation bias, doctoring of data, refusal to share or permit inspection, and presentation of routinely defective work?

We live in an age where credentialing has been traded on for speculation and fraud.  Risk markups on mortgages have been falsified, trading on government insurance to make up the losses.  Clever corporate structures have been used to scam energy markets.  We are inundated daily with email from purported lottery winners, government officials, and Nigerian widows all wanting us to share in their worldly wealth.  And in technology transfer, we have the situation in which university reputations, tied to Mertonian norms of scholarly conduct, are being used to shill inventions without anyone doing the deep dive to check the truth of the claims.  It’s another form of credentialing exploitation.  This time, at the risk of taking down university research as a source of reliable, independent, verifiable information–and with it, the public rationale to fund such research.  Folks are playing with fire in a warehouse of fuses, but in the moment, university technology transfer looks like such a success.

Collaboration between universities and industry is not, at its heart, about getting something for nothing (in the case of industry getting patent rights without paying) and it is not code for industry paying for research (though industry might do so).  It is about providing access–both ways–so that work in one lab can be reproduced in another.  That, my friends, is technology transfer, and until something as basic as that has happened, there’s no point in constructing any vistas of potential, unless one is writing science fiction.  Yes, you can get to an “invention” too early.  When you do, no wonder you need a lot of sophisticated professionals.

When the transfer happens for science, we call it verification.  Verification can’t prove a theory, but it sure can demonstrate a method and confirm the operation of a device.  When you have an effort at verification, then you have peer review that matters.  The science and the transfer are interwoven, part of the same fundamental process.  Without first order imitation, you don’t have science, at least not public science.  You can’t extract the transfer (or the imitation, as it were) from the science, and then say that you can put it back, later, after you have a commercialization deal, by “reserving rights” for nonprofit uses.  The transfer has to stay packed into the operation of science.  That’s what universities can do better than any other institution, if they decide to do so–and in droves these days they are deciding not to do so.  Or, more accurately, university technology transfer officers and their direct supervisors in offices of research or finance, assisted by campus counsel, are making rogue decisions this way and it is too costly for anyone to oppose them.  They have a degree of impunity about it all.  I get communications from folks that they would speak out, but it would cost them their jobs, if not their careers.  Interesting times.

When the transfer happens in industry, we call it practical application.  In the industry setting, the question is not simply verification, but whether, when verified, the finding has a use.  That may be use as a research tool, or as an internal procedure, or as an element in a product.   The assessment of a finding for use is the step that transforms a finding into a tool, an asset.   It is when a patentable invention actually happens–when it is new, *useful*, and non-obvious.  Of course, “useful” to a patent attorney doesn’t necessarily mean anyone has demonstrated a use–a use merely has to be claimed that is sufficient to get the patent issued.  But here, for technology transfer from universities, use means that there is 1) independent verification and 2) an application of the findings.  Until you have these, I’m arguing, you don’t have a legitimate university invention, and when you have these, it is also very likely that the invention is not going to be exclusively yours or the university’s.

So one can skip the verification and practical application, create a vista of potential, file a patent application that constructively tries to claim as much of that vista as possible, and wait for suckers to show up–suckers who think that universities are reputable places when it comes to publications and patents, who buy into the vista, pay for rights, and go off to find out how wrong they are.  Or one can drop the comprehensive policies that isolate inventions, conflate inventions, and focus on the money relationship in the licensing.  Instead one can push the verification transfer (science) and the practical application transfer (utility), and only after these steps consider whether there are remaining patent rights that are appropriately held by the university (or some other steward) on behalf of a public interest in further development.

For verification transfer and practical application transfer, a professionally staffed technology transfer office focused on “commercialization” is precisely the wrong tool.  It is a barrier to the activity that it claims to be able to do itself.  There is nothing wrong with having a commercialization approach to technology transfer.  That’s fine, even interesting.  But there’s a huge problem in making that activity compulsory, having no alternatives to it operating in parallel, and a conflict of interest policy that is used to keep the fox terrorizing the hens rather than separating that interest from the processes of science, which, as we can see from the WSJ article and from John Ioannidis’s work, are seriously and frequently compromised as it is.  With less capture and more openness comes greater personal freedom, responsibility, and accountability.  That, more than any comprehensive policy, shapes the morality of money-making decisions, the accountability for indifference to, or suppression of, verification and practical application transfer, and the actions that allow high quality scholarship, and legitimate inventions, to surface and benefit from professional management.



