Moose Turd Pie, and No Good

The Economist ran a cover story last week on “how science goes wrong”:


An argument of the piece is that journals like splashy claims but don’t have room for studies that announce validation of prior reports. The article goes on to consider problems with the use of statistics to claim 95% “confidence” in findings: that leaves, potentially, 5% of findings wrong, but worse, because of how false positives and false negatives accumulate, more than half of published findings may be erroneous. And that’s just within the confines of statistical models, not to mention sloppy work, cherry-picked results, fabricated data, and the like. It’s not that all academic science is bad, or that scientists are fine with this sort of outcome, but neither is it the case that we are dealing with only a tiny sliver of badness. Many of the studies in question are in the elite journals. One can say that peer review has failed, or that there needs to be greater deposit of data and access to tools: all true, but there are deeper problems than these, especially for how academic science is conducted, promoted, and documented.
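The arithmetic behind that “more than half” claim can be sketched in a few lines. The prior probability of a true hypothesis, the statistical power, and the significance level below are illustrative assumptions, not figures from the article; the point is only that a plausible combination of them pushes the share of correct published positives below 50%.

```python
# Positive predictive value (PPV): of the findings that reach
# statistical significance, what fraction are actually true?
def ppv(prior, power, alpha):
    true_positives = prior * power           # true effects correctly detected
    false_positives = (1 - prior) * alpha    # null effects that slip through at p < alpha
    return true_positives / (true_positives + false_positives)

# Generous scenario (assumed numbers): 10% of tested hypotheses
# are true, 80% power, significance at p < 0.05.
print(f"{ppv(0.10, 0.80, 0.05):.2f}")  # 0.64 -- already far from "95% confidence"

# Underpowered scenario: same prior, 35% power.
print(f"{ppv(0.10, 0.35, 0.05):.2f}")  # 0.44 -- a majority of positive findings are wrong
```

With a lower prior or lower power the share of true positives falls further, which is the gist of the argument Ioannidis has made about published research.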

The LA Times has followed The Economist story with this headline and lead:


The turn that the LA Times puts on this mess is that open access journals are somehow more at risk for publishing flawed work. It would appear that there is a rear-guard action by the publishers of major journals to ascribe the problem to open access and lack of the “quality check” that the leading journals claim for their work. Yet John Ioannidis, a Stanford professor doing some of the key foundational work, focused on the elite journals to show they were publishing many useless papers.  

The elite journals are proud of their “peer review” processes, and no doubt many papers are improved by taking into account such comments prior to publication. However, peer review is nothing compared to the effort to replicate studies. Peer reviewers cannot be expected to have the time or resources to mount an independent effort to run the numbers to confirm a study’s claims. The open journals work the system differently:  folks can publish what is plausible, and then readers, not pre-readers, offer comments that refine the paper, qualify it, confirm it, or show its flaws. Even here, however, it’s mostly readers poking at things, not folks running the experiments again, or re-evaluating the data based on the same, or different, tools.

The problem comes to technology transfer in multiple ways. Companies expecting to use published findings have to put real resources into the effort to validate the studies. They are finding, in some areas at least, that they generally cannot do that. There are plenty of reasons for this: the articles do not present all the information necessary for replication; the authors do not deposit the full data sets, or deposit them in a format that is not readily usable, or do not supply the analysis tools developed to examine the data. There is also the tacit knowledge, the interpretive judgment, and the like that may inform how experiments are designed, apparatus assembled and calibrated, and data collected. And there is always the problem that the replication effort, even with great information from the originating lab, is flawed by any and all of these same issues: the replicators lack skill and care, or don’t want the study to be validated.

For all that, there are other things going on. For one, universities often base their decisions of promotion and tenure on the number of publications, or of “quality” publications, rather than on the validation and use of the findings. If all a publication is meant to do is spark a conversation (and a bit of envy), then fine: a scientific publication is just another sort of public rhetoric aiming to get people talking about the author’s ideas, not others’. That is a kind of impact among academics. But it’s not much for technology transfer.

For technology transfer, the goal is use, and for that, one aims to transfer sufficient infrastructure to support use in a new venue, by a new lab. For that, independent validation is a huge step. A scientific claim worth the time to validate, and which is validated, is important to technology transfer. One would think that independent validation and use would be the hallmarks of great scholarship: not the number of publications, not the number of citations. That’s all, really, crap. That’s not doing anything other than putting on a show. A coterie of academic authors can pump their statistics by citing each other, making it appear that their work is thus “central” if not a “consensus” and thereby “true.” More crap, if not corrupt, but in that playful way of academics, as if nothing really matters except career advancement. Certainly no one is called in by the dean for selectively citing prior literature.

Technology transfer establishes a claim on scientific publication. The argument for technology transfer is that use is a primary outcome of scientific discovery and invention. “Practical Application” is the focus of the Bayh-Dole Act, to “use the patent system to promote the utilization of inventions made with federal support.” It’s not about “commercialization” or about making money, but about use. The challenge in transfer is that it often takes a lot more than a scholarly article to explain how a new finding was discovered, how it might be replicated, and how it might be used, extended, and applied. Certainly it takes more than a patent with obtuse language, or a license that releases practitioners from the threat of litigation for using a published discovery.

And here we get to the problem of university IP policies. Back in the olden days, when I was but a wee lad, most academic work was made freely available. Not a lot of it was patented or otherwise claimed; faculty for the most part decided when something was right for a patent approach, and chose the strategy and means for pursuing that approach. The US government, to the extent that it funded faculty research, in some cases took an ownership interest to ensure that everyone got access to an invention, or to deal with the problems of nuclear energy and weapons systems without the interference of commercial interests, or of those who had an interest in disrupting commercial or open competition with monopoly positions. Most published stuff was open, free, available. One could read a scholarly publication and act on it. Times have changed.

First, patent law has expanded the scope of patentable subject matter. Software is patentable. So is DNA (if synthetic) and all sorts of living systems. So are business methods. This isn’t the place to worry over whether these things should be patentable; the point is simply that stuff that would once have been beyond patenting is now mainstream.

Second, as a result of a dispute between a faculty inventor (Madey) and a university (Duke), a court threw out the “research exemption.” Previously, there was a working presumption that anyone could work with a patented invention to evaluate its properties, to verify that it does work as claimed, and in particular universities could do this. The court disagreed, arguing that universities were in the business of making money from research contracts, and more money from licensing patents, and had no research exception–not in the law, not in dicta, not no-how.

Third, university administrations have gotten into the patent business. Rather than being selective, and holding a few inventions because they were matched with institutional strengths or should be made available broadly, administrators have adopted a comprehensive approach, asserting ownership of pretty much everything, and then trying to sort out what can be used to make people pay, and what is worthless for such a purpose. The policies now read, not “we own what you decide should be owned” but rather “we own regardless of what you think about ownership.” Again, if you see a difference here, then you understand what is at stake. Combine an expanded subject matter for patents with the loss of a research exemption and a greatly expanded claim on faculty scholarship, and one has a recipe for moose turd pie.

Take it one step further. It is not merely a matter of the volume of inventions reported, or even the strangely expanded definitions of “invention” that universities introduce into their policy claims, such as “non-patentable inventions” or “tangible materials” or “know-how.” The problem is in the holding of institutional ownership in stuff that otherwise would be published without any such claim. The IP policies create ownership where there would be none, or a release of interest, or a generic, assumed license. The ownership claim, made broadly, transforms scholarly publication from a teaching function, however flawed, to an advertising function. The difference is profound. If a publication intends to teach findings, then others should expect to be able to use those findings. Publication carries an implied license to use. Publication should exhaust certain proprietary claims on information and the practice of that information.

But if a publication is merely advertising, then it is just more spin, a flashy thing to get people’s attention. It does not matter, really, what is claimed, so long as it is plausible. A publication is just “puffery” or “bluster”–nothing really that needs to be accurate or replicable. In this way, university IP policies have fundamentally undermined academic publication. The change may have been silent, like a meteor shower while you sleep, but it has happened nonetheless.  Institutional ownership of scholarship undermines the conventions of scholarly publication, including an expectation of reliance and use, and replaces these with the requirement of licensing (and payment) before any such use.

I have previously outlined rights around a research invention. Here they are again:


- Evaluation of
- Research on
- Research with
- Have made
- Import (for internal use)
- Offer for sale
- Import (for sale)
- Sublicense (for manufacture, interoperability, cross-license, standards)

Universities routinely claim that they reserve rights in exclusive licenses for “non-profit or educational” use, or “non-profit research.” However, universities do not announce such licenses when they claim IP, nor do they announce these reservations of rights when they enter into exclusive licenses, nor do they generally make it clear that the reservation of rights extends beyond the scope of the institution itself (to include other institutions or practitioners or companies). Yet scholarly publication should require a university making ownership claims to grant a general public license to all claimed IP, at the time of publication, for the “evaluation of” the publication’s claims. This general license, one might argue, is implied in scholarly publications from university faculty, when an institution claims ownership. But I doubt I would find many university administrators who would agree.

One might go further and argue that universities, if they are making comprehensive ownership claims to faculty scholarship, should restore the research exception for all evaluation and research uses of a published account of an invention or discovery or research tool. Anyone, without regard for tax standing or presumed intent, should be assured a license without formalities to make and use an invention for the purpose of conducting research on the invention, and for the purpose of conducting research with the invention: using the invention in the manner reported or suggested or implied by the publication.

In this, it is not enough to extend a public license to evaluation or research use for the host institution (meaning, I suppose, confirming that other faculty within the institution have the administration’s permission to work with what one of their colleagues has developed and published). It is not even enough to expand to “non-profit” or “educational” or (worse) “non-commercial” use. If research is to be advanced, then everyone should have access without formalities. This argument is not one that most university technology licensing officers will accept. They railed at it at the Nine Points to Consider Meeting, for instance:  they did not want to consider it, and did not want anyone else to consider it. Yet, for academic publication to mean what it used to mean, such freedom, granted upfront and publicly by the institution claiming ownership, is essential.

It makes sense that universities promote internal use, practice use, field use as well. Such use, especially in the case of methods, does not require commercial productization, monopoly rights to support “investment,” or diligent threats to keep field use from happening until a product is available (“to preserve the market opportunity”). But even in granting scholarship and research uses, universities would set an important example and would begin to restore some of the freedom and incentives that have been lost with the expansion of the scope of IP and the expansion of comprehensive claims of institutional ownership.

Consider the “ecosystem” of university research. Even if one university “allows” other universities and their research faculty the benefit of a non-exclusive license to published results, those faculty have no standing to collaborate with their for-profit (or even non-profit, non-university) colleagues on such results. Furthermore, no inventions that they make based on the previously funded findings can circulate for practice without permission from the administrators at the originating university claiming ownership. They are exposed to claims of contributory infringement (teaching others how to infringe by teaching an improvement or application) and expose their collaborators to all sorts of liability. Worse if the originating university grants an exclusive license to a company together with the right to pursue infringement.

The university then is establishing a business model in which its profit interest is tied to litigation against other universities and their research and technology transfer collaborators. The local model is antagonistic to the national objective. If each university pursues this local model, the overall effect is one of shutting down vast swaths of collaboration, independent examination, validation, development, and application, and as a consequence shutting down even greater swaths of interest in the original claimed work. May as well ignore it and do something else, since the original work is destined for some exclusive license, or for a sufficiently long delay that by the time it is licensed non-exclusively, it will be worthless and useless.

The present practice of university IP policy is destroying national technology transfer opportunities, even as it hypes its local money-making successes: it does so at the expense of the national ecosystem. It’s a travesty of self-interest dominating community interest. All the worse that the self-interest is being asserted by social institutions that should be the stewards of research, not the profiteers.

In the Teece model, universities have claimed they are the “innovators” when in fact they are the stifling “infrastructure.” Claiming everything early and often kills the national university research ecosystem and with it the utility of scholarly publication other than as advertisement and career self-promotion. These are real, even acute, effects of present university IP policies.

There is one last effect that I have not seen noted, but is also potentially significant. Imagine a university investigator with a research finding that looks promising. In the current condition of publications, it has a good chance–maybe better than 50%–of being unfounded. However, the university IP policy demands that the finding be *owned by the institution*. Moreover, to cement that ownership claim, the IP policy demands that the university file a patent application. So off the finding goes to a patent attorney. Now if the attorney is doing a typical job, they will draft claims specific to the promising result, and then will expand the claims to include anything within a class of results as broad as can be justified by the prior art in hand. It will be up to the patent examiner to argue for narrower claims and find the prior art to back it up.

Consider the effect, then, on the patent literature. A university could claim a broad package of patent rights based on a finding that turns out to be unworkable, erroneous, wrong. Yet the finding is plausible, defines a whole class of other findings that the patent also claims to control (a class which may well include something that is workable), and may very well get past a patent examiner. The result is that the university uses the mere prospect of an invention to claim work that it never did, that it does not know how to do, that its patent specification has no basis to teach, but that can be anticipated, implied, or claimed anyway with clever drafting.

This exploitation of the patent system is a way to interfere with the work of others without demonstrating that one’s own claimed findings actually hold up. Doing this kind of thing repeatedly, across whole domains of research, creates minefields and poison where there was never any dispute, nothing to defend, no basis for an assertion of ownership.  When each university does this–or even a few score do it–an area of research is all but destroyed. What could have been a park is a maze of trenches and barbed wire.

The problems with peer review and bad science are only the tip of the problem. University IP policies and technology licensing practices have over the past 30 years done a great deal to contribute to the damage, and have to be part of the reform of academic science. A good starting place is to grant a general license for all internal uses, and to grant all users an option for “post-user rights” (in contrast to “prior-user rights”) to sell, offer for sale, and import. Any other commercial licensing ought to be in the context of these defaults. If an exclusive license is on offer as well, it comes with a reservation on behalf of early adopters and those operating for their own internal use.

This entry was posted in Bad Science, Freedom, Innovation, Policy, Technology Transfer.