Bayh-Dole and Diversity

In discussions of diversity of practice, I encounter an urge to compare programs to obtain metrics of performance. In Canada, a number of universities have inventor-owns policies. The University of Waterloo, for instance. The immediate gesture is to ask whether Waterloo is *any better* at technology transfer for having such a policy. But does it matter? That is, should it matter to Waterloo that, with different metrics (or even the same metrics), another university makes more licensing income? Or signs more licenses? Or files more patents? Would it not be enough that Waterloo’s policies (and technology transfer functions) do what they are intended to do at Waterloo? Isn’t it sufficient that Waterloo’s policies and practices perform well within their local context? Does Waterloo have to intend to do what some other university does?

Dan Ariely, in *Predictably Irrational*, talks about decoy pricing and how we tend to judge value by seeking comparisons: two options that are similar to each other will get more attention than a third option that has nothing close to it, and folks will tend to take whichever appears better of the things that can be readily compared. In tech transfer, if you can’t judge the worth of something on its own, then find a way to judge it in comparison to something close to it, like the tech transfer program at another school. Anyway, that’s apparently the impulse.

Yet tech transfer is not merely a function of a technology transfer office’s efforts. In practice, one might even argue that most tech transfer happens despite those efforts. This isn’t to say tech transfer folks aren’t needed; in many cases, they are helpful, even critical. But the inventions are made in research that is not controlled by tech transfer. Inventions are reported as a function of investigator and inventor choice, not so much because of the fine “training” provided on the topic, regardless of whether the “training” is a veiled threat about compliance or a veiled come-on to greed. Patents are filed as much on the basis of available funding as on merit. And in any event, the number of patents may not have much to do with the effectiveness of the program, even if a lot of patents might increase the chances that some few “hit it big”.

Stuart Kauffman, in *Investigations*, suggests that life is “in the ordered domain, on the edge of chaos.” That is, life-like systems have stability, but also actively explore diversity. Put another way, they don’t remember too much or worry too much. From quite a different direction, Jared Diamond, in *Collapse*, describes various societies that fail as a result of changing conditions–such as in environment or trade. One point Diamond makes is that the values that sustain a society in terms of status and function may not be the ones that give it strength in changing circumstances. The Greenland settlers would rather starve with their livestock than eat seal meat with the Inuit.

From these discussions, one might turn to research that aims to discover, to invent, to validate, and to challenge. For this work, engagement with community is, one might expect, not a single, stable function, but rather many diverse functions. At least that might make a starting premise. Instead, the working premise in US university technology transfer appears to be that a single model is best. Universities claim ownership of research inventions, file patents, and shop these to industry. Oh, there’s a lot of different ways to shop. But the goal is to make money from an ownership position. The cover is public benefit. The pragmatic administrative equation is that what’s good for the university is the public good. And the simpler and safer the better.

The point of diversity is that it spreads out the search for the new, beyond the “adjacent possible” (to use a Kauffman term again) and beyond the values that got us here. Innovation provides not merely the optimization of opportunity under present conditions, but also the resiliency that comes from multiple ways of doing things, from having backups at hand arising not from legacy but from exploration. The search for new local maxima as solutions to a social challenge will look inefficient, even wrong-headed, to folks with a fixation on improving the practice at hand. The desire to optimize is one of moving up the existing local maximum, of staying on the road already set out, of doing things better and quicker. It’s a dream of clever shop practice, holding everything else constant.

All this may seem far afield. Creating new programs out on their own is not how comparative metrics get made. When we created a new software-directed practice at UW, there were patent folks who hated it. Software was easy to manage. Faculty would be confused. Policy didn’t allow it. It made the patent-first approach look bad. Poof.

One can look at comparative metrics as a form of hazing, an effort to pull practice into a standard state to substantiate one’s claim to “best practices”. In this, comparative metrics actually serve as a conservative argument about how things get done. If you don’t like change, or competition on practice, or exploration, then create a comparative metric that shames folks for not being like you. Put it all in a list. Show where everyone ranks. Then it’s all about execution and efficiency. Arguing for other goals for IP becomes an excuse. So does other exploration of practice.

In all of this, it serves a conservative position relative to innovation management to argue that Bayh-Dole requires university ownership of inventions, or that the intent of Bayh-Dole was to create university patent licensing shops to make discretionary money for university research and inventors. That is, it serves a conservative position about patent practice, one that greatly reduces the ways in which research innovation may come about. By restricting the pathways by which innovation opportunities can be addressed, the thrall of comparative patent licensing metrics really can and does stand in the way of innovation. When you need a new practice, you don’t have it, nor does any other school, and there’s no opportunity to build it, so the practice is lost, not merely some local opportunity to collaborate.

In all this, one might say: it is useful to have a national approach to innovation that explores multiple strategies for using patents to promote the use of research-originated inventions. It ought to be a goal of national innovation strategy that there are diverse, competing ways to deal with patents. For every startup or product, there should be a commons or standard. For every dollar of licensing income, something dedicated to US industry without charge. For every corporate-style play, something moving through individual action. Why is there so little of this diversity in US university IP practice? Is it the network effect of having everyone do the same thing, so it’s easier to hire and train folks to staff IP offices? Is it the satisfaction of being able to make an easy comparison rather than a hard one, of doing what everyone else has done rather than designing practice for what’s needed?

One thing I’m sure of: it’s not Bayh-Dole that restricts the diversity of US university IP practice. It’s other things. Maybe it is as simple as this: US universities have become administratively dull places to work. Is it that university administrators are, for the most part, unsuited even to think about innovation practice, let alone the means by which a breakthrough network operates to deliver something new to a community? I’m not sure these are merely rhetorical questions, carrying their own answers, or whether there are other issues at work.

Another thing I’m not so sure of, but think worth mentioning: comparative metrics are not particularly helpful in increasing the diversity of IP practice. Once it comes down to making the most money with the least investment, there’s not much more to it. Any old standard model will work. Pick one that doesn’t require any smarts at the top to work it. That way, one can tell how things are going simply by looking at the comparative metrics. Is that how we expect to run innovation programs? It seems to be, at a lot of places.

The point of diversity in innovation practice is that we develop the techniques we need before we need them, that we develop expertise that can create and adapt what is needed for local conditions rather than trying to fit everything into a bit of a model built out of a fifteen-year-or-so biotech speculative investment window running on decade-long product cycles aiming for billion-dollar-a-year products backed by a few crucial patents. For this, we need to mine public policy like Bayh-Dole for its potential rather than for the narrowness of comparative, administrative comfort zones.

Quite apart from the conventional metrics of technology transfer, a good diversifying metric would be the number of US IP programs reporting different metrics to characterize their operations. At a recent SSTI conference, one session took this up and had assembled a list of (was it?) 200 or more metrics that offices were reporting in their annual reports. It would be good to see such metrics begin to reflect program goals that did not have to live or die based on comparisons with other schools’ practices. One way to do this is to construct a set of metrics tuned to the stated objectives of Bayh-Dole and Circular A-110. That would make for some interesting innovations in university accountability for all those patents they claim ownership of.
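To make the proposed diversifying metric concrete, here is a toy sketch. All of the office names and metric names are hypothetical, invented for illustration; the idea is simply to count how many programs report a portfolio of metrics that no other program reports, so that sameness scores low and local invention scores high.

```python
# Toy sketch with hypothetical data: scoring the diversity of metric
# portfolios across university IP programs. Office and metric names
# below are illustrative assumptions, not real reporting data.
from collections import Counter

reports = {
    "Office A": {"licenses signed", "patents filed", "licensing income"},
    "Office B": {"licenses signed", "patents filed", "licensing income"},
    "Office C": {"open-source releases", "standards contributions"},
    "Office D": {"startups formed", "regional jobs supported"},
}

# Count identical portfolios: two offices reporting the same set of
# metrics collapse into one entry, registering as conformity.
portfolios = Counter(frozenset(metrics) for metrics in reports.values())

# The diversifying metric: offices whose portfolio no one else reports.
distinct = sum(1 for count in portfolios.values() if count == 1)

print(f"{len(portfolios)} distinct metric portfolios among {len(reports)} offices")
print(f"{distinct} offices report a portfolio no one else does")
```

With this toy data, Offices A and B collapse into one portfolio, while C and D each stand alone, so the diversity score is 2 of 4. Nothing in Bayh-Dole requires the score to be low; practice does that.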
