Explaining an Emblem of the Linear Model

Gene Quinn at IP Watchdog posted last October a nice essay (h/t to François Stofft at the LinkedIn International Technology Transfer Professionals group) on the problems of accelerating technology transfer by federal fiat. The new programs that have been announced by the White House are good as far as they go–helping small companies with a web site stocked with information, providing IP assistance, making it easier for agencies to partner with private concerns, asking agencies to count the number of patents as a measure of their research productivity. Well, maybe that last one is screwball.

However, the thing that caught my eye was a PowerPoint slide that he attributed to Linda Katehi, the Chancellor of UC Davis:

This is a wonderful emblem that aims to rationalize technology transfer. I have always found this sort of thing troubling.  If the diagram is just an emblem, a pretty picture that communicates that there is order in the world, recognized by administrators, and the emblem reminds folks of this, then perhaps no harm is done. But if anyone actually thinks that the picture depicts something out there in the world, and aims to base policy or practice on it, well, then we have some things to consider.

Let’s start with a few problems. First, the symmetry. Everything on one side is “public” and on the other side “private.” But we don’t know what these terms are referring to. Is it money? facilities? research activity? inventions? It may be as simplistic as depicting the idea that inventions claimed by public institutions have to be licensed to companies. But even that misses the point that inventions claimed by public institutions also ought to be licensed to other public institutions (and this rarely happens, despite the protestations by some university officials that it is their policy to reserve rights for such things). It also further misses the point that inventions, before they are claimed by public institutions, are privately owned by their inventors. If one merely released the public claim on the invention, then it would be on the private side of the diagram to start with!

We can also ask whether “public” includes private universities. If it does, then “public” must mean something more like “research funding from the government.” The blue line arcing downward and to the right must mean a decrease in government funding.  But oddly, it just goes down without any increase from other sources of income until midway through the “Valley of Death.” Lord knows why it is called that, but this is also labeled “licensing”–the most useful correlation in the diagram, though I doubt it is intended.

This drop in government funding is disturbing. Every year, the government puts out some $40b in research support for universities, often continuing support in areas of emphasis, like cancer research or batteries. Why then is the blue line going down? Does it mean that the government loses interest before private industry gets interested? It appears that private industry (if that’s what “private” means) only gets interested enough in stuff to put money into it once “licensing”, or the “Valley of Death” (same thing) has begun. That seems terribly odd to me. It would appear that the government funds less and less, and then stops once something has made it through “licensing” and the private sector takes over. It isn’t this way at all, but I can’t see any way of reading this diagram for what actually does happen.

Well, now, let’s say that the arc is also just emblematic. It really doesn’t mean that funding changes in research. Perhaps it is supposed to mean that the government funds more basic research than it does applied research. But that’s not true at all.  The government funds way, way, way more applied research than basic. And lots of the applied research is funded collaboratively with industry, which is also funding applied research. Microsoft, for instance, by various estimates, funds somewhere north of $6b in research per year, much of that applied. That swamps out the entire sponsored budget for the University of Washington, which at around $1b is the subject of some pride.

Or perhaps it is that the government funds more basic research than private industry funds. This may be true, especially if we revisit the definition of “basic research.” The best compendium of such definitions that I know of is here. Basic research, in its early uses, meant research that was done to increase knowledge and not for any particular commercial application. In some early formulations, even, if you can invent, then you are not doing basic research. “Applied research” asks whether a given body of knowledge might be useful for a particular purpose. These definitions are baked into federal contracting, in the FAR, in the OMB Circulars, and even into the regulations implementing Bayh-Dole (see 37 CFR 401.1, where a federally supported study proposing to find new knowledge is distinguishable from a closely related private effort to apply that knowledge to a practical application).

We can add that “fundamental research” in federal definitions is basic and applied research, where the results are made public. From this one can see that the tax standing (non-profit, for-profit) and charter (agency, company) and funding sources (government, private) don’t really have anything to do with fundamental research. A public university using federal research funding can withhold results and step out of fundamental research (and squarely into an EAR/ITAR miasma).

Even so, the distinction between “basic” and “applied” looks pretty superficial. It’s the kind of thing one might come up with for clerical reasons, or to put a face on curiosity when everyone demands that folks be practical and purpose-driven. In fact, Benoit Godin has shown that the idea of moving from “basic” to “applied” arises from a misreading of an early report to Congress by the NSF, which organized its funding into groups with these column heads. From this we get the “linear model”, which is so familiar, so apparently reasonable, and pretty much useless, if not downright damaging. As I’ve tried to point out in a few essays, the linear model is a device that plays things backwards. Once you have got a successful application of something new, you can trace back to various sorts of research that appeared to play a role, and then you can construct a narrative that runs the events forward, confirming the model. You just can’t build research policy or practice from such a thing. Or, well, you can, obviously, because people do that. But it’s policy worse than the unknown, because it imposes a foolish order and breeds false confidence among folks aiming to “manage” research.

In many circumstances, the applied research may come first, to be filled out with basic research later, to provide an explanation or a theoretic basis for the observed behavior. This was the subject of a running debate in industry a century ago–do you work on a new product until you run into problems and call in the scientists, or do you set up the scientists in a play area, wait for them to uncover something, and then call in the applied folks? It would seem that it works either way, but if the experience at PARC is any guide, when the scientists play, they are much more likely to find things that your own organization simply can’t use. In PARC’s case: Ethernet, PostScript, graphical user interfaces, the mouse, and Steve Jobs, among others. One might say that while applied research may generate a lot of variation around a theme, with occasional oddness, basic research means something leapfroggin’–radiating out in any unexpected direction one might imagine. It’s tough for anyone to specialize in the whole world, so that each new thing makes enough sense to “market” at all.

For all that, one can be curious and practically minded all at the same time, working to study the stars by building very practical, extra-sensitive radiation-gathering instruments.  Basic? Applied? Applied to Basic? One might add, if the hallmark of basic research is that it is not done with a particular commercial intent, then all those “commercialization” whizzes running around university labs trying to “change the culture” so that folks are way overstimulated by the prospect of making a buck are undermining the very idea of basic research. One might think that doing so might in fact lead to a problem of integrity, and a diversion of focus from a determined effort to get at new knowledge (typically by trying to disprove a claim or explanation) to stuff that is “just good enough” to pass for new knowledge.

Returning to our diagram, we can consider the introduction of “Translational Research.” This is an odd one. In its basic formulations, “translational research” had to do with building collaborations between clinicians and theorists, so there was better problem definition, greater awareness of observational and empirical data and constraints, and therefore (in theory) a greater potential for the uptake of new knowledge in the practice community. In the form used here, however, it’s clear that someone has been rationalizing about the idea and is using it to mean something akin to “applied research.” What’s more, it’s applied research divorced from practice.  Industry is way over there on the right side of the diagram. The idea presented by the diagram is that scientists do work to learn new things, isolated from commercial concerns (and ignoring those durn technology transfer officers), and then more scientists try to figure out where those new things might apply to industry. Then, armed with patents that look like Angry Birds, they lob volleys of investment and infringement into the piggies that work in industry.  This, then, is why “licensing” is called the “Valley of Death.” The diagram is starting to make some sense!

Translational research does not follow basic research–even less so than applied research follows basic research.   One might argue that translational research aims to do research in a way that differs from the isolated scientist working for years, V’Ger-like, on a problem that may long ago have gone obsolete in practice, bug-fixing Win95, so to speak. Putting basic and translational research side by side therefore might indicate that they are competing modalities of research, but there’s nothing to indicate this in the slide. It would appear that an idea has to pass through translational research in order to be lobbed into the Valley of Death. The diagram posits that from this lobbing action company interest arises. Thus, literally, the centrality of “licensing.”

The function of licensing, conjoined with the Valley of Death, occupies the center of the diagram the way old maps of the universe put the earth at the center. And as in those old maps, where the earth was the place of dross to which all bad things eventually descended, something similar must be at work for “licensing.”

Why is licensing so important to “innovation”? Clearly, there is rhetoric at work in this diagram. Someone has put licensing in the middle to show that it is important.  It is the bridge across the Valley of Death, it is the critical function that bridges the public and private worlds, saving research ideas just as government funding wanes, by attracting private investment in startup companies. The diagram demands that innovation be thought of in terms of licensing. Without licensing, no innovation, no bridge, no way through the Valley of Death. Is this possible?

I think not. We have many instances of innovation taking place entirely without licensing. Licensing requires ownership or authority, and here, what is meant is patent licensing. The fuller expression of the linear model is not that basic research leads to applied research, which in turn leads to commercial development, but rather that basic research leads to inventions that lead to patents, that spark investor interest in funding more work to turn inventions into commercial products, especially ones that get sold and pay a royalty. That’s largely what “licensing” is about–securing a royalty in exchange for a grant of rights. If no royalty, then no grant. And that’s the flip side of “licensing”: if you won’t pay, then we have the authority to exclude you from practice. We will proactively keep these public inventions on the public side of the diagram. We will not let them enter the Valley, and we will ensure that it will be a Valley of Death.

Licensing is a huge problem. Without patents (and copyrights), there is no licensing.  With standards and platforms, licensing with patents becomes pro-forma. With open source and other approaches to public development, licensing may even be a way to preserve a commons rather than exclude anyone (except those that would disrupt the commons).  Thus, one of the real limitations of this diagram is that it cannot conceive of innovation taking place without ownership, without licensing, without money. Yet we have Steven Johnson’s book showing many ways in which innovation comes about through networked, non-market activity. Not the invention isolated from research and put in a glass jar of patenting, preserved with slurpy attorney juice, but the invention exchanged and pieced together by a number of hands, utterly unlicensed. You know, like how the internet developed, with common standards.

We now can deal with the right side of the diagram, the area labeled “private” and stocked with companies.  According to the diagram, companies do not participate in basic research or “translational research.” They are only targets of patent bombs lobbed into the Valley of Death, and they are what rescue ideas from this Valley, and are, in the end, the producers of innovation. Without commercial products, according to the diagram, all that research is wasted. The rhetoric of the diagram demands that somehow, inventions constrained by patents, relying on licensing, will receive investment and become products sold in the marketplace. This is the measure of the productivity of basic research.

One can see how the White House can push the idea that patents are a measure of basic research productivity. Yet, for a host of reasons, it’s just not this way in practice. The more patents that are filed by universities in an area, the more fragmented that area becomes and the more impossible it is to navigate. Unless the licensing is highly coordinated (standards, pools, commons) or absurdly easy (public licenses, RAND licenses), more patents make licensing and innovation both impossible. Essentially, a count of US patents, beyond a certain threshold, all but assures us that innovation will happen somewhere else. This is what has happened in nanotech and 3d printing, for instance. What else might the government choose to emphasize, promote lots of new patent work around, and thus squash into the administrative sidewalk, where no amount of marketing or licensing, express or template or kindly, is going to save it?

One might imagine that even a cluster of patents held by one university would solve the problem. But if we look at the scores of patents held by Rice University in nanotech, or the string of patents held by MIT in 3d printing, one can see that these don’t consolidate the industry but rather isolate a package of patents (or fields of use) to a single company. That would not be so bad, but for the *licensing*. Such licenses add, as a standard convention, restrictions on the exploitation of the patents. Licensees cannot cross-license the patents, offer no-cost sublicenses, contribute them to a standard, or, worse, let something go into the public domain or a commons.  Licensees cannot permit infringement, waive payment obligations, or offer technology at no charge. All these are things the owner of a patent could do–things a mere licensee cannot. But the sort of licensing that a university wants to guide ideas through the Valley of Death is the sort that demands a commercial product, demands sales, restricts sublicensing, worries over infringement, and forbids actions that a licensee might take to avoid having to pay, or to reduce what ought to be paid. Doing so would be a material breach. Any good course at the disreputable professional organizations devoted to inculcating the linear model in university technology management teaches licensing officers just how to do this.

The presence of a few patents in a broad public domain appears to provide the patent holder with an advantage. A few universities dealing patents will find a way to earn some cash. They do so, however, at the expense of the others, who play the happy patsies, donating ancillary technology that promotes the success of the few bits of proprietary invention that are out there. If, however, everyone is encouraged to play the same game, then the advantage of being the proprietary leader evaporates. Now everyone is sure they are not going to be the happy patsy, so everyone has their patents, on every little bit of whatever might come next. Now the patent is merely a tool that isolates each little bit while providing a cover story about how this little bit will form the basis of a new product or company or industry–not the little bit at the university next door or on the coast.

The problem for the poor companies is how to put this humpty-dumpty of new stuff together again. To license even one patent might take a year. If one needs ten to twenty just to be in the game, then it’s next to impossible. And keep in mind a typical inkjet printer might have more than 50 patents in play. Any kind of technology product of that sort is going to require multiple transactions.  Thus, the impetus for standards and cross-licensing in industry–pick the inventions that create something distinctive on the edge, but cede a lot of the middle to other companies in the same circumstances, and compete on other things, like efficiency, features, availability, and customer service. Universities, however, rarely license patents non-exclusively through public licenses or RAND terms, and rarely participate in standards. Universities don’t need cross-licenses, because they generally don’t manufacture or sell products. In that regard, they are non-operating entities.

Even where a university considers non-exclusive licensing (and some of the big biotech deals that universities did early on, like Cohen-Boyer, were non-exclusive), it demands a royalty, which, apart from upfront fees and milestone payments, means a share of the sales action. And for that there have to be sales, reporting, books available for audit, and penalties for under-reporting, late payment, or other irregularities.

All very fine, in a particular way (it is “best practices” in the disreputable training courses), but the clincher is the problem of the royalty stack. If each university asks for a 2% royalty, and there are ten universities, then the company is paying out 20% of its sales price in royalties. This is no good if the margin on the product is, say, 5%. Every university on its own is being reasonable, but collectively, the deal is beyond stiff–it’s stiflingly, outrageously nuts. Thus, companies ask for an anti-stacking clause. Everyone offering a license has to agree to fit into some maximum total royalty, such as 4%. That might work in theory, except there’s always one university licensing office that thinks what it is providing is better than everyone else’s and therefore should command a premium; otherwise it won’t play. First in or last in, typically, trying to weasel out a deal based on the rest. The company, once it has committed to paying the 4%, doesn’t care, really. It just wants to get on with things. The delays while various universities fuss it out, however, may be sufficient to kill the project.  Worse if the universities propose an “inter-institutional agreement,” where some institutions cannot even agree on the governing law.
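The stacking arithmetic is simple enough to sketch. The figures below are the hypothetical numbers from the paragraph above (2% per university, ten universities, a 5% margin, a 4% anti-stacking cap) applied to an illustrative $100 sales price–not data from any actual deal:

```python
def stacked_royalty(per_licensor_rate: float, num_licensors: int) -> float:
    """Total royalty burden as a fraction of the sales price,
    assuming each licensor's rate applies to the full price."""
    return per_licensor_rate * num_licensors

price = 100.00   # illustrative sales price per unit
margin = 0.05    # 5% margin: $5.00 of profit per unit

# Ten universities, each asking a "reasonable" 2%:
burden = stacked_royalty(0.02, 10)   # 20% of the sales price

print(f"Royalties owed per unit: ${price * burden:.2f}")             # $20.00
print(f"Profit per unit:         ${price * margin:.2f}")             # $5.00
print(f"Net per unit:            ${price * (margin - burden):.2f}")  # -$15.00

# An anti-stacking clause caps the total royalty, e.g. at 4%,
# leaving the ten licensors to divide $4.00 among themselves:
capped = min(burden, 0.04)
print(f"Capped royalties:        ${price * capped:.2f}")             # $4.00
```

The point of the sketch is only that individually reasonable rates sum to a collectively impossible burden: the uncapped stack consumes four times the product’s entire margin.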

After 30 years, universities still have no way to manage a shared, distributed, asynchronous licensing environment. Nothing close to a library or commons or ad hoc standard that would allow companies to come in, sample stuff, mix and match, take one deal for multiple assets, pay one fee, and let the various owners of the assets work out the proportional shares.

We tried to do that once, here in Seattle, but it got shut down before it started by the technology transfer office, devoted to commercialization, which said the whole thing was incomprehensibly stupid. We were working toward something like this with GreenXchange, but that hasn’t found its legs yet, either. The Rosetta Commons is the best model I can think of: there we developed a way for university IP to enter a multi-organizational project, allowing companies to obtain a single license for assets in the commons, for commons work, and, for anything else, to deal with each organization independently for whatever rights it holds.

For industry, the idea that the government would promote lots more patenting at universities and then hope that licensing would save the day is something of a nightmare. Perhaps it would work if universities posted each invention to a standard library from which companies could obtain rights without ever having to negotiate with the university owners. Of course, there are resources like this, such as Intellectual Ventures, which aggregates patents and provides just this sort of service. But Intellectual Ventures is viewed with suspicion by many university licensing offices, even as they too accumulate many patents while lacking even the most minimal capability (or will) to make their patents available non-exclusively in bundles, without a lot of worry about separately valuing each and every one. Funny, that.

We thus come to the intriguing title of the slide, “The Continuum of Innovation.” What is this “continuum”? It would appear to be one that moves between public and private, without room for collaborations, networks, hand-offs, long noses, or commons. Where one might expect the most convergence of public and private interests, we find instead licensing and the Valley of Death occupying the center of attention. In one sense, the “continuum” proposes that after a lot of federal funding, private investment will step up and take over, perhaps when a whiff of profits is in the air, and government money will fall away in proportion to the private investment. Makes no sense whatsoever. Folks don’t shut down their research because a company gets involved. They try to get even more funding! The government doesn’t pull out of basic research because there are applications; it funds even more. The government money chases the trends, the big splashes.

It may be, however, that the point of the diagram, or at least its title, is that there is no strong distinction between public and private. Things mix. Companies want good things in the community, too, and universities can be infected with profit bugs, hoping to find something that can be sold, and therefore first licensed to a friendly capitalist taken with the idea of a limited monopoly and happy to pay for it, if that’s all that it takes. If this is the case, that everything is a mixture of public and private, then we would expect that the middle of the diagram would be a kind of nirvana, a peak of collaboration where the public and private work together, consistently, for extended periods of time, combining funding, complementing resources, and blasting forward to help people live their lives, restore their health, and get new shoes when they are feeling down. Not a Valley of Death, or licensing. But commons, platforms, libraries, standards, sharing, reciprocity, giving back, and helping out. You know, all the fluffy stuff that “commercialization” folks disregard as foolish and impractical, and all but ban by policy and “best practices.”

Here, then, we have a diagram that encapsulates many of the reasons why university technology transfer is in such dire straits, why its aspirations are so at odds with its practices, and why its claimed successes come, largely, despite the processes it has put in place rather than as a result of those processes.  The diagram is an emblem that puts licensing at the center, identifies it as both the essential element and the bottleneck, and equates it with the “Valley of Death,” which it both is, and which it to some degree creates. The diagram’s assertion notwithstanding, there is no “continuum of innovation”, no symmetry, no general movement from basic to “translational” to “licensing.” This is all aspiration that has no sense of practices or the consequences of practices when multiplied across many universities, all working the same model, all prepared to be the beneficiary of a lucrative deal, and each on guard not to be the patsy in the bunch.

Once you know how to read such diagrams, they start to make a lot of sense, as emblems of the problems we face. Getting away from the mindset that would create such a diagram, or hand it to a university chancellor to present in a talk, would be a great first step. I would think this would be a valuable task for the “science of science policy” folks to take on–review each diagram of the “innovation process” produced by university technology transfer folks for whether the diagram has any substance to it. It is not that diagrams have no place in the discussion of innovation and technology transfer–but rather that the diagrams should reflect something substantive, grounded in practice, on which future practice may be based.

