Feynman gave a famous talk on the physics of very small things, arguing there was “plenty of room at the bottom” for research. The same may be true for university research and technology transfer relative to markets, industry, and commercial potential. I will put it in the form of a story.
Sometime in the late 1960s or early 1970s, research policy vocabulary switched from “research intensity” to “high tech research”. University research moved upmarket, to inquiry requiring expensive equipment, using exotic materials, in custom labs. Status was given to supercomputers and supercolliders. Policy aimed to position research upmarket from existing projects, providing a sort of premium science that could be done only with a substantial investment in technology. Military research explored neutron bombs while medicine sought to create artificial hearts (sort of), artificial ears (quite well), and artificial retinas (still working on that one). Engineering pursued nanotechnology and bioengineering. Physics worked on nuclear energy. Premium science, if we might call it that, comes from centers of excellence put together to justify expensive instrumentation, lab facilities, and computation technology. These in turn provide something of a monopoly position for research that can be performed only if you have them, and that in turn gives a national competitive advantage over nations that lack that instrumentation, those lab facilities, that computation technology.
The great challenge is whether the inventions that arise in such national monopolistic labs actually matter to the lines of development taken by industry or community. This is the challenge for technology transfer positioned around premium science. One has to take inventions in high technology and construct a scenario in which these get used. This is the great claim of federal policy. It is a grand hope. It is a worthy hope.
Not only that, these inventions have to get used so that patents on them generate licensing income, and that licensing income then gets used as a proxy for success. In other places, I’ve worked through how easy it is to slip from use of inventions to patent arbitrage and assertion, generating money without doing much of any transfer or seeing research inventions actually used, and still claiming success.
The roadmaps for technology products tend to move upmarket—more advanced features, better cost engineering, premium price points. The move in US research aimed to support leadership positions. Universities put emphasis on fundamental research that was much more tightly bound to the technologies of inquiry. The change might be characterized as the move from the International Geophysical Year in 1957-58, which aimed to get scientists out observing, and the efforts of the 1960s to build the Saturn V rocket, to the emphasis on supercomputers, networks, biotech, and the like. Science transmogrified with the increase in government funding and a shift from international science to national technology development.
University technology transfer comes into this picture in the 1980s, finding its source of inventions in high tech science–mostly biotech–and its job in moving this premium science to premium industry positions. It can do this because a speculative investment bubble is going on in biotech at the same time, and technology transfer planners conflate the bubble with their patent licensing approach. It appears that they can make the approach systematic (for efficiency), apply it to all fields (for scale), implement it in policy (to be fair and consistent), and implement it across all institutions (to make it standard). That’s pretty much what has happened. To buy into it means to accept that the little biotech bubble of 1980 to 1992, with a VC move into biotech from semiconductors and an IPO market, represents an instance of a general case warranting standardization rather than a local case of lucky adaptation.
University technology transfer offices seeking to take premium science into premium market positions with patent rights have had to deal with two key features of this policy of premium science. First, tech transfer must deal with the market leaders who are able to support their own research endeavors: the Microsofts, Ciscos, HPs, Intels, IBMs, and Xeroxes, and also the Johnson & Johnsons, Bristol-Myers Squibbs, and more recently the Amgens and Genentechs. These companies expect collaboration, not trolling. University IP is supposed to be like a power-up, not a tar pit or ghost. University technology transfer models have had a tough time with this interface. They truly don’t want to be a tar pit or ghost, but they also want industry leaders to pay for inventions in premium science as if these inventions are not in an existing roadmap, as if industry wouldn’t have known or gotten there eventually without the university inventions. But this is largely not the case. Most premium science inventions are simply in the halo of upmarket inventions that companies paw through routinely looking for something really distinctive, and on which they cross-license in bundles to focus competitive assets on other things, such as distribution channels, service, brand, and the like. So outside of a local set of biotech relationships, especially in 1980–1992, universities have had a rough time licensing premium science inventions to industry leaders.
Second, technology transfer offices, in part as they find it rough going licensing to industry leaders, have shifted toward startups, especially those with venture capital investors. Some decades ago, venture capital meant mounting an effort to go public and define a new industry or revolutionize an existing one with a roll-up play or a transformative technology. In the past decade, however, things have changed. VC funds now spend more time later in the life of companies, working a new product onto a mainstream technology roadmap, just a few product revision cycles ahead, to be enough of a nuisance or point of leverage that one can get acquired and cash out the investment. It may be that at that point the technology that led the investment is abandoned.
Neither of these situations–dealing premium science to industry leaders, or shopping that science to venture-backed startups–matches up well with the historical position of the university research foundation, which routinely acquired patent rights from independent inventors, as in the days of the growth of television, or the electrical industry, or automobiles. The foundation approach is a legacy treatment of research IP, and it made sense for universities to try to adopt it when Bayh-Dole came into effect.
The great misfortune for everyone is that folks adopted it right when there was a speculative IP bubble in biotech, and they mistook that local circumstance, which ran for just over a decade, for proof of the appropriateness of their operating model. Rather than attribute their licensing success to the matching of a research opportunity to a special state of a special market, the tech transfer officers believed they had found a generally effective strategy and sought to establish the strategy in IP policies and expand operations to include all other areas of research activity.
Thus, university technology transfer offices were remarkably ill-prepared for the software speculative bubble that took place alongside the biotech bubble, the later internet investment bubble from 1995 to 2002, the nanotech bubble, which universities inadvertently helped to wash away before it got established, and the energy and cleantech investment bubbles we are seeing form today. Instead, universities have for the most part stuck with biotech and biomedical, and with their operating model to license to industry leaders or startups. If there is a change at all, it is a move to favor startup efforts over conventional patent licensing, and there to seek out a venture capital model that has been largely inactive in early-stage technology investment for a decade. Open innovation, crowdsourcing, cross-licensing, and standards are not part of the routine operations of university technology transfer offices.
Which is odd, since one would think that universities would have the most to gain from these models, and the least to lose in developing them. We find instead that university technology transfer offices are among the most conservative of all IP practices that touch on research matters. For offices publicly dedicated to innovation, they present as remarkably incapable of it in their own approaches to their work.
We find, then, that upmarket-directed high tech inventions generally wait for industry roadmaps, as a sort of power-up. That is, research inventions do not for the most part start something, but rather are there as needed as work on a roadmap sweeps by.
The challenge in either of these two approaches is that they start from high tech positions—that is, beyond the premium margin positioning of existing industry products. The idea is that the inventions with the greatest commercial potential will be upmarket, will require significant development, and will therefore benefit from a monopoly position afforded by an exclusive patent license. That is, to be successful, not only do high tech inventions have to be on a roadmap or capable of starting a premium roadmap, they also have to work against competitive pressures at the high end. In essence, they aim to compete with the existing best money positions the leading companies have.
The technology transfer rationale is simple: spend resources on the best inventions with the best chance of making the most money through patent licensing, and define that as success.
We might ask: are there other places to look for innovation, besides this premium position? For instance, Clayton Christensen in The Innovator’s Dilemma describes situations in which innovation comes from beneath, from products that perform less well than those at the premium position, but which carry other traits that are desired, and therefore are able to displace the premium products. These innovations are not the best technology with the best rights with the best margins. They do not present as beyond the cutting edge. Rather, they present as good enough, with the prospect of getting better. The netbook is a recent instance. But so, in its way, is Facebook. As technology, there is nothing to it. As a place for research, at the outset, there was none. Facebook did not benefit from government-funded high tech university research. It was low tech, ignored, created by students—dropout undergraduates no less—at the same institution that brought us Microsoft 30 years previously, also started by dropout undergraduates.
Eric von Hippel has set out how innovation in various industries varies in its sources along a value chain. Sometimes innovation happens with suppliers, sometimes with the manufacturer, sometimes with distributors and value-added resellers, and sometimes downstream with users, post acquisition. It may then be striking to see that most university research supported by industry is with manufacturers, even where manufacturers are not, historically, where innovation in those value chains has come about. One might point to Amazon, an online retailer, producing an early e-book reader, the Kindle, which has come to dominate a technology category and has given position to technologies that advance displays suited to reading books. That is, moving to a premium point in technology may come about through routes other than research inventions and their patents.
We can then point to a class of innovation at the low end, not necessarily low technology, but innovation created by others than the leading manufacturers, off of their premium roadmaps, in areas where exclusion by patent plays no particular role in adding value, but where a patent may provide a slight advantage in getting something into play relative to the interests of the leading manufacturers and their roadmaps. In essence, the patent plays a defensive role relative to premium positioning while allowing others following a similar pathway to practice—even the market leaders, if they choose to create products that compete with their own upmarket offerings.
In the low-end, or just-good-enough, area, technology transfer has a decidedly different role. Here, the selection is not for inventions that support patent licensing with commercial potential at premium positions, but rather for inventions capable of working underneath, and in different price models from, the existing high end. In this area, one finds that while the work starts as low tech relative to high-end products, improving performance quickly requires research—often basic research, off the beaten path, presenting challenges every bit as important to advancing the state of the art as those presented in high tech research settings. It’s just that this work is not seen as high tech, and often does not receive nearly the same governmental or industry support.
Take for instance the Open 3d Printing project. It has focused on how to 3d print with simple materials: how to go back in time, picking up on techniques and knowledge that were abandoned years ago–in some cases centuries ago–but that become important once again in the context of 3d printing. Looking for simple materials does not put one on the ground for patentable inventions in new compositions of matter. It is decidedly low tech work, as far as premium science goes. And yet it approaches scientific and engineering questions that are genuine, challenging, and worthy of research intensity. It is just that no one in technology transfer can really afford to care, because the project does not generate patentable inventions suggesting a premium upmarket monopoly. Instead, the market appears to be selling design files for interlocking love hearts for Valentine’s Day, something that doesn’t look like much of a commercial licensing opportunity for patent officers.
Yet Open 3d Printing is positioned in the midst of a renaissance of local fabrication, akin to what happened when electricity changed the economy of scale for large factories, and when desktop computers allowed local software creation without dependency on mainframe computers. The innovation happening in the 3d printing practice community does not show up as a major market, and does not show up as premium science, and yet it appears to have tremendous potential to transform community, industry, and research roadmaps.
Can 3d printing allow us to print houses? Can we use 3d imaging to extract and print a digitally accurate reproduction of a fossil or archeological artifact without ever digging the thing out of the rock? Can we print multi-layered materials like nacre?
The thought, then, can be summarized: there need to be technology transfer services operating at the low end, with the just-good-enough-to-better efforts, and not only at or beyond the high end of existing markets. We need technology transfer where there are no markets at present, and may not be markets in the future. How patents and licensing and the like play out at the low end would appear to be remarkably different from their high-end counterparts. It is a primary opportunity for universities to occupy this arena with policies and resources. If they don’t, then federal policy needs to be modified to give incentives to individuals, foundations, and companies to be active here. It just may be that the low end is a better driver of innovation, economic development, and new jobs than premium science and upscale, high margin, monopolistic products.
There is room at the bottom. That’s where a lot of the new and interesting initiatives in community live. That’s where a kind of expert inquiry is productive–much more like the inquiry of agricultural extension or research intensity than the inquiry of high tech premium research for premium market positions. This sort of engagement, a research-with-others that is both collaborative and competitive, is an important contributor to an overall policy of university innovation. In the room at the bottom, the university as corporation is not the start of each new thing, nor is an invention the key marker of opportunity, nor is the patent right the motivation for investment. The inventions are still there, the patents may be obtained, but the practices of innovation around them are dramatically different from those of the standard model of university licensing, the linear model of research to product, and the idea that science-based invention is the necessary start of community innovation.
One might say the room at the bottom is where all the objectives of Bayh-Dole–and its greatest potential–are met, not just the narrow version of commercialization of high-end products for lots of patent licensing income, which rarely happens.
Your analysis is an interesting way to think beyond the current “one size fits all” model of university tech transfer. It’s true that the rest of the world is changing its approach to IP management, licensing, and (not) patenting.
I spent the day at the first annual Open Hardware Summit, and von Hippel was here, as was Chris Anderson (of Wired magazine and Long Tail fame). Open source software changed the industry, and now hardware is next; open source hardware is basically telling in detail how you made what you made, from circuits to machine blueprints. The idea is that an inventor will open source his or her product blueprints, plus any accompanying software info.
An intriguing suggestion made by a conference speaker was to create a patent commons where blueprints of objects would be available and downloadable via a pre-negotiated, click-through standard license. In the pre-negotiated license, the inventor would have already agreed to terms around 1) field of use, 2) licensing payments, and 3) geography.
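Just to make the idea concrete, a machine-readable version of such a pre-negotiated license might look something like the sketch below. It is only an illustration of the three categories of terms mentioned above; the class name, fields, and example values are hypothetical, not anything presented at the summit.

```python
# Hypothetical sketch of a pre-negotiated, click-through license record
# for a patent/blueprint commons. Names and fields are illustrative only.
from dataclasses import dataclass
from typing import List

@dataclass
class ClickThroughLicense:
    blueprint_id: str            # identifier for the open hardware design
    fields_of_use: List[str]     # 1) fields of use the inventor permits
    royalty_percent: float       # 2) licensing payment terms (e.g., % of net sales)
    territories: List[str]       # 3) geographies where the license applies

    def covers(self, use: str, territory: str) -> bool:
        """Check whether a proposed use in a territory falls within the pre-agreed terms."""
        return use in self.fields_of_use and territory in self.territories

# A downloader "accepts" these terms at download time, with no further negotiation.
terms = ClickThroughLicense(
    blueprint_id="open-extruder-v2",
    fields_of_use=["education", "hobbyist fabrication"],
    royalty_percent=0.0,
    territories=["US", "EU"],
)
print(terms.covers("education", "US"))  # True: within the pre-agreed terms
```

The point of the sketch is simply that once the terms are standardized and pre-agreed, checking whether a given use is licensed becomes a lookup rather than a negotiation.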
Open source hardware is a fledgling movement and won’t shake up the patent system any time soon. However, whether or not you agree with the concept, there’s a lot to be learned from just thinking it through. Change is hard inside university business units; nobody goes out of business if customers avoid you (there’s no bottom line), and stasis, not change, is a lot simpler to manage. I wish university tech transfer practitioners could come out and see some of the emerging models of licensing and product development. It’s great stuff, and thinking outside your current paradigm is always a good way to check the value of your current strategy.