I answered a Quora question. Here it is.
Are there any freely available data resources to track technology transfer activity between universities and commercial businesses in the US?
Here’s my answer.
No. And that’s quite amazing.
Most universities publish an annual report on their technology transfer activities, but those reports don’t list the inventions reported and, for each, what’s been patented, what’s been licensed and with what scope (assigned, exclusive, non-exclusive), and what has happened after the license (nothing, research use, internal company use, development with more patented inventions, commercial products, etc.).
Most universities also publish on the web a “technologies available for licensing” list, but they don’t list technologies no longer available for licensing—because they didn’t obtain patents, or the patents have expired, or the inventions have been marked “inactive,” or the inventions have been licensed exclusively.
The Association of University Technology Managers does an annual survey of licensing activity. That sounds nice, but the survey collects mostly useless information. Inventions are conflated with non-inventions. Patented inventions are conflated with inventions and non-inventions that aren’t patented. Nothing is sorted or traced through, so what’s reported as an invention in a given year is not related to what’s patented in that year or what’s licensed. One just counts: we received x reports of invention, filed y patents (across our whole portfolio), obtained z patents (again, across our whole portfolio), did zz licenses (across … ). Most of the transfer activity involves inventions (and other stuff) reported in prior years. One very active piece of software, for instance, could be licensed scores of times per year for a decade. And a single reported invention may result in four or five issued patents over a number of years, what with divisionals, continuations, and continuations-in-part.
The AUTM survey, then, gives an indication of bustle in a year but provides no indication of the structure of any university’s portfolio. Not % of inventions claimed. Not % of inventions claimed that then have patent applications filed. Not % claimed that are licensed. And especially not % licensed exclusively that result in commercial products available to the public on reasonable terms. Not $ spent that has been written off. Not number of inventions litigated for infringement, breach of contract, and the like. Not median time to exclusive license. Not % of inventions offered first for a non-exclusive license on FRAND terms. Not % of inventions licensed pro forma as a consequence of a sponsored research agreement or consortium. One doesn’t see how many inventions are under *active* licenses that are reporting sales, and how many under licenses not reporting sales or no longer reporting sales. One does not see how many licenses have been terminated or why (disinterest, breach, limited term to start with, company goes out of business, and the like).
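None of these metrics is hard to compute; the problem is that the survey never ties events back to individual inventions. Here is a minimal sketch, in Python, of what traced reporting would look like, assuming a hypothetical per-invention record (the field names are invented for illustration; this is not AUTM’s schema):

```python
# Hypothetical per-invention records: each reported invention carries its
# own history, so portfolio structure can be computed, not just annual bustle.
inventions = [
    {"id": "INV-001", "claimed": True,  "patent_filed": True,
     "licensed": "exclusive",     "product_on_market": True},
    {"id": "INV-002", "claimed": True,  "patent_filed": True,
     "licensed": "non-exclusive", "product_on_market": False},
    {"id": "INV-003", "claimed": True,  "patent_filed": False,
     "licensed": None,            "product_on_market": False},
    {"id": "INV-004", "claimed": False, "patent_filed": False,
     "licensed": None,            "product_on_market": False},
]

def pct(part, whole):
    return 100.0 * part / whole if whole else 0.0

total = len(inventions)
claimed = [i for i in inventions if i["claimed"]]
filed = [i for i in claimed if i["patent_filed"]]
licensed = [i for i in claimed if i["licensed"]]
exclusive_with_product = [
    i for i in claimed
    if i["licensed"] == "exclusive" and i["product_on_market"]
]

print(f"% of inventions claimed:            {pct(len(claimed), total):.0f}%")
print(f"% of claimed with patents filed:    {pct(len(filed), len(claimed)):.0f}%")
print(f"% of claimed that are licensed:     {pct(len(licensed), len(claimed)):.0f}%")
print(f"% licensed exclusively -> products: {pct(len(exclusive_with_product), len(claimed)):.0f}%")
```

The point is only that every metric above falls out of a trace from invention to outcome; aggregate annual counts cannot produce any of them.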
The AUTM survey is limited in other ways. Universities self-report, and nothing is audited for validity. AUTM encourages universities to estimate where they don’t keep good records. University faculty often collaborate, so inventions may be shared across two or more institutions. Each institution reports the invention as its own, and the same goes for patents and licenses, so there’s double and triple counting of some activity. One cannot add up the numbers of disclosed inventions reported by various universities, for instance, and get a true count. It will be high—by 10 to 20%, I’d guess.
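To see why summed totals inflate, consider a toy deduplication in Python (invented identifiers and numbers; in reality there are no shared invention identifiers across institutions, which is exactly the problem):

```python
# Each university self-reports its inventions; a jointly made invention
# shows up in every collaborator's count.
reports = {
    "Univ A": {"inv-1", "inv-2", "inv-3", "inv-4", "inv-5"},
    "Univ B": {"inv-4", "inv-5", "inv-6", "inv-7"},
    "Univ C": {"inv-5", "inv-8"},
}

naive_total = sum(len(invs) for invs in reports.values())  # 11
true_total = len(set.union(*reports.values()))             # 8 distinct inventions

overcount = 100.0 * (naive_total - true_total) / true_total
print(f"Naive sum: {naive_total}, deduplicated: {true_total}, "
      f"overcount: {overcount:.0f}%")  # 38% in this toy case
```

Without common identifiers, no one outside the universities can do this deduplication, which is why the 10 to 20% figure stays a guess.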
And other than a public summary of largely meaningless information (because there’s no context), the university-specific data is behind a paywall. So, not free, even if not all that helpful.
There have been efforts to create marketplaces for licensing. An early effort, which I had some involvement with, was Flintbox. It was to some extent based on the practice we had developed at the University of Washington, and for a time it was popular in Canada. Universities were encouraged to post their inventions there for ease of licensing, but it turned into yet another sprawling “technologies available for licensing” list rather than a quick way to obtain a license via a standard non-exclusive transaction online. iBridge was another effort, similar to Flintbox, with a similar outcome.
One problem in all this is that the Bayh-Dole Act, which controls much U.S. federal agency patent work, keeps secret both university reports of technology transfer activity involving inventions made in federal work and the terms of those deals. Public universities also resist public records requests for such information. A usual reason given for withholding such information is that it would “chill” the universities’ efforts to license “technology” exclusively if they had to report the terms of those licenses and whether anything had come of them. The universities are happy to tell “success” stories—and sometimes those stories are even true—but telling a handful of stories a year when a university is seeing 100 to 200 invention reports leads to confirmation bias rather than a clear understanding of what is going on.
And all of this deals only with the formal, institutional side of “technology transfer.” Much of what really goes on happens when students graduate or faculty or staff leave for industry jobs, when faculty consult, or when faculty, students, or staff start their own companies and do their inventing there rather than at the university. Technology might also be “transferred” by means of publications, conference presentations, or crazy bouts of eating and drinking during or after a conference. We had one case where effective transfer was catalyzed by a love affair. That didn’t make it into the formal reports, however. There are also instances of refusal to transfer due to rivalries, personal politics, and beliefs (one deal failed because the candidate company’s lead programmer was said to be a *creationist*).
In formal accounts of university technology transfer, taking out a patent position is identified as the crucial step. But in practice, taking that patent position works against technology transfer in all sorts of ways. The patent by default excludes all other practice—whether research use, internal use, or use for development. Early adopters and technologists cannot then just play with the invention without running afoul of infringement claims. If a university does not immediately declare a given invention available for license non-exclusively on *very* reasonable terms (free for all uses except, say, commercial sale, and even there for less than it costs to ask a lawyer to turn sideways and cough), then anyone who does not need or expect to get an exclusive license has reasons to avoid the invention—to design around it, to file patent applications to block its development, to exclude it from standards, to *not* adopt it.
Things get much worse if a number of universities have researchers working in the same area, which is the case when federal agencies decide to spread work around geographically. Then a bunch of inventions get made, each gets claimed by its host university, and each then goes out for exclusive licensing (very few universities start with a default of not claiming ownership, or of licensing non-exclusively when they do). Now a common area of technology development has been fragmented into scores of IP globules. If even five or ten of those globules are needed to do anything meaningful, it just won’t happen. An HP inkjet printer—remember those?—involved 50 patents. A nanotech company I met with once bragged that it had managed to get 20 licenses from universities to get out in front of a new technology. That was their competitive advantage—they didn’t think any other company could possibly repeat the feat. Few companies can roll through twenty universities buying up exclusive licenses in order to gain freedom to practice. Even fewer of those that could care to do so. It’s easier, and cheaper, to infringe and wait for the lawsuits and let a judge set the royalty, if it comes to that, after challenging the validity of the patents, making deans howl as they lose company affiliations and donations of equipment, and the like.
University default exclusive licensing is the single greatest bane of federal support for university research. Look at graphene or carbon nanotubes. University patent claims arguably fragmented these emergent technologies and destroyed early adoption for two decades while AUTM reported lots of patents and licenses. All that activity was mostly wasted effort where it wasn’t actively counterproductive: it deepened the fragmentation of the cumulative technology while rewarding a few speculators with venture funding and rewarding universities with licensing income (mostly not from sales, but rather from holding an early position in a patent pyramid scheme).
If one wants to get at the technology transfer from a given university, its patent portfolio is generally a poor place to look.
Even the money part of technology transfer is a problem. While universities often report how much they made in licensing for a given year, they rarely do so for each invention, each year. The University of California system annual report is distinctive in that it has for many years reported the top 25 income-earning inventions for the year. From that one can gain some insight into the aging of the portfolio and the fall-off from the highest-paying deals to lesser ones. Most universities, however, just report a big unaudited number, which could include reimbursements for legal costs, realized equity from startups, legal settlements, and a host of licensing fees only an IP bureaucrat could love—and earned royalties from the sale of products are generally buried under that pile of financial rubble.
Worse, a single high-earning invention might float a portfolio for two decades. A university might report, over 20 years, a cumulative 3,000 inventions, 600 patents, 1,000 licenses, and $200M in income, but 95% of the revenue might come from a single invention, or five. You would never know it from an annual report of 150 inventions, 30 patents, 50 licenses, and $10M of licensing income. You might fool yourself into believing that those 150 inventions and 30 patents and 50 licenses must have something to do with that $10M of income. Really, often, not. It goes the other way: a university with $10M in licensing income then has a budget to spend on more patenting.
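A toy calculation makes the point (figures invented to mirror the example above, not any university’s actual books):

```python
# Twenty years of a hypothetical portfolio: one blockbuster invention,
# and everything else pooled. The annual aggregate looks steady; the
# cumulative split tells the real story.
years = 20
blockbuster_per_year = 9.5e6  # one invention earning ~$9.5M/year
rest_per_year = 0.5e6         # the other ~149 inventions per year, combined

cum_total = years * (blockbuster_per_year + rest_per_year)  # $200M
cum_blockbuster = years * blockbuster_per_year              # $190M

share = 100.0 * cum_blockbuster / cum_total
print(f"Cumulative income: ${cum_total / 1e6:.0f}M over {years} years")
print(f"From one invention: ${cum_blockbuster / 1e6:.0f}M ({share:.0f}%)")
print(f"What the annual report shows: "
      f"${(blockbuster_per_year + rest_per_year) / 1e6:.0f}M/year")
```

The annual number is identical whether the income is spread across fifty licenses or concentrated in one; only per-invention reporting, as in the UC top-25 list, would reveal the difference.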
New technology roils up. When conditions are ripe, any number of people in disparate places can come up with similar things, like apprentice magicians who have seen a trick performed and can work out their own ways of doing it. It’s not so much that the technology is “transferred” as it is that it springs up multiple places from common ground.
Other times, something that gets adopted one place gets adopted by others, too, and that forms a critical mass of people working with that technology. That critical mass is often *the* asset that matters—then there are network externalities, then skilled workers can move from one company to another, then there are multiple sources for new products, so conservative buyers are more likely to jump in. Then there are lots of reasons to invest, to develop, to adapt, to vary some more. There’s a time to collaborate, a time to compete, and even a time to defect on collaborators. It’s just that university administrators running patents are among the least likely to know when those times are.
Formal “systems” of technology transfer imagined and constructed by bureaucrats and represented by wheel diagrams with arrows and other fantasy depictions of process rarely produce anything of value. Where technology is “transferred,” it is often despite the presence of “licensing professionals” and their patents. That is, the “success” claimed by the patent folks is actually *less of a success* than it would have been had those licensing professionals not forced the technology to follow their prescribed system of invention reporting, patenting, and (generally) default attempts at exclusive licensing.
Bonus section for RE readers.
The Milken Institute publishes a ranking of the “best” universities for technology transfer. That ranking relies on the AUTM survey (and the University of California annual report, to break out the UC campuses). The ranking uses patents, licenses, income, and startups. Missing: anything having to do with actually achieving the use of “transferred” technology. In short, patents remove research findings from the public domain and from rapid uptake for further research, especially in industry labs. Licenses, if exclusive, deepen the isolation of an invention from the rest of the research, professional (in the case of methods, especially), and industrial communities. So patents and exclusive licenses are a measure of isolation, not innovation. Non-exclusive licenses, by contrast, can be a measure of “innovation,” but as far as adoption of something new goes, a license transaction (other than a public, royalty-free license) introduces a delay and an expense: still barriers to adoption rather than incentives. A university’s patent and license program in general delays, not accelerates, adoption of new things. Think of a license as an attempt to undo some of the harm caused by taking a patent position in the first place.

As for income: that has to do with a few companies willing to pay to get exclusive patent positions. It’s a form of speculation, if you will. If the company initially willing to pay exists to get purchased by new investors who aim to get the company purchased by some big company, then it’s still a pyramid scheme, a kind of futures market on the prospect of innovation rather than innovation itself.
Sure, if there are royalty-bearing sales of commercial product, then one might jump to the idea that the income tracks the importance of the associated patents, and from a money perspective that may well be true. But forcing someone to pay does not track innovation, other than showing that the patent was broad enough that the licensee and others could not find a viable way to design around it. A royalty-free exclusive license (an assignment, if you will) might be a much more effective incentive for a company to adopt something new, and a royalty-free non-exclusive license offered immediately might prevent damage otherwise caused by patenting. Thus, while income shows the financial importance of a patent, it does not show that the university’s role is one of innovator. Rather, the university’s role is that of financial parasite. Some universities use their technology transfer programs to be really good financial parasites. A bureaucrat’s thumb in every new pie, “for the public good.” Still, that doesn’t make a university “more innovative” for working this way, all sticky-thumbed.
Finally, Milken uses university-reported startups. That’s all well and good. But many university startups are paper companies: a paper incorporation of what is otherwise a university laboratory activity, now characterized as development rather than basic research. These startups are often not at arm’s length: the university takes an equity position, university personnel serve in management positions, the university approves the conflicts of interest, and the university in effect licenses to itself. It’s difficult to see how such a startup represents “technology transfer.” It’s an administrative transfer to create a bean to serve a ranking.
The University of Utah and the University of Washington took paper startups to a new level of dishonesty. Utah claimed to start more than 20 companies a year; most were shell companies with a university address, no employees, and no actual operations. They were just incorporation paperwork and a university-supplied website. A subsequent audit found no economic impact. In effect, the university used the startups to co-opt private, state, and federal funding that would otherwise have gone to small companies and entrepreneurial startups.
The University of Washington did Utah one better and just lied about its startups in its AUTM reports. While it claimed to have doubled its startup rate, it had in fact doubled its expenditures and halved its startup counts. To make up for this problem, UW counted all sorts of things as startups: companies that were started years earlier were now startups, companies that were started elsewhere and showed up for a license were now UW startups, and even companies that hadn’t yet become companies got counted. It was a sham, but it got UW boosted to the top of uncritical, brain-fuzzed rankings. Since Milken uses startups as a key measure of technology transfer, is it any wonder that Utah and Washington show up high in its rankings? But is sucking all the investment air from a region really a measure of a university’s technology transfer practice?
But hey, the Milken rankings are free, and empty.
More sophisticated but just as stupid is the Reuters ranking of “innovative” universities. Reuters uses Clarivate’s patent and citation databases (Clarivate acquired the databases from Thomson Reuters). The Reuters rankings are mostly bunk, too, when it comes to “innovation”: they don’t track anyone doing something new and others then adopting it; they just show off the use of the databases. The number of patents, or the percentage of patent applications that issue as patents, has nothing directly to do with innovation. While a patent marks something new, useful, and non-obvious, it does not mark innovation. A patent may just as easily mark something excluded from adoption, and so be anti-innovation, with new things then flowing around it, leaving it and its trail of patent toys irrelevant. Academic articles with “commercial entity” co-authors also hardly get at innovation; such articles don’t show that the university is “innovative” or transferring its research outputs for use. But it all sounds good if you don’t think about it much at all.
It’s true that a large number of patents citing a given patent indicates that the cited patent has an important position. But patents cite another patent to show *they are not that claimed invention* and *are not obvious in light of that claimed invention*. Patent citations then might show importance, but the citing patents are working away from the cited patent, not necessarily building on it. There’s no way to tell by merely counting citations. Citation counts can also be run up by one company filing scores of patent applications and extending those applications with divisionals, continuations, and continuations-in-part. Each cites the given patent, and the citation count goes up. The citation count is a measure of the extent of a response to a given patent, but it is not a measure of the “innovation” represented by that patent. It could be that all the citation activity shows the industry bypassing that patent and its white-knuckled university administration.
The Reuters rankings are also free, though bogus.