The Kauffman Foundation (reminder: it co-funds RTEI but has nothing to do with this blog or my opinions) suggested in a recent proposal that university licensing practice was “sub-optimal.” This has riled up the lost, territorial alley dog that AUTM has become. So in a few days LES will host a debate on the question.
I’m going to take it another way. The present approach universities are taking to the research enterprise *is optimal*. That is, it cannot get any better than it is doing now, which is chronically, intractably stuck with some really lousy fixations, management, and metrics. It has reached its peak. It has defined the limits of the model it has standardized on. It has spun all the spin it has got going for it. It has fired all of its guns at once and exploded into space. It is as good as it is going to get. (And that, of course, is the position one has to take to say that the model is *not sub-optimal*, but we don’t ask AUTM to be logical on this stuff, because, well, politics isn’t logical, and this is politics.)
Top performers? Take Stanford. The numbers: 6400 inventions in 36 years, only 20% (1280) licensed. Of those, 90% (about 1150) recovered patent costs, 18% (234) earned $100K+ to <$1m cumulative (less than $50K a year–license maintenance fees, not royalties), and 4% (53) have earned $1m+ cumulative. That’s 4.5% (234 + 53) of disclosed inventions that have generated enough income to be noticeable as possibly in commercial use under license. Really only 53 of 6400–fewer than 1%! (Source, p. 22). One or two a decade. What the program has done is remarkable, optimal, totally respectable. I have nothing but good things to say about Stanford’s program. It has been a leader, the people are marvelous and committed. It is a heck of a program. I say, if you are in the heart of a major industrial cluster, riding the development of a world-class venture community, with at least one other world-class university within 50 miles and a great state university like SJSU pumping out EEs faster than a BP oil well, then you can do no better than to study Stanford, learn from Stanford, and build a similar practice. Which is pretty much what MIT has done. But if you are in Rolla or Pullman or Alfred, you might have to do things differently–different operating model, different goals, different strategies. There’s no supporting organization to help with that.
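For anyone who wants to check the arithmetic, here is a minimal back-of-the-envelope sketch using the counts as cited above (the rounded figures from the source, not an official Stanford dataset):

```python
# Back-of-the-envelope check of the Stanford figures cited above
# (6400 disclosures over 36 years; counts as reported, rounded).
disclosures = 6400
licensed = int(disclosures * 0.20)    # ~1280 licensed at all
cost_recovery = int(licensed * 0.90)  # ~1150 recovered patent costs
mid_earners = 234                     # $100K to <$1m cumulative
big_earners = 53                      # $1m+ cumulative

noticeable = mid_earners + big_earners
print(f"Noticeable income: {noticeable} of {disclosures} "
      f"({noticeable / disclosures:.1%})")   # ~4.5%
print(f"Significant income: {big_earners} of {disclosures} "
      f"({big_earners / disclosures:.1%})")  # under 1%
```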
Speaking of MIT, the PTO as of yesterday lists 3433 US patents issued to MIT. How many of these have been licensed so that there is commercial product on the market? Bayh-Dole requires reports of utilization of inventions but exempts those reports from FOIA disclosure. Does MIT publish utilization reports for its subject inventions? Does any university? No. We have been working on this problem at the Center for Nanotechnology in Society, an NSF-funded center at UC Santa Barbara. We cannot validate claims to licensing efficiency because we cannot get at the data that would establish those claims. There’s no way to establish whether any particular program is doing well across all the inventions to which it elects to retain title. The information is withheld. There is no way to know. Luckily, the debate about optimality doesn’t depend entirely on information of this sort.
The PTO reports that the University of California holds about 7000 US patents. In the UC system, 25 inventions account for about 75% of the currently reported income of $99m, and a little less than two-thirds of that revenue comes from patents set to expire in the next three years. These big numbers, however, have next to nothing to do with the effectiveness of the licensing programs in placing research inventions for practical application, commercialization, public use, anything. All they do is swamp out an examination of the effectiveness of the entire portfolio of held patent rights.
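To make the swamping concrete, here is an illustrative sketch using the rounded figures above (the post’s approximations, not an audited UC dataset):

```python
# Illustration of how a few big earners swamp portfolio-level metrics,
# using the rounded UC figures cited above (illustrative only).
total_income = 99_000_000   # currently reported licensing income
top_25_share = 0.75         # 25 inventions account for ~75% of it
us_patents = 7000           # approximate UC US patent holdings per the PTO

top_25_income = total_income * top_25_share
everything_else = total_income - top_25_income
print(f"Top 25 inventions: ${top_25_income:,.0f} "
      f"({25 / us_patents:.2%} of the patent portfolio)")
print(f"Remaining ~{us_patents - 25:,} patents: ${everything_else:,.0f} combined")
```

An aggregate figure like $99m says almost nothing about what happened to the other roughly 6,975 patents, which is the point.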
Let’s be straight: Bayh-Dole is implemented *for each research award* by the reapplication of the standard patent clause as a federal contract condition for that award, built into the framework of federal research policy for universities in OMB Circular A-110 (now, after a few moves, at 2 CFR 200). The mandate is to encourage the development of the subject inventions for which retention of title is elected *for that award*. It is not a law to build a portfolio that makes money on a power-law distribution of a few winners and a long tail of losers dragging in the dirt–inventions still behind patent walls, not being used, contrary to Bayh-Dole’s mandate with regard to how the patent system is to be used.
AUTM wants folks to look at the winners and imagine that everyone could be a winner if only: faculty were trained to be drones, not independent thinkers (ethics rules); clueless, imprudent people would throw lots of money at whatever the administrators impounded (gap funds); companies would transform themselves from self-interested actors into fawning money sprinklers (licensing impasses); and universities would put more money into tech transfer programs (chronically underfunded–that part is true, but it turns on what to fund). It’s the optimal spin this model can generate. It represents the best minds AUTM can throw at the problem.
There is no getting around the problem, however: Bayh-Dole mandates an agent status for each university decision to retain title. That agent is to stand in for the *government’s interest* as laid out in Bayh-Dole, not to stand in for the *university administration’s interest* in having patent rights to make money with. The proper question, for each award, each instantiation of the standard patent clause, is: what did you do with the inventions reported under this funding agreement? It has nothing to do with “Oh, look–we have been making something like $40m in licensing revenue on an invention patented in 1996.” That’s very nice for the program–especially in 1996. But that income says nothing about the optimality of the program in 2010, and nothing about the disposition of any other inventions made under other funding agreements.
The disingenuous move by university TLOs is to state their program income as if it applied to their entire program. They want to leave an impression of success, of diligence, of optimism. Nothing wrong with those things. But the impressions don’t square with the metrics. Anyone within a program, one of the inventors in the very, very long tail of inventions that never get licensed, or of the few that get licensed only to recover patent costs, doesn’t necessarily see the reported metrics as impressive. They are just as likely to see these metrics as hypocritical, misleading, false. The glossier the annual report, the worse it gets.
Instead, do this (a rough sketch of what such a per-award report might look like follows the list):
1) List the federal research awards. For each award, list the inventions reported. That gives a baseline of inventions reported per award, and from there one can ask what kinds of awards, in what areas, tend to be invention-productive. No one has this information after 30 years.
2) For each of the reported inventions, identify those for which title was elected. Identify who took assignment (university, foundation, other). That would give a measure of the selectivity of the university’s participation. Identify licenses granted, with dates. This would give some sense of the time from invention report to license transaction.
3) Identify whether there has been practical application or a first commercial sale. These are key objectives of Bayh-Dole, so call them out.
4) List license income prior to first commercial sale. List expenses prior to first commercial sale. List royalty income following first commercial sale. List expenses following first commercial sale. This gives a measure of direct program costs and recoveries per Bayh-Dole’s requirements, plus an accounting of funds remaining after incidental expenses.
5) Account for the funds remaining from each licensing activity beyond its direct costs. These funds can be used to support the licensing program generally, as incidental indirect costs, and to pay inventors a share of royalties (which can be up to all of the remaining funds–fancy that for getting inventor participation). Beyond that, account for how remaining funds are used for scientific research or education. Show it rather than assert it. Since the whole squabble over money has to do with the amounts beyond what’s needed to run the licensing program, what is all the fighting really about? Slush funds for administrators? A new building (is that really scientific research or education?)? Just what? If there’s nothing compelling in the use of remaining funds, then the model itself is optimized for creating *utterly uncompelling resources* at great pain to university-industry collaboration.
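Here is the promised sketch of what a per-award utilization record might look like if the five items above were actually tracked. The field names are hypothetical and purely illustrative; nothing here is an official Bayh-Dole or iEdison schema.

```python
# Hypothetical per-award utilization record, following the five items above.
# Field names are illustrative only; this is not an official reporting schema.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class InventionRecord:
    invention_id: str
    title_elected: bool                  # item 2: did the university elect title?
    assignee: str                        # university, foundation, other
    licenses: List[Tuple[str, str]] = field(default_factory=list)  # (licensee, date)
    practical_application: bool = False  # item 3
    first_commercial_sale: str = ""      # item 3: date, if any
    income_pre_sale: float = 0.0         # item 4
    expenses_pre_sale: float = 0.0
    royalties_post_sale: float = 0.0
    expenses_post_sale: float = 0.0
    remaining_funds_use: str = ""        # item 5: what the surplus actually supported

@dataclass
class AwardReport:
    award_id: str                        # item 1: one record per federal award
    agency: str
    inventions: List[InventionRecord] = field(default_factory=list)

def portfolio_summary(awards: List[AwardReport]):
    """Count outcomes per award, not aggregate income: how many awards produced
    inventions the university claimed, and how many of those reached practical
    application or a first commercial sale."""
    claimed = [a for a in awards
               if any(i.title_elected for i in a.inventions)]
    applied = [a for a in claimed
               if any(i.practical_application or i.first_commercial_sale
                      for i in a.inventions)]
    return len(awards), len(claimed), len(applied)
```

The point of the sketch is the unit of accounting: the award, not the office’s annual totals.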
Do this for the whole portfolio. It’s not that big a deal. Make it public. Now let’s talk about optimality. Not aggregate income. Not public spin. How do the research teams involved, reporting inventions funded by the government, for which title is retained by the university, meet Bayh-Dole objectives? Show that the outcome is optimal for each award. Now sum over all those optimal outcomes. The metrics are not the number of TLO offices, or the number of inventions reported, or applications filed, or patents issued, or licenses granted, or money made, or even products being sold. The metrics are the number of awards reporting inventions claimed by the university that used the patent system to develop university-industry relationships, help small companies, create or retain US manufacturing jobs, lower costs of administration, protect the public from non-use and misuse, and promote competition while protecting future research and discovery. I don’t see in this list much of anything that AUTM reports. After 30 years [update: now, over 40 years], AUTM is not reporting *any* Bayh-Dole objectives. What AUTM reports is an estimate of the workload of an office in a given year.
Clearly, for the universities, Bayh-Dole is very important in getting them administrative control over inventions, but it is not worth a rat’s ass in documenting what universities actually do to meet the law’s objectives. Instead, they substitute their own metrics, and say it’s optimal. Boeufmerde!
Until universities start reporting their activity honestly and plainly, there’s no point in debating the optimality of university TLO practice. It’s as good as it’s going to get. There’s nothing better to come. Fix it, as in a cat: keep it from making more of itself. Move on.