Let’s follow up on the fact that there’s no publicly available, free data source for tracking university-to-industry technology transfer. There’s no paid data source that tracks such transfer, either. You would think there would be.
To get at metrics, let’s distinguish three separate issues: management metrics, public metrics, and federal policy metrics.
First, there are the metrics one uses to manage a university technology licensing operation. For that, one starts with the approach one is going to take. Open innovation follows different parameters than relying on patent monopolies does, and patent monopolies (with default exclusive licensing) behave differently from pushing out startups for economic development. And for all that, focusing on patentable inventions is a different dynamic from focusing on copyrights, data, and materials, and focusing on research services differs from focusing on proprietary positions. Add to all that: running a compulsory participation program (the university owns everything, even what it does not know it owns and even what it is wrong for it to claim) differs from running a voluntary program (present to us stuff you think it would be helpful for us to support and we will consider it).
In short, if you don’t know what you are doing and cannot decide among these approaches, then it makes perfect sense to report a set of abstractions like inventions reported (inventions and non-inventions alike), patent applications filed (of all sorts: count provisionals, then count full utility filings, and then count each divisional, continuation, continuation-in-part, reissue, PCT, and national phase filing as an application), patents issued (in any country), total licenses (anything over $1,000 a pop, say), and income (regardless of basis: patenting reimbursements, realized equity, license issue fees, license maintenance fees, license milestone fees, settlements, judgments, and the like). But these metrics are useless for management. They are more like bragging metrics, or metrics for people who don’t know what to look for, or metrics to make a case that a licensing office needs more funding (“look at all that potential!”).
Regardless of the focus, a management metric looks at budgets, activity, priorities, and outcomes of activity. Spending time to get a disclosure? The disclosure had better be something that your office can do something with, or you are wasting your time. Spending money on patenting? You had better get patents with solid claims in areas where taking out an exclusive position matters. If getting to use will take more than twenty years, forget it. If the use will be obsolete before the patent issues in three years, forget it. If the claims are so weak that anyone can design around them, forget it. If the claims offer no obvious way to determine infringement, forget it. If thirty other things you don’t control also have to come true, and most of them are hung up in others’ proprietary rights or require miracles, think hard about what your tiny white-knuckled grip on a patent is going to contribute. You may attract speculators for a troll war, but you won’t transfer technology, except to speculators. “Let’s blow $15,000 on a patent just in case” is not a sound university IP management decision. Working up materials to publicize an invention? You had better get a line of licensees. Spending time on technology-available-for-licensing write-ups is wasted time if no one shows up. Signing deals? They had better result in use of the invention, good press, and more industry and investor interest in working with your office.
Most of all, what senior university administrators always want to see is the money. How much did we get back for what we spent on you, licensing office? If you can’t show senior administrators money, and lots of it, then your university licensing office ass is in big trouble. These days senior university administrators seem fixated on the idea that IP licensing does make money, so if your office isn’t making money, the director gets fired, the office gets reorganized around a new happy theme, gets a new name to go along with the new theme, and a new director gets hired, typically one promising to make a lot of money. It’s just that the time frame to make big money is often more than a decade, so any smart director moves on to a new position in five to seven years if they haven’t gotten lucky with a big deal immediately. There has never been a licensing director fired for making money when the public interest demanded subsidy. I’ve been hauled in for questioning when I’ve chosen to subsidize rather than try to make money, as with Caring for a Loved One with AIDS. But that’s just because administrators think every IP position should be a money-making position, and that the public interest is the university making money from IP positions. They won’t come out and say this in public, of course.
But that’s just the internal metrics. That’s not what gets reported to AUTM for its licensing survey. The AUTM licensing survey is not about technology transfer. It does not track outcomes. The AUTM survey is not about innovation. It does not track adoption by an established order. The AUTM survey is not about Bayh-Dole. It does not even track subject inventions or any of the core metrics proposed by Bayh-Dole, such as practical application, free competition, or American jobs based in the manufacturing of products based on subject inventions.
For the AUTM survey, you have to keep a second set of metrics: you have to count how many invention reports you get, as if more is better. You count the number of patents, not whether they are any good for their intended purpose (reminder: (1) making money, (2) attracting investment that otherwise wouldn’t be made, (3) creating a commons that pools resources). You count the number of licenses without regard for which inventions are being licensed and which are just sitting. One invention licensed 100 times and 99 inventions never licensed looks like 100 inventions and 100 licenses to AUTM. AUTM wants to see licensing income, but doesn’t ask what it cost to make that income happen. So for internal management purposes, AUTM metrics are worthless. We never looked at the AUTM licensing survey numbers to see how we were doing. If we looked at them at all, it was to see what uninformed people who read AUTM statistics might think about how we were doing: academics and policy people, mostly.
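To see concretely how these aggregate counts mislead, here is a small sketch with hypothetical numbers (the 100-inventions, 100-licenses scenario above), contrasting the AUTM-style headline totals with the management-relevant question of how much of the portfolio ever licensed at all:

```python
# Hypothetical portfolio: one invention accounts for all 100 licenses;
# the other 99 inventions were never licensed.
licenses_per_invention = {f"INV-{i:03d}": 0 for i in range(1, 100)}
licenses_per_invention["INV-100"] = 100

total_inventions = len(licenses_per_invention)
total_licenses = sum(licenses_per_invention.values())
inventions_licensed = sum(1 for n in licenses_per_invention.values() if n > 0)

# AUTM-style headline: 100 inventions, 100 licenses -- looks like 1:1.
print(f"Inventions reported: {total_inventions}")
print(f"Licenses executed:   {total_licenses}")

# Management-relevant view: only 1% of the portfolio was ever licensed.
print(f"Inventions with at least one license: {inventions_licensed} "
      f"({inventions_licensed / total_inventions:.0%})")
```

Same raw activity, two very different stories: the survey totals make the portfolio look fully engaged, while the per-invention view shows 99% of it sitting.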
Then there’s the federal policy metrics. There, what matters is that the objectives of the Bayh-Dole Act are being achieved. These metrics don’t care about private costs or activity. For each subject invention claimed by a university or federal agency, has there been practical application–that is, use of the invention with benefits available to the public on reasonable terms. And, secondarily, have you got maximum participation of small companies in federally supported research and development? How’s collaboration between your university and industry going, with regard to subject inventions? How about promotion of free competition and enterprise? American manufacturing and jobs?
For the federal policy metrics, things are simple. For each subject invention: (1) what is it? (2) what is the date of disclosure to the university? (3) what is the date of first practical application? (4) who has achieved this practical application? That is, first commercial sale on reasonable terms, or first use on reasonable terms, or first public availability on reasonable terms. None of these bits of information can be a trade secret or privileged. The claimed invention gets published when the patent application gets published, so that’s not an issue. For there to be practical application, someone has to offer something to the public on public terms. The someone has to be known. The terms have to be known. These are not sensitive items which, if revealed, would sour the climate for practical application. They are the essence of practical application.
From this information we can see immediately what percentage of a university’s claimed subject inventions have achieved practical application, and in what time period. We can see which ones have benefited from a patent position and which ones have not. We can see how many inventions have been withheld from public access.
From a federal policy position, there’s nothing about whether the universities make money from their efforts. There’s even an argument that universities, given their public mission, ought to contribute to the transfer effort in the public interest; they should chip in alongside federal agencies to provide new inventions to industry. Bluntly, from a federal invention policy perspective, universities have no mandate to make money on their transfer efforts. They could spend money, even lose it, and if in doing so they achieved widespread practical application, all would be good with the federal policy.
To be clear about all this: the federal policy issues around invention are nearly skew to technology transfer generally and have nothing to do with the AUTM licensing survey metrics. The federal policy issue is entirely whether nonprofit-held exclusive patent rights make a net positive contribution to public benefit from federally supported research and development. If universities primarily grant non-exclusive licenses, then there’s no point in having Bayh-Dole and all its needless administrative overhead. If they take out patent positions that never result in practical application, then again, the federal policy fails. If they take out patent positions to extract money from industry, but the patent positions themselves are not necessary to attract investment that otherwise would not be made, while they prevent any others from using the inventions, again the federal policy fails. If stuff does get developed into commercial products but is not made available to the public on reasonable terms (including reasonable pricing: not patent monopoly pricing, but pricing as if there were competition, or pricing as if the public could make and use the invention on their own and had a choice between DIY and buying a commercial product), then again the federal policy fails.
Federal policy does not care about inventions, patents, licensing, or money. It monitors these things to make a gesture towards regulating them. But they are not a policy metric. Invention use is a metric. Practical application is a metric–use with public benefits on reasonable terms. Participation by small companies is a metric. Collaboration is a metric. Free competition is a metric. American manufacture is a metric. American jobs producing product based on subject inventions is a metric.
A simple table is all that’s needed:

Invention   Brief description   Date of disclosure   Date of practical application   Applier
0001        New battery         2015-04-01           2019-11-25                      Tesla
0002        Covid vaccine       2020-05-31
And so on. We can fill in patent information, nature of the licensing, amount spent to develop commercial product (if commercial product is the thing), and the like, but those are not core considerations. What matters is that there’s practical application, and that there’s practical application throughout the portfolio of patented subject inventions, and that that practical application happens expeditiously and is not delayed for years and years through failure to license, failure of diligence, licensing to incapable companies, licensing to patent speculators working a pyramid scheme to sell to other patent speculators, and the like.
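The federal policy metrics fall straight out of a table like this. A minimal sketch (using invented rows mirroring the hypothetical table above) of computing the practical application rate across a portfolio, and the time from disclosure to practical application for each invention that got there:

```python
from datetime import date

# Hypothetical rows mirroring the table above: (invention, description,
# date of disclosure, date of practical application or None, applier).
portfolio = [
    ("0001", "New battery",   date(2015, 4, 1),  date(2019, 11, 25), "Tesla"),
    ("0002", "Covid vaccine", date(2020, 5, 31), None,               None),
]

applied = [row for row in portfolio if row[3] is not None]

# Share of claimed subject inventions that have achieved practical application.
rate = len(applied) / len(portfolio)
print(f"Practical application rate: {rate:.0%}")

# Elapsed time from disclosure to practical application, per invention.
for inv_id, _, disclosed, applied_on, applier in applied:
    days = (applied_on - disclosed).days
    print(f"{inv_id}: {days} days to practical application ({applier})")
```

Nothing here requires licensing income, patent counts, or costs; the whole federal-policy question reduces to which inventions reached practical application, how fast, and by whom.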