Nine Points to Consider (with regard to AUTM’s licensing survey), 1-7

The Association of University Technology Managers, a front group for university licensing professionals, conducts an annual survey of the universities that its members work for. The survey asks for various metrics regarding inventions, patenting, licensing, startups, and revenue. The survey is used by various organizations and science policy researchers as primary data for assessing the effectiveness of university “technology transfer” and the Bayh-Dole Act. The General Accounting Office (now the Government Accountability Office) prepared a report in 1998 that considered AUTM’s survey. The GAO had this to say:

The AUTM survey is limited in its application to Bayh-Dole R&D because the survey covers the activities involving inventions by the universities from all funding sources—not just federal. Also, the AUTM survey is limited as an evaluation device in that (1) the data are based on a survey sent to the organizations, (2) not all organizations respond, (3) respondents report data according to their own fiscal year, and (4) no independent verification or validation of the data is provided.

We may summarize: the AUTM data are not validated, not normalized for reporting period, not complete, and don’t break out federally funded inventions from others. Let’s add to the GAO’s observations with nine points to consider. Seven here, two more to follow in the next article.

1. The AUTM survey data are not validated. This is what the GAO observed. There’s no quality check or audit to verify that the data reported are accurate. For instance, for years the University of Washington faked its startup metrics (vastly exaggerating the number of startups each year), reported the fake numbers to AUTM, and then cited AUTM as the source for its continuing claims to be a startup powerhouse, as if AUTM had conducted its own independent research on the matter. But no, AUTM merely feeds back what universities report to it, uncritically, merrily. Thus, for starters, AUTM’s survey is only as reliable as university officials are in reporting the requested information. AUTM encourages estimates when a university doesn’t have the records. Here are the instructions for the 2016 AUTM survey:

The information provided to AUTM then may be a “best estimate”–make of that what you will. Without some way of marking verifiable figures rather than estimates, one has to work with a default expectation that AUTM figures are administrator estimates, not facts–in which case, there isn’t even anything to audit. How does one validate an estimate?

2. Universities duplicate the reporting of inventions, patents, and startups. A number of inventions made at universities are actually co-invented by researchers working for different universities. Each university requires the disclosure of each invention, however, and adds that invention disclosure to its annual totals, which most universities then dutifully report to AUTM. You can see that as universities report inventions, a number of inventions with co-inventors at different universities will be counted multiple times. For any given university, this is not a big deal–round those half or third inventions up to a whole invention–but as soon as someone starts to add up the inventions across universities to get some grand total, things go bad, as the sketch below illustrates. There’s nothing in the AUTM survey reporting to guard against this double and even triple reporting of inventions. The survey data reported might be helpful in assessing a given university’s activities in its fiscal year, but the data are not helpful in assessing anything on a regional or national scale.
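Here’s a minimal sketch in Python of how the double counting works, using made-up invention IDs and universities. This is an illustration of the arithmetic, not AUTM’s method:

```python
# Minimal sketch (hypothetical data): each university counts a co-invented
# disclosure as "its" invention, so summing per-university reports overstates
# any regional or national total.

disclosures = [
    # (invention_id, set of universities that each take a disclosure)
    ("inv-001", {"Univ A"}),
    ("inv-002", {"Univ A", "Univ B"}),            # counted twice
    ("inv-003", {"Univ B", "Univ C"}),            # counted twice
    ("inv-004", {"Univ A", "Univ B", "Univ C"}),  # counted three times
]

# What a reader of per-university survey totals effectively computes:
per_university_sum = sum(len(universities) for _, universities in disclosures)

# What a grand total would require: deduplicating by invention.
distinct_inventions = len({invention_id for invention_id, _ in disclosures})

print(per_university_sum)   # 8 reported "inventions"
print(distinct_inventions)  # 4 actual inventions
```

Per university, each count is defensible; added together, they are not.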

This same problem comes up for patents (co-inventors) and startups (which may anticipate licenses from multiple universities and thus be claimed by each university as “its” startup). Maybe you don’t care–perhaps the inflation is only 10% or 15%. Falsus in uno, falsus in omnibus. AUTM makes no attempt to present accurately what is going on. Big numbers matter. Call it the blindness of confirmation bias. Call it political bluffery. But the result is numbers that aren’t reliable.

3. The definitions of invention and license are not directed to patentable inventions or to patents. Actually, there’s no definition of invention in the AUTM survey. The survey asks for “technologies.” Here’s the definition:

A technology, as defined by AUTM, is almost entirely skew from patentable invention: “A TECHNOLOGY is a single innovative idea, no matter how many patents, copyrights, or disclosures may be included in the TECHNOLOGY.” Thus, there could be no patents at all, and still a “technology” can be counted and reported via a “disclosure.” This creates plenty of problems when it comes to counting anything. A technology might be an invention, but it could also be software, or photographs, or whatever–pretty much anything that a university licensing office insists on handling (or at least reviewing).

Look then how things get muddled when it comes to disclosure of technologies:

Thus, a disclosure could be for a half-baked idea, not patentable, or a piece of software–and the next year, the same idea, now perhaps more fully baked, might be disclosed again, or a software program updated to a new release and re-disclosed. We are far removed from counting discrete patentable inventions. The same applies to licensing:

If the license is for software or a biological material but not in the form of an MTA, then report it–but if you think that reporting the license would “unreasonably skew” the data, then “(at manager’s discretion)” report all the licenses for a given technology as one license. Whulp, the University of Washington apparently decided recently to skew the data and report all its licenses for software and biological materials over $1,000. That got them to #1 in the country in, ahem, “licenses.” Given that it’s up to the reporting manager to decide what to count and how to count it, there’s almost no information that can be derived from any given university’s reported number of licenses and options (see the sketch below). Certainly, there’s nothing to be concluded about activity related specifically to patentable inventions.
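A small sketch, again with made-up numbers, of how much the manager’s discretion matters. The same year of licensing activity supports two wildly different reported counts, depending on whether each license is counted or all licenses for a technology are consolidated into one:

```python
# Minimal sketch (hypothetical data): the "at manager's discretion"
# consolidation rule lets the same activity be reported two very
# different ways.

# Licenses granted in a year, keyed by the technology each covers.
licenses_by_technology = {
    "software-package-X": 250,   # many small end-user licenses
    "cell-line-Y": 40,           # routine biological material licenses
    "patented-invention-Z": 2,   # conventional patent licenses
}

# Count every license (the choice that inflates the total):
count_every_license = sum(licenses_by_technology.values())

# Consolidate to one license per technology (the deflating choice):
count_one_per_technology = len(licenses_by_technology)

print(count_every_license)       # 292 "licenses"
print(count_one_per_technology)  # 3 "licenses"
```

Both numbers are permitted by the survey instructions; neither tells a reader anything comparable across universities.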

4. The activity measures reported are unrelated to one another within any given annual reporting period. The number of technologies disclosed has nothing to do with the number of patents that issue in that same reporting period, and almost nothing to do with patent applications filed, licenses granted, startups, or even the research funding received. The licensing survey asks for a “cash” accounting of an activity that is deeply based on “accrual” practices. A disclosure this year may set up a patent application next year or the year after; one or more patents may issue three or four or six years later, depending on how one games the patent prosecution; an invention might be licensed immediately–even before the invention is disclosed, if the license is granted up front in a sponsored research agreement; and startups might form at any time based on disclosures from years ago.

At best, the AUTM survey reports estimated activity, but does not show at all whether any of the estimated activities are related to one another. A university licensing office might have lots of activity–disclosures, patents, licenses, and even income–and still be totally ineffective and inconsequential. Most disclosures might not result in patents; most patents might not be licensed, but one or two “technologies” might be licensed many times while the others waste away; a couple of “big hit” licenses might produce most of the income, while the rest have no chance. For all that, an effective university licensing program might grant royalty-free licenses (which would fall under the $1,000 reporting limit) and therefore not appear to have much licensing or much income, and yet be doing a tremendous job.

Anyone who thinks to compare disclosures and patent applications and patents issuing, or licenses, or research funding for a given year will create fantasy ratios that are entirely artificial (the sketch below shows how). It’s nonsense. Like what the Milken Institute reports by playing with AUTM’s survey estimates. GI/GO.
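A sketch of why the within-year ratios are fantasy. Assume, hypothetically, that a fixed 30% of disclosures eventually yield an issued patent, but with a four-year lag; in a growing program, the within-year ratio never reports that 30%:

```python
# Minimal sketch (hypothetical numbers): within-year ratios of patents issued
# to disclosures misstate even a perfectly stable conversion rate, because
# patents issue years after the disclosures that produced them.

LAG_YEARS = 4
TRUE_CONVERSION = 0.30  # share of disclosures that eventually yield a patent

# Disclosure counts for a growing program, by fiscal year.
disclosures = {year: 100 + 20 * (year - 2000) for year in range(2000, 2011)}

for year in range(2004, 2011):
    issued = TRUE_CONVERSION * disclosures[year - LAG_YEARS]
    within_year_ratio = issued / disclosures[year]
    # Prints ratios of roughly 17% to 22%, never the true 30%.
    print(year, f"{within_year_ratio:.1%}")
```

The within-year ratio is an artifact of growth and lag, not a measure of anything the office did that year.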

5. The AUTM survey doesn’t break out federally supported activity. AUTM does ask how much funding is federal–but there’s little need to ask, since that information is available from the NSF’s surveys. AUTM also asks how many technologies had federal support–but again, technologies are not inventions. Worse, university administrators in general have no idea what constitutes a subject invention under the Bayh-Dole Act, and so make up their own “estimates” of what is and what isn’t a subject invention. The federal funding statements that appear in university patent applications may be entirely skew from the law–showing up when an invention is not a subject invention and failing to appear when an invention is a subject invention. Sure, Bayh-Dole is a turd of a law, but university administrators apparently love it all the more. Again, university patents with federal support (generally) carry a federal funding statement, so one doesn’t have to rely on AUTM to collect this information from universities. However, what we might actually want to know is the licensing activity related to federally supported inventions that have indeed been patented. That’s not something AUTM appears to want the public or policy makers to know about, and any reports of utilization provided to federal agencies become, according to Bayh-Dole, federal secrets.

6. The AUTM survey doesn’t ask the questions that Bayh-Dole expects with regard to its statement of policy and objectives. Has a subject invention been used? Have small businesses had maximal involvement in its development? Has the university used the patent system to promote free competition and enterprise? Has research been unduly encumbered? Has the invention, or products based on the invention, been manufactured in the United States with United States labor? What is the date of first commercial sale or use for each subject invention? Has each invention been used so that its benefits are available to the public on reasonable terms? What is the income received with respect to each subject invention–not just royalties, but all such income? AUTM’s survey doesn’t ask these questions.

It is as if AUTM does not want to know. Don’t ask–truly; don’t tell–why report things that would ruin the narrative? When AUTM turns to championing Bayh-Dole, it reports all activity as if it were Bayh-Dole activity. Inventions that were never subject inventions are reported as if they were the result of Bayh-Dole. Economic impact is estimated and reported as fact based on a range of university-industry collaborations, not the economic impact that arises from the beneficial use of inventions made with federal support. Given the way that AUTM accounts for economic impact, there would be virtually no difference in AUTM’s estimates if there were no actual beneficial use of inventions made with federal support. Yeah–the other activities (expenditure of federal research funds, expenditure of industry research funds, expenditure of investment funds from all sources in the development of companies, which also may have licensed technologies from universities) swamp out the economic impact of the benefits of using federally supported inventions, as the sketch below suggests.
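To see the swamping effect, consider a back-of-the-envelope calculation with entirely made-up magnitudes (the point is the proportion, not the dollar figures):

```python
# Minimal sketch (made-up magnitudes): when "economic impact" is built from
# expenditures, removing every benefit from the use of federally supported
# inventions barely changes the total.

federal_research_spend = 40_000_000_000
industry_research_spend = 5_000_000_000
investment_in_companies = 10_000_000_000
benefit_from_using_federally_supported_inventions = 500_000_000

impact_with_use = (federal_research_spend + industry_research_spend
                   + investment_in_companies
                   + benefit_from_using_federally_supported_inventions)
impact_without_use = (impact_with_use
                      - benefit_from_using_federally_supported_inventions)

share = (impact_with_use - impact_without_use) / impact_with_use
print(f"{share:.1%}")  # about 0.9%: the invention benefit is lost in the noise
```

An “impact” estimate that barely moves whether or not the inventions are ever beneficially used is not measuring what Bayh-Dole is supposed to produce.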

7. AUTM misleads readers of the survey with its “core questions.” Here are the core questions that AUTM says are critical to an assessment of university technology transfer:

None of these questions gets at anything to do with actual “technology transfer.” Research expenditures as total dollars are not related to invention disclosure rates–yes, one can make correlations, but it is more statistical fantasy. One can get 10 to 30 disclosures a year from any productive engineering lab if one wants, regardless of extramural funding. And one might not get any invention disclosures from a well funded nursing program or architecture program or communications program. And one may get no disclosures from operating clinical trials, even though they are counted as if they were sponsored research; clinical trial income may be substantial and have nothing to do with funded research intended to discover or invent. Furthermore, some of the most important inventions don’t have any extramural funding. AUTM would have people believe that extramural funding and meaningful inventions are somehow critically related. They aren’t. The amount of funding does not cause invention, or even creative ideas. It goes the other way around: creative ideas may lead to funding. And it can cut the other way, too: the prospect of funding may lead to less creative ideas. Better to get funded and do something half stupid or half already done than to risk not getting funded for something really creative.

Total license income, too, is meaningless. By way of example, the University of Washington rode income from its expression-of-polypeptides-in-yeast invention for over twenty years (the patent term was tolled during an interference). During that period, that income was 90% of UW’s total licensing income–while its patent licensing group was barely breaking even, for two decades, on all the other invention management it attempted. But for those twenty years, UW sure looked sweet on that critical measure of total licensing income. Otherwise, UW has had an underperforming, always-in-a-dither licensing program with more marketing hype about itself than substance–and it could afford the marketing hype because it got its share of the polypeptides-in-yeast income, even though the invention was handled by the Washington Research Foundation, an outside invention management agent, and not by the UW licensing office. One can say similar things about many other university licensing programs–UW here is not an isolated instance.

The number of new patent applications filed is a function of the money available to file patent applications and the lack of selectivity with which a university licensing office operates. There’s nothing about the number of patent applications filed that has anything to do with technology transfer. The more applications filed, the more expensive the bluster and the more research results go behind an administrative paywall. More is not at all better; more is not more productive; more is not more potential for public benefit.
