The Association of University Technology Managers, a front group for university licensing professionals, conducts an annual survey of the universities that its members work for. The survey asks for various metrics regarding inventions, patenting, licensing, startups, and revenue. The survey is used by various organizations and science policy researchers as primary data for assessing the effectiveness of university “technology transfer” and the Bayh-Dole Act. The Government Accountability Office prepared a report in 1998 that considered AUTM’s survey. The GAO had this to say:
The AUTM survey is limited in its application to Bayh-Dole R&D because the survey covers the activities involving inventions by the universities from all funding sources—not just federal. Also, the AUTM survey is limited as an evaluation device in that (1) the data are based on a survey sent to the organizations, (2) not all organizations respond, (3) respondents report data according to their own fiscal year, and (4) no independent verification or validation of the data is provided.
We may summarize: the AUTM data are not validated, not normalized for reporting period, not complete, and don’t break out federally funded inventions from others. Let’s add to the GAO’s observations with nine points to consider–seven here, two more to follow in the next article.
1. The AUTM survey data are not validated. As the GAO observed, there’s no quality check or audit to verify that the data reported are accurate. For instance, for years the University of Washington faked its startup metrics (vastly exaggerating the number of startups each year), reported the fake numbers to AUTM, and then cited AUTM as the source for its continuing claims to be a startup powerhouse, as if AUTM had conducted its own independent research on the matter. But no, AUTM merely feeds back what universities report to it, uncritically, merrily. Thus, for starters, AUTM’s survey is only as reliable as university officials are in reporting the requested information. AUTM encourages estimates when a university doesn’t have the records. Here are the instructions for the 2016 AUTM survey:

The information provided to AUTM then may be a “best estimate”–make of that what you will. Without some way of distinguishing verified figures from estimates, one has to work with a default expectation that AUTM figures are administrator estimates, not facts–in which case, there isn’t even anything to audit. How does one validate an estimate?
2. Universities duplicate the reporting of inventions, patents, and startups. A number of inventions made at universities are actually co-invented by researchers working for different universities. Each university requires the disclosure of each invention, however, and adds that invention disclosure to its annual totals, which most universities then dutifully report to AUTM. You can see that as universities report inventions, a number of inventions with co-inventors at different universities will be counted multiple times. For any given university, this is not a big deal–round up those half or third inventions to a whole invention–but as soon as someone starts to add up the inventions across universities to get some grand total, things go bad. There’s nothing in the AUTM survey reporting to guard against this double and even triple counting of inventions. The survey data reported might be helpful in assessing a given university’s activities in its fiscal year, but the data are not helpful in assessing anything on a regional or national scale.
This same problem comes up for patents (co-inventors) and startups (which may anticipate licenses from multiple universities and thus be claimed by each university as “its” startup). Maybe you don’t care–perhaps the inflation is only 10% or 15%. Falsus in uno, falsus in omnibus. AUTM makes no attempt to present accurately what is going on. Big numbers matter. Call it the blindness of confirmation bias. Call it political bluffery. But the result is numbers that aren’t reliable.