Ouroboros Innovation Advocacy

SSTI just sent out a note about a new report by Louis G. Tornatzky and Elaine C. Rideout, “Innovation U 2.0: Reinventing University Roles in a Knowledge Economy.” After a brief read-through, I’m left puzzled. Perhaps this note in the Foreword by Scott Doron captures the sense (my bold):

Dr. Lou Tornatzky and his colleagues–fueled not by remuneration, but by an intense belief in the economic power of universities–should be commended for their dedication and quality product.

I have no reason to doubt the authors’ dedication. What I don’t find, however, in the descriptions of the twelve universities chosen to illustrate the economic power of universities is any account of the cost of the programs or their actual impact on the economy. I see the usual statistics on numbers of startups and the amount of financing these may have received, but the report appears to have no interest in getting behind those numbers. Everything reported appears to be marketing copy produced by universities and taken at face value. I don’t see new products on the market, or what those new products actually have done to make a change in society. Yes, I’m aware of a product here and there, and yes, I’ve heard of PageRank, but that’s not what this report is proposing.

My sense is that the report is an effort to rally states and the federal government to continue allocating money to universities to prop up the losses these universities–especially the ones with the larger research budgets, as represented in the IU2.0 report–appear to be running in the administration of their extramural research programs. Though most universities won’t come out with the numbers, it is clear from their representations to the federal government that their indirect cost rates are too low, perhaps by as much as 20%. Now, of course, university administrators could be cooking their books and telling the feds a fabricated story just to see how much they can get, but I’m thinking that even if they are, the indirect cost rates are way too low anyway. I hear that when the federal government negotiates an indirect cost rate with an industry contractor, that rate is north of 100%, not the 55% or so that universities get. And it’s worse, in general, for public universities, which for some reason appear to get lower rates than their private counterparts.
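For readers unfamiliar with how these rates work, here is a toy sketch of indirect cost recovery, using the 55% and 100% figures from the paragraph above. The $1m direct cost figure is a made-up example, and real negotiated rates apply to a “modified total direct cost” base, which this simplification ignores:

```python
# Toy illustration of indirect cost recovery: the negotiated rate is
# applied to the direct costs of an award. (Real rates apply to a
# modified total direct cost base; the 55% and ~100% rates are the
# figures cited in the text, the $1m award is hypothetical.)
direct_costs = 1_000_000       # direct costs on a hypothetical award

university_rate = 0.55         # ~55%, the cited university rate
industry_rate = 1.00           # "north of 100%" for industry contractors

print(int(direct_costs * university_rate))  # 550000 recovered
print(int(direct_costs * industry_rate))    # 1000000 recovered
```

On the same award, the industry contractor recovers nearly twice the overhead the university does–which is the gap the paragraph is pointing at.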

The report repeats some flawed premises:

Historically, federal government support of University research has emphasized basic science. That approach derives from a linear model of the innovation process, as well as an assumption that “market-failure” explains why private R&D research has gravitated away from the early Bell Labs model, in which basic science could take years to have an impact on the bottom line.

I don’t see a linear model in Vannevar Bush. I don’t see evidence of “market-failure” in the industry labs–new ones have taken the place of the old ones. Private R&D may be as productive as ever. That much more federal money has been allocated, so that private R&D’s share of total R&D dollars has dropped, says nothing about productivity.

The linear model derives from people trying to rationalize federal support for various forms of research. To arrange them in a row–basic, applied, development–and then call that an intended progression, is foolish. Further, if that were really the federal policy, then why is there no pathway of federal support from basic, to applied, to development? There isn’t. Because it was never in the design. Basic gets support, applied gets support, development gets support. Even with the SBIR program, the NSF cannot figure out why all those Phase 2 recipients don’t move to commercialization. Could it be that the reason to propose Phase 2 funding is to get the federal dollars, not to build a product? Could it be that companies would build products if the government started purchasing that product in volume? That used to be a good sign that research was paying off. You know, a Saturn V rocket with Tang and Space Food Sticks.

Given that the report is about “Innovation”–right there in the title–then one would think that a primary focus would be on what the universities have produced that is, well, a change in the established order. Instead, what the report appears to promote is the expansion of programs–at substantial state expense–premised on the claim of innovation. The fundamental observation, however, is that the economic development programs have produced next to nothing that could be called innovation. If Utah’s USTAR program is any indication, $93m of state money produced four companies, 13 employees, and apparently no new products worth mentioning. What have the other state-sponsored university programs accomplished?

No question, if the state of Utah puts $93m into building out University of Utah research infrastructure, spending that money will make an economic contribution, but the impact comes when results of research make a difference. That outcome apparently, for school after school, is utterly uninteresting to report. Yes, something here and there, but not anything like the billing of potential.

There are plenty of things putting state and federal money into university research infrastructure can accomplish: people get training, companies obtain assistance, construction companies get contracts, and investors get a betting parlor. These are decent things. But I’m not seeing the innovation being reported. I’m not seeing how the economy is changing as a result of the university-hosted innovation.

The IU2.0 report argues that “size doesn’t matter as much as one thinks.” It is hard to say what “one” might think, but there are a couple of ways to break this claim down. Clearly, an invention or discovery in basic research can come from most anywhere–neither the size of the institution nor the size of the grant makes much difference. I have never seen a correlation between size and impact. A small research school such as the University of Akron might, with luck and capable folks inside and around the university, deliver neat stuff to industry for development, without huge federal support. On the other hand, the University at Albany has received somewhere north of $400m in state support to develop a huge nanotech research and business complex. Clearly, the economic development piece is there, provided the corporate residents of the complex pay out in taxes (directly, indirectly) enough to recoup that $400m+ in state expenditure. I’m not sure, however, that anyone is even trying to keep an accounting to find out whether the payout comes or not.

The report argues for an “organizational systems” approach to university innovation:

…we are convinced that a high performing Innovation U is more than a collection of well-intentioned incentives and practices, but is a product of a well-designed, led, and implemented organizational system.…While individual practices can and do matter, we believe real impacts happen when all (or at least most) of the parts of the university’s organization subsystems are tuned to work synchronously toward a common objective–promotion of technological innovation and delivery of value to society.

Perhaps a reason the authors are reduced to making a testament to belief is that, as Tertullian had it, “I believe because it is absurd”–or, here, “I must believe, because if I had good data on that, I would simply know, and would not have to believe.” It may well be that the authors believe that a university administration, in the form of a benevolent holistic (that is, totalitarian) system, can somehow plan and execute on “technological innovation.” Just like that. I could linger over Friedrich Hayek’s critique of central state planning, especially when it comes to innovation (the critique is devastating to the position the IU2.0 report takes).

Instead, I will hammer away directly. Look at the problems in the statement above: a vision of a university administration taking control of the entire university project, and turning it, somehow, into a process, “tuned” just so, that will burp out “technological innovation.” Yet, after thirty years of building these claims, there is next to nothing to show for the one trillion or so dollars of federal and state “investment” in university research. Plenty of folks trained. That’s good. Plenty of buildings built. Can’t hurt. Lots of stuff published. Good reading for historians of science. But the stuff that really has made change come about–stuff like the digital computer, the internet, or warfarin, or gene splicing–had nothing to do with a holistic university administration tuning all the “subsystems” toward innovation. Those big innovations came about by motivated and fortunate people working with administrative forbearance, gaining assistance from industry, from colleagues, from random places–a farmer with a dead cow and a bucket of blood arriving on a winter day, a navy recruit swallowing rat poison, a president with a heart attack.

One does not build systems to produce innovation using administrative controls. Innovation challenges the established order. It’s a dream world to think that state economic planners can, through planning, tune university institutions, or any institutions, to technological innovation. But you will notice that the report, after all the buildup, does not direct the tuning of subsystems to innovation–no, the report directs the tuning to the promotion of technological innovation. Again, promotion is not production. I have no idea what is meant by “value” to society, given that the only folks that appear to be consistently making money from the “universities in the knowledge economy” game are brokers of state and federal dollars pitched to promote whatever it is that is trendy to promote. Researchers get careers, economic planners get careers, technology transfer officers get careers, patent attorneys get careers with vacations, compliance officers get careers, and construction companies and Wall Street bankers get careers, and the bankers probably own the vacation resorts where the attorneys book rooms. I just don’t see the production of innovation from this wonderful engine of promotion. It’s like the Colorado River of funding–it makes some farms green along the way, but the water is pumped out before the river makes it to the ocean.

I read the IU2.0 report as a rather transparent hymn to state control of research, with an unblinking-never-a-doubt (“we are convinced”) apology for the organization-person, fueled with public funds, to promote something that has not produced much at all from the promotion of its production. Universities do host creative folks, and those folks do creative things, and those things in all sorts of ways do contribute to the shape of our lives and our economies. That bit appears true, and I like a world in which that bit is true. But I don’t see that promoting the production of innovation, or tuning the university toward that promotion and away from all other sorts of activities, makes any sense at all, has any merit, or has any data to demonstrate its worth; if anything, the diversion of money and administrative institutions to promoting technological innovation, or any innovation whatsoever, is a sure way to destroy innovation. Call someplace paradise, kiss it good-bye.

Federal funding, especially lots of it, appears to shift interest from private initiatives to whatever it is that the government is induced to support. What’s the point of a small company pushing a $30K research effort when the government will throw $30m at the area if it looks good? Why even try to do something when the government can swamp it out on a whim, hand the next round of development to university administrators, who license to their pet startups and wait for speculative investors to take the bait or, that failing, for a chance to sue industry for thinking to actually use whatever might have failed the startup.

There was a time when government funding was considered dangerous, allowing politics to gain a foothold in the laboratory. Stories were told of Lysenkoism, of the legacy of governments using science to push planned but socially disruptive and ultimately failed economies. Leaders worried that science should be free of political ties. Philosophers of science argued for the separation of science and state. Even recently, a thought leader such as Michael Crow can wonder whether the federal handouts have damaged university self-directed curiosity. Even if one is a big fan of federal funding for university research, surely there is yet room for some reserve, even some concern for how much of the research geography should be bought out by federal and state funding thrown at universities to fuel a continuous expansion of “research as an industry.” Given the dragnet of patent policies, sponsored research offices, technology transfer officers, lawyers, compliance officials, and auditors, if there were an outflow of innovation as proposed, the income from licensing combined with the cash donations from grateful entrepreneurs would be more than enough to fund all the research our society could suck through its collective straw.

The reality is bitter: there is no such productivity, it is not just around the corner any day now, nor are private investors induced to throw money at university startups providing “value to society”–though I understand that if one has just flipped a university startup company to the next round of investors, it must feel pretty darn good to feel that fat wad of bills in one’s metaphorical pocket. That may be value, but I have a difficult time construing a few lucky (or manipulative) speculators as “society.”

At least the authors admit there’s no connection to be made:

A second issue relates to whether the data collection methods we chose to use–basically qualitatively and quantitatively informed descriptive case studies–will allow us to confidently assert a causal relationship between the structures, policies and practices we highlight and the outcomes achieved by these schools. The answer is: we cannot make such an assertion.

The authors give methodological concerns as their reason. There is a deeper problem that the authors do not appear willing to acknowledge: what they have assembled is not “data”; what they mean by “qualitatively and quantitatively informed” case studies is little more than epideictic rhetoric that uses words and numbers to praise institutions for their efforts, for the purpose of motivating even more funding for such efforts.

A few paragraphs prior, the authors were “convinced” that a “high-performing Innovation U” has a wonderfully planned and organized system. And yet, here we have it that there’s no causal connection between that system and outputs. It truly is a matter of belief! But worse, there does not appear to be any concern for the truth: the data would hurt. Much better to simply praise the idea of promoting innovation.

In Fooled by Randomness, Nassim Nicholas Taleb asks us to imagine the case of 10,000 incompetent traders working a random system with only a 45% chance of success each year. After five years, there are still 184 that have done remarkably well–turning a profit every year: “a population entirely composed of bad managers will produce a small number of great track records” (154). There is no point in asking what these 184 managers have in common that makes them great. Whatever they have in common is not related to the causality of their success. Whatever they have in common, they share in similar proportions with the roughly 98% of managers who did not turn a profit. The cutting point Taleb makes is this: “Lucky fools do not bear the slightest suspicion that they may be lucky fools–by definition, they do not know that they belong to such a category” (18).
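Taleb’s arithmetic is easy to reproduce. A minimal simulation, assuming a “great track record” means a profitable year five years running, with an independent 45% chance each year:

```python
import random

# Taleb's thought experiment from Fooled by Randomness: 10,000 traders,
# each with an independent 45% chance of a profitable year, judged over
# five years. A "survivor" turned a profit every single year.
random.seed(0)  # arbitrary seed, for a reproducible run

traders, p_win, years = 10_000, 0.45, 5

survivors = sum(
    all(random.random() < p_win for _ in range(years))
    for _ in range(traders)
)

# Expected survivors: 10,000 * 0.45^5, which is roughly 184.5 --
# the 184 in Taleb's example -- from pure chance, no skill anywhere.
expected = traders * p_win ** years
print(survivors, round(expected, 1))
```

Any given run lands within a few dozen of 184, and by construction none of the survivors is any better than the other 98% who washed out–which is exactly the point about picking 12 happy case studies.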

One turns to 200 or so research universities and nonprofit institutions that might participate in some form of technological innovation activity. Finding 12 or so that have some happy stories to tell is fine. But without a causal mechanism, how is this situation any different from a random one? What does it matter if the administration is organized or not, focused on innovation or instruction or on a new football stadium? It might feel good to demand all systems be directed at “innovation” or rather the promotion of innovation, but it is an empty feeling.

If one runs the numbers on cost and possible outcomes–financial ones, for a university–then unless there’s at least, say, $200m of research coming in, there is absolutely no administrative purpose in trying to construct a system or process to deal with innovation. And by the time one is at $200m of mostly federal funding, one is losing maybe $20m, maybe $40m a year in hidden administrative and infrastructure costs–that is, it costs up to $240m to spend that $200m. Much of that added cost is for top-tier administrators to put together the show of processes and systems directed at promoting the idea of innovation, or delivering value to society–so the university can win more grants, expand its research footprint, and obtain more funding from state and federal sources to promote its efforts.
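Spelled out, the back-of-the-envelope arithmetic looks like this. The $200m threshold and the $20–40m loss range are the author’s estimates above, not audited figures:

```python
# Back-of-the-envelope check of the figures in the text; all inputs
# are the author's estimates, not audited numbers.
research_awards = 200_000_000                 # ~$200m in mostly federal funding
loss_low, loss_high = 20_000_000, 40_000_000  # hidden admin/infrastructure loss

true_cost = research_awards + loss_high
print(f"${true_cost // 1_000_000}m to spend ${research_awards // 1_000_000}m")
# -> $240m to spend $200m

# As a fraction of the awards, the hidden loss runs 10-20%:
print(loss_low / research_awards, loss_high / research_awards)
# -> 0.1 0.2
```

A 10–20% structural loss on every research dollar is the hole that the promotion machinery is, on this reading, built to keep filling.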

Let’s call it ouroboros innovation advocacy, the head biting the tail: university administrators and those that love them promote their promotion of innovation so the university can get more money to do research that the promoters can manage as part of promoting promotion. A never-ending cycle of promotion, expansion, management, and promotion of management to promote expansion. It all works so long as no one asks for outputs beyond the impact of spending the money arising from successful promotion of the promotion of innovation.

Although I would love to be in the midst of a renaissance, even one based on technological innovation if it must come to that, I do not see it happening. Instead, I see promotion without results; expansion without results; and administrations becoming petrified around the idea of process control of faculty, student, staff, and visitor creativity in an effort to somehow make the promotion come true. I have no doubt that many of the administrators who are ready to take everything they can from faculty and students believe that if they could just design the right system, you know, a well-designed organizational system, then new products would vent through every crack opened up by each subsystem doing its part, innovation would rise up like a sweet aroma, and money would pour back in return to the university, realizing everyone’s dreams.

Were it so. In my experience, being around hundreds and hundreds of research projects, and some amazing individuals and collaborations, federal funding contraindicates successful transfer of technology. Yes, there were federally funded projects that got stuff into circulation, even impressively. But overall, when it came to the effort to innovate, the folks with federal funding were more concerned about the next federal grant than about following something out to society. That’s not a knock on the talented folks who do get federal awards. Their accomplishment and ability to compete for federal grants is impressive. But maintaining those federal grants is essential–to keep a lab team together, to maintain the equipment and data, to hold onto any status, to be asked to present at conferences, or as John Boyd might have put it, to be someone. Federal funding invites people to be someone, and it is a lot of work to keep being someone. Doing itself is a different sort of thing. Doing is messy, it runs against the status quo, it runs against processes and systems built to optimize whatever future might be had by having next year look remarkably like this year, but just a bit better.

As you work through the IU2.0 report, consider the possibility that, for all the importance we may place on universities participating in the “knowledge economy,” and even for the value we might hold for technology as the bringer of hope to the struggling masses without the funds to buy and trade startups created at the instigation of administrators, the path of innovation does not lie with more process, more tuning, more federal funding for university administrators, or even for faculty research. Consider that the path to innovation may be where it once was: with individuals taking initiative, finding themselves bewildered on some strange shore of study, motivated by collaborations with others, where ideas and information and resources move, as if guided by an invisible hand, toward those with insight and the willingness to act on that insight. Those virtues need no administrative prompts, nor federal incentives, nor tuned programs spending millions to promote a focus on innovation. When even a handful of university administrators stand up for these virtues–for the freedom to pursue curiosity, for the freedom from the political pressure to “compete” for federal grants, for the freedom to step into the community, into practice, and draw ideas and inspiration there–then my hope and expectation is, we will also see the re-emergence of the potential that university-based folk have to “deliver value to society.”


This entry was posted in Innovation, Metrics, Policy, Technology Transfer.