Innovating in the infrastructure

Here is an interesting take on innovation. Bill Frezza argues that the STEM infrastructure is not itself uniformly practicing the best STEM. For biotech, he argues, there is still room for innovation simply by upgrading to the technology that is current in other sectors, such as semiconductors, or at least by adopting approaches that have already proven themselves there.

Rather than trying to take “new ideas” and stuff them through an aging infrastructure that has gone stagnant (“technology transfer”), how about creating the conditions for new infrastructure? The “problem” is not a dearth of new ideas, and it is certainly not making an existing infrastructure “more efficient”, such as (in the case of university IP) present assignments and no-negotiation exclusive licenses. The problem is how to build new infrastructure around emerging opportunities, advantages, and insights, which are often recognized and acted upon by infrastructure outsiders. These folks are *not* trying to “solve the problems” of the existing infrastructure; rather, they assemble the tools and practices they need to do what they recognize as possible. Their job is not mitigation engineering. As Neal Stephenson has quipped, the cream of our technology education does not need to find its employment in writing spam filters.

A bunch of new ideas that are variations at the minor margins of the existing infrastructure (where there are “markets” and it is possible to review for “commercial potential”) are all ideas of a generic class. At some point the class itself becomes worn, and it no longer matters how many patents are filed. The more patents are filed, the more fragmentation, the more overhead, and the more difficulty in working the existing infrastructure.

It would appear that there are two critical times for open innovation. The first is when new ideas are emerging in practice and research. There, the crush of institutional interest can be devastating, as it is with monopoly university technology transfer programs seeking to institutionalize everything that can be owned. The second is when an infrastructure has matured and entered its consolidation, efficiency, and utility phase. This second time is characterized by standards, generic products, and massive cross-licensing, after expansion and iterative innovation have played out into stable market economics. Then, what’s the point of more exclusive patent holding, except to play the troll or the fool? What’s the point of new university research feeding *that* dragon?

There are magical times in the deployment of a new technology. We have seen them with semiconductors, software, biotech, the internet and web, mobile communications, and social media. University IP happened to get a boost of freedom from Bayh-Dole in parallel with the biotech wave. Now, nearly two decades after that burst of innovation transformed into a mature new infrastructure, the opportunity in that direction is played out. At the same time, university tech transfer programs, for all their “infrastructure,” have managed to ignore, bungle, or suppress a host of other areas of innovation expansion: internet, web, mobile, social, nanotech, synthetic biology, 3D printing. University technology transfer offices generally did not change or expand their own infrastructure to respond.

“Technology transfer” offices at universities could be a primary source of new platforms (the forerunners of infrastructure). How long will they persist in seeking to be the best butlers and maids to entrenched infrastructures, preserving their own entrenched infrastructure in the process? Presently there are plaintive calls to anyone who will listen, begging them to save the “system” of “technology transfer” that has been created, as if, were this infrastructure to lose its monopoly, it would fail. Yet just the opposite is the case: if the current system were to lose its monopoly, technology transfer would have its greatest chance to succeed.
