Stats and the Darndest Things

AUTM stats were developed to make visible the practice load on a technology transfer office. Just as Benoit Godin has shown with the formation of the linear model (which misconstrues the coarse, unrelated categories of basic, applied, and development research as a chronological sequence), so also have the AUTM stats been used to justify a “lesser linear model” of research $ to invention disclosures, to patents filed and issued, to licenses and income. AUTM even arranges the stats this way to play to it, with research funding on the far left and income on the far right (and thereby deftly moving in one Excel sweep from socialism to capitalism, with an inflection point somewhere near “legal expenditures”).

Folks have come to believe that things work chronologically this way, and dedicate the management design of their operations to prove it out. It apparently could not possibly be the case, say, that research money comes in *because* there was an invention three years previously, or that a lot of patents are filed *because* there was a prior license….

AUTM stats reveal very little about the actual organization of IP management. AUTM doesn’t report the dollars spent on TTO operations, or on marketing of inventions. AUTM doesn’t have a place to report the number of lawsuits in progress against infringers and licensees, or break out settlements as a separate form of income, or the costs associated with disputes as distinct from filing patent applications. AUTM doesn’t ask for the total federal funding directly associated with reported inventions (even though the grant #s are required to be reported in patent applications and therefore the information is at hand). AUTM doesn’t ask for gift $ attributed to open release of findings or student placement associated with inventions reported in a lab. And yet we find in the scholarly literature folks dividing the research funding reported for a year by the number of disclosures received in that same year, as if these two numbers *are related*. Gosh, those academics do the darndest things!

If universities were looking at how they participate in innovation activities, they would cast further, and be less desirous of owning everything that appears implicated. This is the crucial lesson of externalities. Everything about innovation has to do with externalities. This is not generally true for IP without working at it. No wonder a fixation on IP leading to money can be expected, long term, to work against innovation. Or, more nearly, to suppress many forms of innovation driven by externalities in favor of a very narrow subset driven initially by a proprietary position. A similar critique can be made of “commercialization” of research through proprietary product development. If commercialization comes about much more often as a result of externalities arising in research relationships, then the last thing one would want is for universities to reduce their externality-generating capability under the argument that they need to manage their IP “better”–that is, own more, file more patents, manage these for income.

Posted in Metrics | Comments Off on Stats and the Darndest Things

Innovation or Invention?

Most universities do not consider a general approach to innovation, or as I would put it, deployment of research-involved new capability (DRINC). The patent-and-license piece is just that, a piece. It’s good to focus on how that particular action comes about, and to have some comparative figures (and for that, wouldn’t it be nice for AUTM to audit university self-reported figures, just to tease out whether the data put forth for comparison really is comparable?). But it would also be really helpful to have an accounting of how universities play a role in the broader life of innovation. For that, tacit knowledge, support services, even the purchase of commercial services in support of research all play a role. This goes beyond the usual economic development indicators of impact, which run toward expenditures in a region based on salaries and jobs, created by making assumptions about royalties on licenses and profits in industry.

If the claim is that universities anchor cluster development and clusters represent economic vitality, then it would make good management sense, quite apart from the politics of it, to have a way of accounting for all these activities, whether there’s an overt license from the university or not. Same with sponsored research. Why shouldn’t a consulting relationship be as important to the development and deployment of innovation as a patent license? The lurking question under this is: if innovation practice bridges private interests and university interests, and within that bridge some assets are owned by the university and other assets are not, and there are yet some important assets in the possession of university personnel over which policy makes potential claims, should management extend to claim these assets as well, or leave them well enough alone? Another way: what happens if policy were changed to put *more* assets into the hands of university personnel *without* making a claim for them? Isn’t it odd that almost all of the industry complaints about university research are caught up in issues of university ownership, not private ownership, of inventions?

I can’t imagine university research personnel expressing outrage at being given broader personal rights in what they have created. Maybe research personnel “get it” way better than the establishment administrators fixed on IP. Could it be that an IP policy (and associated conflict of interest policy) that was less invasive in marking out university “corporate” ownership in research would lead to stronger innovation activities? In that regard, much worse than personal conflict of interest in research would be a gross indifference to the potential impact of findings in the community in favor of getting ever more research dollars. If universities didn’t claim systematic ownership of IP, that responsibility would move from administrative procedures to research personnel, whose proposals routinely claim the potential benefits that will arise from conduct of the research. Claiming administrative responsibility for such claims would appear to feed into an institutional conflict of interest served up by making extramural research funding significant in academic promotion and tenure decisions. The practice claim then would not be to argue that technology transfer parameters should figure in promotion and tenure review, but rather, perhaps, that extramural research funding *should not*.

Posted in Metrics, Technology Transfer, Uncategorized | Comments Off on Innovation or Invention?

Bozonets and Innovation Practice

Maybe you already see where consideration of bozonets leads for university research asset management. Let’s take the draft bozonet framework and consider what may have happened with Bayh-Dole and university technology transfer.

Pre-BD, only a few universities operated “technology transfer” offices. After the law went into effect, there was a rapid expansion in forming offices, with many new people (including me) brought in to do the work of managing IP. We came in with experience in a range of areas, but few directly in research-based innovation management, often without fluency in the four or five skills needed to operate on all cylinders, and certainly not with the 60 subsystems for tech transfer in place and tested out. Patent attorneys knew how to get patents, and had an idea of how industry and independent inventors managed licensing for profit. Research administrators knew how to follow federal regulations, and here were more regulations, so they built out programs that paralleled their experience in sponsored research, framed by process, duty, and compliance. Investors and entrepreneurs knew about startups and innovation and exits. Marketing managers brought knowledge of how to position and sell products, even if research assets weren’t products, really. And scientists and engineers from academia and industry brought their technical training and research experience into the arena. A melting pot. Wonderful times, figuring things out.

My hypothesis is that around all this activity there also formed a bozonet, sort of as a necessary shock wave. There was too much to learn too quickly with too many expectations and demands from too many different directions. It took a lot of moxie to survive in this space.

The bozonet arose to deal with the pain, the unknowing, the new vocabulary, and mostly, survival. The early bozonet patched things, spun things, and often botched it, sometimes succeeded big time, all the while preserving appearances, adding volume, and finding its own views surfacing more and more frequently as the way things really were. What if that’s what has happened, that along with the good effort we also created an early bozonet? What if the problems we have now are due to the persistence, size, and effect of this bozonet, in competition with early voices that suggested other directions and methods, and new voices that propose changes that are out of the mainstream?

The idea that I am exploring here is that in this window of rapid development, when we didn’t know as much as we’d like about how research and innovation are (and aren’t) connected, some things were put in place as patches for what wasn’t understood, and these patches got repeated to a lot of people struggling to learn how to do their newfangled jobs in technology transfer. The patches became the reality for the second generation of new hires. (A generation in this field appears to be roughly 5 to 7 years.) Everyone wanted to look good, and most were optimistic. And it was hard to tell who was 80th percentile, who was 60th, and who was 10th but thought and acted as if they were 60th. A regular Lake Wobegon situation, where everyone was above average.

As the work developed, policy writers came in to organize things, and everyone compared what they had to whatever everyone else had and copied what sounded good, in a form of patchwriting, and academics came in to study how things worked in practice, reading the policies and using surveys to identify frequent responses from practitioners. How could policy writers work from direct experience and not get overwhelmed by the simple, engaging accounts propagated by the bozonet? How could academics working this way separate bozonet from expert practice? For the academics, it all appears to be expert practice, perhaps, and subject to discussion on that assumption, without the need for putting a fine point on it. That is, if incompetents can’t tell their own degree of incompetence, why should non-practicing policy writers or academics doing studies be any different?

I have to be careful here, because I am making a general point about how social networks in a developing area may operate to capture, transmute, and carry inadequate practice and then have the capability to hold those inadequacies against new understandings, building them into policy, into training, and into descriptions of “best practice”. I’m not aiming to criticize anyone (though I do reserve the right to beat on bozonets for disrupting really great opportunities to get things done, and I do intend to consider how one goes after bozonet artifacts and practices and *innovates* past or despite the bozonet). In other contexts, roles are reversed and my vulnerability and self-esteem, say, would be on the line–ask me at your peril to have a professional opinion about wines, or what I think about that nifty minor scale I couldn’t recognize if I were paid to.

Let’s say that university technology transfer grew so quickly that it formed its dominant statements about how to do things in a context in which many of its practitioners did not have 10,000 hours of experience in university-based research innovation. What then? Let’s say since then, new folks coming into the practice inherit, generally, this way of doing things, supported apparently by experts who gave PowerPoint lectures and offered advice that, regardless of the intent or even the trainers’ own experience, was repeatedly patched and simplified by the new recruits, who came away with something rather different, and necessarily dumber, but held as important and sophisticated because no one could know any better if they didn’t know much at all to start with. The responsibility for getting it right tended to be off-loaded by reference to “expert authorities” in the form of policies or to general restatements, or to heuristics that show tidy pictures of processes leading from the lab, to patent, to license, to product, replete with explanations for why the process doesn’t work (lack of faculty training, budget shortfalls, funding gaps, and lack of innovation capacity in industry, together with nasty problems negotiating timely contracts). The deal was, you are not paid to figure it out for yourself, you are paid to make the model work, and it’s someone else’s problem if that model, or the policies, need to be adjusted.

What if the popular versions of research innovation are not much at all the way it works (and more, not the way it necessarily has to work), but rather are the way folks string together an explanation of what they expect to do out of the experiences they have had? That is, folks constructed by necessity–and perhaps quite ineptly relative to what actually is–a collective social reality that stands in for the dynamics of research-based innovation. The social reality forms a grammar of survival for the bozonet, making the complex and unknown appear simple and organized, but in doing so, it also perpetuates the bozonet comfort zone against change, expertise, risk, or a future unlike the present.

This is like being unable to separate opinion from data. If what “happens” is seen with the labels that have been constructed for it, then how can one tell there is “something else” that’s any deeper, more complex, or different from that? Doesn’t the claim for “something else” just look like opinion, or spin, or just plain weird? It’s like watching a quartet play and not accepting that the players are also *counting time* even though no one can be seen doing that. In a bozonet, one reasons from the properties of words to what must be happening. In a bozonet, one assumes the properties one witnesses are the properties of what is happening, all the way down. In a bozonet, one assumes that logic follows history–you explain how to do the steps but have no account of why there are these steps. The world is simpler this way, but experience and expertise teach us again and again that it is not the world in which we do our best work. Professional competence demands something more than simple survival.

The bozonet forms out of this soup of memory and social hazards and external circumstances. It offloads responsibility to heuristics and ubiquity, and seeks out what everyone does to shape its practice. In the bozonet, a different part of the brain lights up. Not the part that tries to figure out risk on its own, but the part that figures out how to avoid risk by playing inside what looks safe. It patches complexities with simplicities, and these in turn can preclude gaining the experience one would need to work independently of heuristics and expert pronouncements. This is ideally suited to folks tasked with managing policy, checking up on compliance, auditing services, reviewing transactions as to form, or for policy exceptions, and training in new folks to operate or participate in the “system”.

A bozonet collectively thinks much more highly of itself than is justified by its collective competence, and serves in this way as a social protection for folks wanting to survive each day and meet the expectations of the boss, if not also the job, with a feeling of self-dignity and competence. The bozonet maintains memes rather than data, and deals in opinion rather than experience. That is all it has to work with, and as far as it is concerned, that is the way the world is. You could practice the wrong stuff and not even know it!

The bozonet is a marker for a social network that preserves an early imprinting of how an area works, and holds present and expert practice accountable to this early standard, simplified and patched, rationalized into something almost but not quite entirely unlike tea. As such, the bozonet is unable to recognize actual experience and insight relative to opinion, and therefore is unable to imagine a future other than one that looks remarkably like the way the bozonet sees the here and now, which is largely by way of what other people in influential positions say it is, and those folks, largely, repeat what they have heard because no one really has the time these days for 10,000 hours of work before having a decent inclination of what ought to be done to cultivate research assets in support of innovation and community well being.

In assessing the effect of Bayh-Dole and the rapid rise of “technology transfer” in universities, one might say, a bozonet formed, and that this bozonet now is running in parallel with good practice, and competes with good practice, with diversification of practice, and with a deeper understanding of practice. It does this through its sheer size, ubiquity of artifacts, its choice of simple (non-)explanations to serve as patches, and its own inability to see and describe its own competence relative to the challenges and opportunities of the work.

In considering national innovation policy, or assessing the impact of Bayh-Dole, my mind turns toward the bozonet as a likely major player–a raft of expectations and social connections that can’t do anything better than hold things in place, concur in a general need to improve that involves mostly being a better bozonet, and use simplifying patch heuristics repeated over time to stand in for experience. It may be that many folks who actually practice in technology transfer offices in universities are not part of the presently extant bozonet. It may be that a bozonet is a minority of practice, something of a well-placed but incompetent oligarchy, but it persists as the mirror of practice, perpetuated in the academic literature on research innovation, accepted by management that hasn’t cared to poke at it. What if it is a bozonet that resists efforts to innovate in innovation management, and has prevented Bayh-Dole from inspiring the anticipated outcomes of research? If so, changing the law wouldn’t do a lot of good, would it?

Posted in Bozonet, History, Technology Transfer | 1 Comment

Bozonet: A draft

Why is innovation in innovation management so difficult? One explanation that has been developing in my mind involves what I call the “bozonet.” A bozonet is a largely inexpert social network incapable of distinguishing expertise from non-expertise. A bozonet is unsure of the future as a consequence of lacking experience with which to anticipate that future. A bozonet represents itself as standard practice based on the ubiquity of like-seeming practice, and is ready to claim positions of authority and prestige though ill-suited for them, using organized appearances and plausible deniability to shift attention from what doesn’t work. A bozonet is keenly sensitive to social nuance, defends its own dignity, attacks its critics, depends on but does not necessarily acknowledge capable folks doing actual work, and works to prevent change that would fall outside its established comfort zone. When faced with making changes, a bozonet tends to make awful ones, but doesn’t know it. If a bozonet wins out, it becomes a norm, the way things just are. (One might pause here to think what this does to the idea of progress.)

We humans are susceptible to many social and cognitive faults, and we don’t leave them at the door when we go to work. A bozonet is something of a natural social form that frequents organizations, communities, and other arenas in which social networks form and manage civic life. I’ll try here to point to some reasons why a bozonet might form, and suggest that there may be good science that can help explain what is going on.

This is a big topic and I intend here only to sketch it out. World Wide Words has a nice discussion of the origins and use of “bozo.”

A bozonet, then, might be taken to be a group of such folks, interacting in some way, to achieve an effect beyond that of any single participant, without realizing they are being foolish. If people participate in social networks as one of those things we just do, then it also stands to reason that some of these networks include folks who, with regard to particular areas of expertise and ability, know less and can do less than others. But it’s very human, and I’d say it’s something we all participate in, one way or another.

But there is more to it, and though some of this is intended in jest, there’s also a bit that’s quite serious.

Consider this article, “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments”, by Justin Kruger and David Dunning. Across a number of tests, people clearly in the bottom 10th percentile thought themselves, on average, to be at the 60th percentile, while people who were very capable were also very capable of judging their skills accurately. This stands to reason. If you don’t know something well, it’s hard to know how much or how little you do know. You may as well assume there’s not all that much to know, that what there is can be learned pretty easily, and that’s that. In some ways, it’s a sign of being a really good judge of one’s own skills to be willing to be vulnerable and not appear better than one is. When it comes to professional conduct, however, where folks are relying on you to be as good as you claim, it’s another thing altogether.

Not everyone is willing to appear as they are, given the chance for some spin. Think of the bits of misrepresentation that are reported to go on with resumes, and that perhaps go on much more frequently in the biography sections of social media sites.

Malcolm Gladwell, in Outliers (here’s a summary with comment), makes the point that it seems to take somewhere around 10,000 hours of contact time to gain mastery of an area of skill. In workplace terms, that’s like five years of practice. Gladwell is out to map explanations of success arising from odd combinations of circumstances, in an effort to challenge the genius and power versions that tend to circulate. What results, perhaps, is the idea that luck has structure, and so, often, does success. To develop skill in an area, it appears to take more than memorizing buzz words and knowing who’s who (which I call buzzhorpal). There has to be experience–real contact time with the world–underlying the work.

So far, so good. Let’s add a couple more things from neuroscience. One has to do with recent discoveries involving the relationship between our personal pasts and our ability to imagine our personal futures. See work by Daniel L. Schacter and Donna Rose Addis (Harvard), for instance. What these studies are showing is that our declarative memory (memory of our past, and memory of facts) is tied up with the same neural systems that manage our imagination of the future. See The Memory Lab for more information.

One might say we imagine our pasts and remember our futures. That is, we construct from pieces of our experience the things that we frame up as personal intentions, or simulations, or possible alternatives, in our futures. In one report, a person who had suffered a brain injury was able to go to work and do his job perfectly well, but was unable to recall having done so in the past. Furthermore, he was unable to say what he was going to do the next day. The parts of the brain that handle the personal memories also handle a bunch of the future intending. For this, it is worth distinguishing what we are able to intend to do, as distinct from mocking up something that would look good in a PowerPoint slide deck, full of buzzhorpal.

Schacter and Addis, in “Constructive memory: The ghosts of past and future,” an essay published in Nature (paywalled, sorry), suggest that a memory function consisting of “piecing together bits of the past may be better suited to simulating future events than one that is a store of perfect records.” For our purposes in explaining the bozonet, it’s enough to note that if one doesn’t have sufficient experience, then substituting factoids one has picked up may not be adequate–but it’s possible that this won’t matter if those you are presenting to are in the same condition, since they won’t be able to tell the difference. In fact, sitting in meetings exchanging opinions may become a primary experience. If you don’t have the experience–directly, or vicariously through training, books, mentoring, practice–then how can you imagine any personal future you are willing to attempt?

Another aspect of memory and learning goes with this. Schacter, in Seven Sins of Memory (here’s a review), identifies a number of ways that we make mistakes with our memory. We all have these problems–forgetfulness, absent-mindedness, blocking stuff out, or being unable to forget something worth forgetting. One of these “sins” is particularly relevant here, for bozonet formation. Bias involves tending to make memories conform with current conditions. What we recall tends toward fitting in with our surroundings. Perhaps the Stockholm syndrome is somehow related, in which kidnapping victims tend to take on the cause of the kidnappers in an effort to win their release. If one’s professional standing depends on knowing something, and all you have is a heuristic model of how innovation works, such as the Linear Model, or the “technology transfer process”, then perhaps one feels a dependency on that model, because that’s all one has. If one hears repeatedly how technology transfer is supposed to work, then the texture of one’s own memories may tend toward conforming with this repetition. In learning, the right repetition matters.

I’m inclined to include under bias a desire for self-consistency of one’s expertise. Where we encounter complex stuff that doesn’t fit in, we tend to have difficulty accommodating it all at once, and substitute simplifying patches for things we don’t understand. The idea is, the patches make things consistent and hold the other things in place until one has the opportunity to work through the complexity in detail and figure it all out. But what if one never gets around to doing that? What if the patches sound good–because that’s what a simplifying patch does–so that an explanation with the patches sounds not only rational but even attractive. Simplicity over complexity. Works for scientific explanations (sometimes), but when a simplistic statement substitutes for the depth of what’s there, then it also may be a barrier to competence. How can one experience what one has come to believe doesn’t exist? Sadly, it appears to be incredibly easy to be incompetent and not know it, when it comes to complex areas of work. Our minds are set up to adapt for surviving, not necessarily for getting things right all the time.

Misremembering in learning environments also figures here. I worked for a time with an expert instructor in violin. I wasn’t learning violin–I’m a hopeless untaught guitar player–but was learning a bit about practicing. His point was that if a student was any good at an instrument, the teacher couldn’t afford to let the student practice alone. Making mistakes in practice would be a disaster, much worse than making a mistake in performance. Later, I realized why music teachers didn’t necessarily go to their students’ performances–their work was in the practicing not the play. All this makes one wonder about the role of “homework” in grade school–maybe that’s all bass ackwards, too, if the practicing is done in private and got wrong.

Just to make sure things are good and piled up on the stack, we need to include work done by neuroeconomist Greg Berns. I’m thinking especially of recent work that indicates that the presence of experts giving advice significantly changes the brain’s response to assessments of risk. Here’s a summary. Says Berns, “This study indicates that the brain relinquishes responsibility when a trusted authority provides expertise. The problem with this tendency is that it can work to a person’s detriment if the trusted source turns out to be incompetent or corrupt.” There’s a painful thought. If a bozonet sets up as expert, what hope is there for non-experts grappling with a ton of buzzhorpal?

Perhaps this is enough for now.

Posted in Bozonet, History, Technology Transfer | 2 Comments

Take Two Metrics and Call Me…

To evaluate a university’s commitment to supporting national innovation goals, here are two metrics that are not generally reported, but ought to be.

1) What is the university’s budget for innovation? In total dollars, and as a % of its extramural research.

2) What is the contribution from Bayh-Dole licensing to the budget for innovation? In total dollars, and as a % of subject invention licensing income (SILI).

These metrics are not particularly useful as one-year snapshots (because both income and allocations can vary), but over five and ten year periods, it should be clear whether a university is investing in innovation or just giving big lip to it to get more state and federal funding for research.
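As a sketch of how the two metrics above might be computed over a multi-year window, here is a minimal Python example. Only the definitions of the metrics come from the post; the function names and all dollar figures are hypothetical, invented purely for illustration.

```python
def innovation_budget_share(innovation_budget, extramural_research):
    """Metric 1: innovation budget as a fraction of extramural research,
    summed over the window rather than taken as a one-year snapshot."""
    return sum(innovation_budget) / sum(extramural_research)

def sili_contribution_share(sili_to_innovation, sili_total):
    """Metric 2: licensing income directed to the innovation budget
    as a fraction of all subject invention licensing income (SILI)."""
    return sum(sili_to_innovation) / sum(sili_total)

# Five hypothetical fiscal years, in $M.
budget = [2.0, 2.5, 1.8, 3.0, 2.7]
research = [300, 320, 310, 350, 370]
sili_in = [0.5, 0.0, 1.2, 0.3, 0.6]
sili_all = [4.0, 3.5, 6.0, 2.8, 5.1]

print(f"{innovation_budget_share(budget, research):.2%}")
print(f"{sili_contribution_share(sili_in, sili_all):.2%}")
```

Summing over the window is the point: a single year can be distorted by a one-time payment or allocation, while a five- or ten-year share shows whether the commitment is real.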

Posted in Metrics, Technology Transfer | Comments Off on Take Two Metrics and Call Me…

Giving Lip

With regard to giving lip to university technology transfer work, perhaps we really do live in a “who cares?” administrative environment. You know, as in it’s all petty idealism to actually think that public statements should reflect what is happening rather than put a spin on what everyone expects is happening, or what we sincerely want to happen, or what we are hoping, hoping, hoping will accidentally happen even if we are clueless and inept about it. In this world, everyone expects the spin. It’s not a misrepresentation, just an optimistic overstatement that puts the best foot forward, to inspire people to join in and create such a world as we envision. We are too important to fail, too impressive to be doubted, too strong to be resisted. The world will conform to our desires. Public overconfidence is a virtue. At least, so the spin goes.

There are two directions one can take this. One is the Frankfurt direction–On Bullshit is a fine treatment of the theme. From this direction, folks aren’t lying. It’s worse than that. It’s impossible to lie if one doesn’t have a good idea of the truth, or a regard for the truth. Lying is actually a step up. With lip, that there might even be a truth is so effaced that the pragmatics of winning popular support and looking good are much more compelling.

In bozospeak, “nobody really knows anything here, so it’s best to put on a good face (or, be organized, or copy something important).” That is just like a bozonet, in a moment of what it takes to be honesty, to think the default is that no one knows much at all. Of course, it is true that the bozonet couldn’t tell expert practice from novice incompetence, so it is no wonder that a bozonet thinks it’s all a matter of opinions, and dedicates itself to making sure its opinions win out–that, after all, is its definition of survival.

More, in such a big spin society, it’s just not convenient to find anything out that could be thought of as truth–on the street, at least, through experience. Epiphany, like conversion, is not a virtue. A learning organization would be one that is inconsistent, confusing, unstable, and probably headed for a tragic end. Much better to keep things constant.

This leads to the second direction, one that we might ascribe to Todorov: that in doing history there are no primitive narratives. Todorov’s proposition is that there is no single, special “true” account of actions and intentions that is built only on the facts and nothing else, once all the fictions, inaccuracies, patches, and spins are removed. Perhaps it is turtles all the way down. And this may be quite the challenge to the idea of “truth” philosophically, and yet we work with an idea of truth that doesn’t go away. We can hold a truth, and still have some humility about its expression, expecting that expression to also carry artifice that we will be unsuccessful in stripping away.

But artifice isn’t spin, it’s scaffold. This, by the way, is also a problem for science, as we learn to see things with theory-laden expectations. Despite this bit of philosophy, the expert manager wants to know how things are going in the mechanics of it, not merely in the vocabulary of the desired outcomes. One finds this in the performing arts–music–and in games–football. The descriptions of outcomes used by coaches are shaped by a knowledge of how things actually work.

In big spin society, leadership has no time for such details. Details are complicated and ambiguous and hard to follow. They may be layer on layer of turtles. They may require, like, 10,000 hours of experience to make sense of. Better to level the playing field and work from PowerPoint summaries than to have someone out there on the ground who knows anything. In a lip and spin management world, it is much better to have battling opinions over unknowns and to win by prestige and pundit rhetoric and a show of sincerity than by reasoning anything through on street facts. In such a world, performance is an accident and expertise is the ability to position it as a feature or a tool or a success. There is logic to this. It’s not irrational.

There are metrics and then there are metrics. It’s not so much that we measure as having the smarts to know what to measure, and why. There’s a big difference between producing metrics that mean nothing or metrics to look good, and producing metrics that evidence what is happening, for better or worse. The latter, perhaps, is way too hard for most universities to contemplate. Wouldn’t you expect more from universities, where the search for knowledge is given such importance? Ah, yes, I see, that only applies to published scholarship, not to administration. That’s the challenge–one has to change the norms by which administration understands how things work rather than how appearances are maintained. As Kerr called it, a “politics of caution.” One might add, with an inclination for lip. That’s the root challenge for the university research enterprise.

Posted in Metrics, Technology Transfer | Comments Off on Giving Lip

Follow the Money Tensions

Bayh-Dole sets up three key tensions in how licensing money is allocated.

costs vs research (invest or slush?)
inventors vs administration (how much is shared?)
inventors vs other costs (when is it shared?)

Universities mobilize policy to deal with these tensions. These are worth considering.

B-D distinguishes administrative expenses from “remaining income”, which is to be used for “scientific research or education”. The first tension then is between operating your tech transfer program and putting money into research (or education–but who puts B-D licensing income into education?). What does it take to operate a tech transfer program, given that its assets mature over the course of 20 years?

To put the issue bluntly, do you slumlord tech transfer to enhance your research lifestyle? Or do you re-invest in tech transfer and build a robust program, even if doing so doesn’t immediately put more money into the research pocket? This is the problem for administrators in universities, and they usually solve this in favor of research, setting up one of the most effective critiques of tech transfer–and one of the worst reasoned in terms of national policy.

(Note, if the patent licensing program is entirely outside the university, and not controlled by the university, then this tension is very different, because the receiving organization has to make the choice between its costs and declaring a dividend to return to the university for disposition in research and education. But let’s stay on with tech transfer programs within universities.)

If you look at things this way, tech transfer is just another overhead on research funding. That’s wrong-headed, but it’s a common view.

Buried within administrative expenses, however, are two other deep tensions that push on the first. In B-D, administrative expenses include sharing licensing income with the inventors. This creates a tension between the inventors’ share and research, on the one hand, and inventors’ share and other costs of technology transfer on the other. Inventors may see tech transfer costs as overhead on their share, and worse, university claims on their share as pure take-away. In this view, all income from a patent license is pure profit, except for the costs of the managing agent, which could be significantly reduced (so the reasoning goes) if the agent were smarter, more efficient, better positioned, and more prestigious.

University inventors are often faculty, so this allocation decision point often pits faculty against administration. The faculty argue that their share should be robust and tend to lay out the issue as another instance of faculty rights, pitting talent against borg, personal craft against corporate thinking. It’s an understandable set of positions: the private inventor up against the organization that serves as host and the interests of all those hosted non-inventors.

Universities typically manage this tension by setting up a “royalty sharing schedule” within their patent policies. The reasoning is to prevent disputes later if anything does get valuable. The downside is that these sharing policies are so massively damaging in so many ways that I’ll have to leave a discussion of them for another time.
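The waterfall such a schedule sets up can be sketched in a few lines. Everything here is hypothetical–the one-quarter inventor share, the dollar figures, and the ordering (costs deducted before the inventors’ share, which is itself just one answer to the “when is it shared?” tension above):

```python
# Hypothetical Bayh-Dole licensing income waterfall. Rates and figures
# are invented for illustration; actual shares are set by each
# university's royalty sharing schedule.

def allocate(gross_income, patent_costs, inventor_share_rate=0.25):
    """Split a year's licensing income along Bayh-Dole lines: expenses
    incidental to administering subject inventions come out, inventors
    get a share, and the balance goes to scientific research or
    education. This sketch deducts costs first, which is one (contested)
    way to resolve the tension between inventors and other costs."""
    after_costs = max(gross_income - patent_costs, 0.0)
    inventor_share = after_costs * inventor_share_rate
    remaining_for_research = after_costs - inventor_share
    return {
        "patent_costs": min(patent_costs, gross_income),
        "inventor_share": inventor_share,
        "remaining_for_research": remaining_for_research,
    }

split = allocate(gross_income=100_000, patent_costs=25_000)
# 75,000 after costs; 18,750 to inventors; 56,250 to research/education
```

Note how the ordering matters: share before costs and the inventors do better while the office eats its expenses out of the university’s remainder; costs before share and the office’s overhead comes partly out of the inventors’ pockets–which is exactly why inventors come to see tech transfer costs as overhead on their share.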

Again, being coarse, the inventor tension comes down to who should get more, the inventors or the university? There tends to be little discussion of how much funding is needed to advance the national policy goals of B-D, or even what is needed to operate a robust technology transfer (or call it “public innovation”) effort.

It may be that how universities have handled these three tensions lies at the heart of why university tech transfer is so uninteresting. If so, then the problem is not in the drafting of B-D, but in universities not being up to the difficult decisions that transform chasing research funding into a national program of innovation. Few universities have stepped up to that thinking, and the ones that do often start with a focus on regional economic development.

There is more to say on this topic: The variations on how universities handle licensing income. The lack of any real data on the costs of running a robust innovation program. How to position a tech transfer program so it is not overhead on research or a take-away from inventors–but rather the centerpiece of the public reason for funding research? And especially, how to position it so that it is not crassly addicted to “commercialization” as a means for making money–and not just that, dabbed over with fluffy language about public benefit when the equation everyone knows is still “when we make money, that is public benefit”?

Beyond these, there is the question of just how universities are using their B-D licensing income, after paying (at least some of the) expenses incidental to managing subject inventions. What could possibly be so much more important in university research and education that it would lead administrators to push back on inventors’ shares and to all but ignore the full costs of participating in building a national innovation system? Something pretty darned important, I would think! So does it bother you how few annual reports there are that identify how a university’s remaining B-D licensing income has been used? It should if you think we could have a way better national approach to research leading to outputs that have societal significance.

Posted in Technology Transfer | Comments Off on Follow the Money Tensions

Linear Model

There is a lot of talk about the limitations of the “linear model” of innovation. Here’s a good paper on the topic by Benoit Godin. “It is rather a theoretical construction of industrialists, consultants and business schools, seconded by economists.” And I would add, embedded in all sorts of useless ways in US federal contracting such as the FARs and export control law.

Posted in Technology Transfer | Comments Off on Linear Model

Technology Lists

A lot of effort appears to be going into creating “technology available for licensing” lists as commercial services. This is pitched as a way to “market” university “technologies”. The competition is to create the list with the best features. The come-on to technology transfer offices is that using such a list is a best practice, and implicitly offices that don’t use a commercial list service are doing a poorer job of “marketing” their technologies. Behind all of this is the idea that the role of a technology transfer office is to get invention disclosures, patent some portion of them, and “market” them for commercial licensing. What could be simpler? I call this the “little linear model” to distinguish it from the “big linear model”.

The big model says: basic research leads to applied research leads to development and commercial products. Benoit Godin has shown that the big model arises from economists’ misapplication of lists of kinds of research in early NSF reports. Beyond this, it’s just a claim, albeit one that is embedded in things like the Federal Acquisition Regulations. The real problem is that people who believe the claim really try to make the big linear model “work”. What’s of course totally odd is that the federal government has no programs that I can think of that actually would apply the big linear model. There is no set of basic research centers, for instance, that are partnered with applied research centers that in turn are partnered with development shops to punch up prototype commercial products. There are calls for collaboration, and networks that partner various groups to compete for federal funding, and consortia that require industry participation to retain funding, but nothing that would say out and out, we are working the big linear model.

The small linear model, by contrast, is just a pip of the big one, trying to use patent rights to leap from basic to commercial and get private investment to back-fill the costs of the applied workup and product development. The small linear model says that life begins at invention. The technology transfer starting point is the invention disclosure, and the big early pitch into research labs is to get them to disclose. This is called “training”. It’s really immersion in office propaganda. Done really well, folks get excited in a way that makes one smile. Done poorly, everyone sees through it but is stuck at the suggestion that to do otherwise will lead to ethics violations, legal fees, and bureaucratic overload. At disclosure, the technology transfer office swings into action, assessing the commercial and patent potential of an invention and making a decision (whether to manage or not). Mostly, the decision is to manage (tech transfer offices can’t let go), and in maybe one half to two thirds of cases, at least a provisional patent application is filed. The office then turns to the next challenge: find a commercialization partner. That’s where the commercial list folks come in with their “value-added, high visibility, this is where your commercialization partners look for really good technology” pitch. Nothing could be further from the truth. O.K., well maybe that alien squirrels drive tractors in Iowa cornfields on days named after Norse gods would be further.

Really, though, the small linear model isn’t even so cool that it could arise from the misconceptions of economists. At best it’s a replay, with hugely consequential lapses, of a more sophisticated approach to research-originated patents. Nearly all university technology transfer offices offer some version of the small linear model on their web sites as a description of the “technology transfer” process. Only such widespread repetition could preserve the dignity of it. One hopes it’s just there because someone demanded it be there, and that in practice folks do a lot of other things they aren’t allowed to broadcast.

What is really going on in the small linear model at the point that a technology transfer office has filed a provisional? First, for all but the best funded, the primary concern is, how will we ever recover the patenting costs? This will become much more central if the application moves from provisional to full utility, when the cost goes from sub-$1K to $5K–$10K or more. For this, it appears that the best approach is to get out there and find a commercialization partner–at least one willing to cover the patent costs (and they pay patent costs only when they get to think of it as their own–exclusively licensed).
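The cost-recovery worry can be put in rough numbers. Only the provisional and utility ballparks come from the discussion above; the prosecution and maintenance figures are invented here to show how the recovery target keeps ratcheting up as a case moves through the system:

```python
# Rough, hypothetical cost escalation for a single US patent case.
# Each stage adds to the amount a license must return before the
# office can say it has "recovered" its patenting costs.

stages = [
    ("provisional filing", 800),           # sub-$1K, per the text
    ("utility filing", 8_000),             # $5K-$10K range, per the text
    ("prosecution (office actions)", 6_000),  # invented downstream figure
    ("issue and maintenance fees", 4_000),    # invented downstream figure
]

recovery_target = 0
for stage, cost in stages:
    recovery_target += cost
    print(f"{stage:30s} +${cost:>6,}  must now recover ${recovery_target:,}")
```

Run as a sketch, the target climbs from $800 to nearly $19K on one case–before any marketing, negotiation, or dispute costs. Multiply across a portfolio where most patents never license, and the pressure to find an exclusive licensee who will carry the costs becomes obvious.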

In the days before provisional patent applications, many technology transfer offices did not take title until they had tried to find a commercialization partner. They would prepare a non-confidential summary to make broadly available, but use a secrecy agreement to show interested folks what they were holding, pending a formal patent application (and its costs). The prospective partner reviewed not a patent application so much as the technical records that documented the invention–the disclosure, pre-publication manuscripts, data, the research team, and the like. You didn’t file unless you had a deal in hand. If there wasn’t commercial interest in 90 days or so, things were pretty much up. If the research team published, there was a year clock on filing, and foreign rights were lost immediately in most jurisdictions. There was no point, really, in a list of technologies because folks weren’t filing on everything. What you had was a lot of leftovers, and relatively few patents that had got through the process but whose deals had fallen through, or that were licensed non-exclusively. Where schools had sizable licensing income, they could afford to file “speculative” patent applications, without an immediate commercial partner. Provisional filings made this even easier. Hence, posting stuff in lists starts to make some sense, if you have a backlog of stuff, which most offices have.

Aligned with wanting to market stuff as soon as possible to recover patent costs is a concern that if a license, especially an exclusive license, is signed, other companies will protest that they did not have an opportunity to negotiate. This is a real fear, especially if the license is to a company with university connections–started by the inventors, say, or one that is a big sponsor of research at the university. The office needs some form of public notice to show that others didn’t come forward, even when the information was available. Worse, even if the licensee does not have tight university connections, there is the concern that without public notice the company doing the complaining would be one that does, and the technology transfer office would be accused of pissing off a major sponsor, major donor, or important faculty and administrators who wanted their piece of the action. Finally, and worst of all, for any company that is interested, an office always hopes there might be another somewhere more interested, so there can be some bidding to show that the consideration for the license was at the best market rates available. These are all the downsides of the upside–that a license might really happen. But really, this is rare stuff.

There is a fun advantage to public notice, as well. If a company knows about your patent, and infringes it anyway, well that’s willful infringement and treble damages. Any way one can get a company to bite in a documented way puts the ratchet into what can happen. Thus a public list, especially if it’s subscribed to by companies, can be a great way of upping the ante for a failure to license. Given that most technology transfer offices are predisposed to exclusive licenses, that ante remains for any lucky company that comes away with first rights to enforce plus sublicensing rights. In this way, a list helps “add value” to the patent right, at least, by raising litigation damages. Of course, doing so also signals companies that litigation is an option for universities, and one in the full hug of the small linear model will say something like, “if we don’t enforce our patent rights, they won’t be worth anything”. Which of course isn’t quite true, but sure sounds good.

A second concern arises from the cold realization that nearly all the patents in the portfolio are unlicensed, or licensed into moribund situations in which not much at all can be said to be happening. Very few patents create a significant revenue stream, regardless of the “marketing”. The second concern then is how to persuade the inventors that the licensing problem lies anywhere other than with the technology transfer office’s efforts–especially, that there wasn’t any commercial interest (for any of a number of reasons), though the office did everything it could (within the limits of best practices). To do this, the office needs to make a show of marketing the invention. That’s where the list folks see their point of entry. By presenting as “best practice”, the list folks shift some of the burden of “marketing” from the office to the list. It’s okay. There’s nothing particularly wrong with this, once one has accepted the small linear model.

Putting “technologies” on a list is a little monument to 10% hope and 90% CYA. Next to nothing ever happens off a tech available list. One could offer a free lottery ticket with every office visitor and get more action–and at a lower overall cost. These lists have much more overhead than may be obvious. It takes work to prepare a “non-confidential summary”. For really good ones, some offices have budgeted as much as $2K per. It takes even more work to move from an NCS to the kind of markup needed for lists–fields, keywords, importing, uploading. And then there is the maintenance of the information–patent applications filed, status, new related disclosures, publications, possible negotiations, and the like. It’s very easy for a listing to go stale in a matter of months, and if it doesn’t, then it was likely stale to start with, since nothing at all was happening worth updating. Oddly, this leads to the unexpected outcome that the most interesting technologies in a list are those that go stale in a few months, because the office has too much load to keep its listings updated and shifts its priorities to other things, like working up new related developments. But no list will mark stuff as interestingly stale, so it really isn’t all that helpful.

The point is, these lists are not like an MLS for real estate. The intangible assets reflected in a list should be changing, should be dynamic across the state of the technology capabilities and the state of the intellectual property claims that develop alongside. Thus, as a marketing instrument, a technology available list is remarkably prone to being uninteresting to active entrepreneurs and industry technologists compared to maintaining direct connections with labs having a reputation for leadership in an area of interest. For technology transfer offices, participating in a list means adopting the premises of the list–especially the small linear model–and devoting the resources to keeping the list updated. In essence the office services the list even though its primary business arises elsewhere, adding to its own overhead, but making the justification that the list does 90% of the good things one needs: give public notice, make a show of marketing, use an external best practice, and demonstrate that the lack of commercial interest wasn’t the fault of the office–it was technology not ready, inventor expectations too high, a funding gap, or a lack of innovation capacity in industry. Yeah, those things.

Where to take this? Should a technology transfer office not list “technologies”? Not give notice? Not make an effort? That’s a false dichotomy. It’s not a matter of listing vs. hiding/indifference. That’s just the marketing angle for the list managers. Are there better ways to “market” technologies? For instance, by including contact information and intention in the acknowledgment caption of published articles? But even going through this still participates in the small linear model–that somehow the first thing to do is advertising, that that’s what marketing is. We know the four P’s of marketing suggest at least a much broader scope than just the promotion P–product (what it is you really have–usually not a “technology” but a bit of something) and position (or placement–the conditions under which the product surfaces and for whom) matter more than promotion and price when it comes to research-originated stuff (whether early stage or otherwise).

More so, one can ask whether getting beyond the small linear model might be healthy for university technology transfer offices. It may be that it’s entirely inappropriate for a research-based technology transfer effort to begin with “invention”. And it may be that for most research, the conditions that lead to the pursuit of any particular inquiry are way more important for the assets that develop along the way–data, software, insights, capabilities, collaborations, testbeds, reports, and yeah, inventions–than is the articulation of any invention stripped of these other matters. Putting an invention out in a list all but yelps, “Hey, look, we stripped this thing out of a research lab and now its primary value is to scare the bejesus out of anyone who imagines that we’re desperate and will license to the first speculator who shows up, with diligence obligations to make us rich and so might just have to come after you as a contractual obligation, we can’t really help what they might do, it will be a commercial decision you know, so the best thing to do is to step up and take a license and that way we’re all happy, no one gets hurt.”

The small linear model makes tremendous assumptions about research environments. It may be that these assumptions are gratified only often enough not to look ridiculous. That is, research ideas and data and tools flow into and around research endeavors. The thing that arises is connected to these, by people, by attribution, by sharing technology, by competing against technology and intentions. The patent strips all this, creates an exclusionary threat, and establishes a monetizing interest that is direct (a license one pays for) rather than indirect (an interaction that advances other interests, such as further research or placement of students or recognition).

We may say then, that it is grossly unfair to criticize a technology transfer office for not “marketing” a “technology” hard enough. It may be that the technology transfer office has no business “marketing” anything, if that means advertising it. Further, whatever marketing that might happen might better wait until a “market” is developing for a given technology–and that the technology transfer office could better turn its efforts toward what it takes to create such a market. It may be that from a positioning perspective, a technology could be much better seen arising through publication than from the technology transfer office. Or at the lab’s web site, or through a professional organization such as IEEE or ACM, or a mention in Wired or Business Week. It may be that a research-originated invention is best never becoming the lead asset in a relationship–that what should be marketed is talent and commitment and capability, and not a patent right that goes with.

All these “may bees” don’t impress the small linear model crowd. They have no other way of viewing what they do, and do not feel the need to spend time getting at the distinctive role asked of patents under federal research policy, or how different forms of inquiry might lead to different management of intangible assets, or how one form of intangible asset (invention) might benefit from, say, lack of ownership claims more so than from establishment of those (patent) claims. The may bees are realities, however. The biggest problem for the small linear model is that it is so limited in its application. The small linear model has going for it simplicity, the seemingly rational idea that innovation logic should follow chronology (we start with getting out to look for inventions, and then get disclosures, and then assess for potential, and then file, and then license, and then share royalties–that’s how innovation happens!). Technology listing services not only play to this, they have a vested interest in preserving it. The moment the model flies apart, they are toast.

What to do? If you are a university inventor, then don’t buy into the small linear model. (This does not mean, don’t follow policy.) It will get you mostly bitterness, and it’s an empty pleasure to be proven right on a negative. If you are a university administrator, focus on those things that advance research toward public use–even if those uses are research uses by others. Remember that federal policy is “to use the patent system to promote the utilization of federally supported inventions”. It doesn’t say, monetize or commercialize or productize–though these things are allowed. The focus is use, so the benefits of that use are available to the public. This is a goal that the small linear model loses sight of quickly. This is also a goal towards which the small linear model has virtually nothing to offer. At best, it argues that by seeking commercialization partners and generating money, it advances the public mission of the university. The technology list folks say–“we can help here. We can provide a marketplace for sellers and buyers. We can deepen the commitment to the small linear model. We can make a living in this space.”

There is no indication that improvements in technology lists will result in major improvements in the performance of the small linear model. There are ways of creating social spaces in which technology advances and opportunities are created. This does happen, and broadcast of lists can play a role. But for these to happen, people have to step back from the small linear model and get a sense of perspective. That’s a great challenge.

Posted in Technology Transfer | Comments Off on Technology Lists

Co-marketing IP

Why do so few universities coordinate their patent portfolios? And why is it such news when they do? The demands of “marketing IP” in the patent broker model are such that it’s a huge drain of energy and apparent loss of value to consider co-marketing. Even dealing with the overhead of an inter-institutional agreement where there’s co-invention takes a lot of energy.

Co-marketing comes in various forms–bundling, co-listing, offices informally helping each other out, and formal plans that allow representation by one institution for another. Some such “innovation networks” have been tried in Canada, for instance, where smaller university offices have banded together to try to find complementary skills, or smaller offices have formed a hub with a larger office that then “handles” their technologies. At the core, however, these co-marketing efforts still reflect a fundamental “marketing” approach to IP. That is, the purported aim is to identify a commercialization partner willing to invest sufficient funds to put a royalty-bearing product on the market, or to generate sufficient activity that a larger concern is willing to buy out a start-up.

Posted in Commons, Technology Transfer | 1 Comment