Is This as Good as It Gets?

One of the biggest problems with university technology transfer is that it cannot manage deliberative rhetoric. Everything is criticism, and the criticism is construed as an attack on the idea of technology transfer, Bayh-Dole, and/or the competency of those working in the field.

Let’s be clear. I’ve been critical of AUTM. As an organization, it has taken lobbying and legal positions that are not well founded, it has done so without consulting its membership, and it left no room for minority positions. This is true of the discussion of agency and in the matter of Stanford v. Roche. I truly *don’t care* who “wins” that case. I *do care* that AUTM takes a stand antagonistic to faculty inventors and promotes a zany reading of Bayh-Dole that works toward treating basic research inventions as commodities.

I’m critical of the recent NRC report on technology transfer because it does not grapple with issues that face technology transfer and instead offers up lines that might sound good but aren’t coherent, as if it all really doesn’t much matter as long as it’s something folks want to believe.

I have challenged the idea that the Carolina Express start up license is the grail of university licensing. Nothing against those who drafted it. Good job. It appears, however, to be just another nondescript biotech start up license that picks and chooses among many variables. For the life of me I can’t see why anyone would think it was a grail, or why it would be held up as a standard to be adopted. At best, a university might say, if we offer one start up a deal, we will offer any future start ups that same deal if they want it. And even then, one might wonder why a university thinks a deal is simply in the terms on paper and not the relationship of the paper to the university context in which the deal presents, and to the company operating model in which the deal lands.

All this might lead one to believe I’m grumpy. Far from it. I’m very engaged in technology transfer projects and supportive of a wide range of efforts, from conventional patent licensing for commercial product to open innovation to social ventures. What I don’t see in technology transfer is any kind of deliberative public rhetoric. I don’t see diversification. I don’t see local strategies. Instead I see a defensive paranoia, a desire for superficial emulation of “success”, a code of silence about performance data, and an academic community that works with surveys and assumptions to make policy recommendations without having much of a grounding in practice, whether that practice is what people do now, or what people could do if they were directed to do it.

None of this bodes well for research innovation practice. Not all criticism is threat. Not all public discussion is led by ignorant folks who just don’t get it. Certainly that public discussion isn’t advanced by academics doing surveys or pawing through others’ survey data for what they take to be economic gems. I don’t believe a national innovation system ends up appearing nice and orderly on a chart, some linear model of discovery to product compulsively helped along by money-making patent licenses.

We have a huge gulf between the challenges we face and the rhetoric people are using to face it–or worse to avoid facing it. Technology transfer is tremendously challenging, quite apart from anything having to do with patent licensing. Transfer involves changing the status quo across a pattern of practice, and often in ways that those directing the status quo are not comfortable with. In proposing technology transfer, the new often presents in opposition–or at least skew–to the status quo, potentially undermining established investments, jobs and expertise, in return for a claim that the new thing is better, not just different, not just a source of revenue via a shakedown.

All this means that technology transfer, before it is legal or business or science, is political and social, and the politics is neither that of suck up nor revolution, but of creative disruption. Certainly law and business and science come with a lot of technology transfer, but one misprises the activity in thinking that one has to lead with these things.

Technology transfer is also, therefore, underestimated in its inherent complexity. Instead, we get a seductively simple story about linear progress of invention to product to justify technology transfer’s existence, but then people believe that simple story and form an opinion about competence. If it is so simple then why are the results so cruddy? Why are universities making a lot of money on one deal a decade and making it seem lots of people are successful when they are not? That must be hypocrisy on top of incompetence. Come back, I’ll bite your leg off kind of thing, if you know the reference.

For all that, even if university technology transfer offices were doing great work, it’s not apparently enough, because their personnel are not the celebrity investors that work with the power elite, and so in general anything productive from a system run by hard-working unknowns just isn’t very, er, flashy. That’s the argument, anyway. Worse, it appears that a lot of technology transfer folks come to believe the linear model story in its simplicity, or at least put on that they do in public. It’s the only thing they apparently are allowed to speak to without fearing for their jobs or their standing within the community.

But technology transfer is in practice diverse, supremely challenging, and beyond the reach of most administrative “systems”. There may be an efficient process to file a patent application, but I don’t know of one–or would trust one–to tell me why I should do that. Every so often a given approach hits a sweet patch and produces a few deals in a row, maybe for two or three years. Folks attribute their productivity to the model, and ramp up. Then markets shift, research shifts, investors wise up, industry reacts, technology transfer personnel change, and things have to adapt, or extend—almost anything but narrow the focus to emulate the superficial parts of past successes.

In reality, after 30 years, universities are running patent accumulation shops, just like the government agencies, but with more freedom to grant exclusive licenses, with less public accountability. While they generally say that they evaluate each invention on its own merits and aim to license it to industry for added value, in practice this is not what happens. That is, the public intention is not square with the practice outcome. In a portfolio model, only a few big deals support the office income over a decade. The rest of the inventions under management are grist. Throw everything against the wall and a few stick. This works for the money side of things, and even for the status of technology transfer programs, but for most university inventors and their research, this means that their work is hung up in patent rights they don’t control or are licensed into moribund enterprises, with no way of escape.

Let’s put it out as a general rule: Any university research patent that is not licensed and developed is a threat against widespread adoption of the underlying technology.

We can add corollaries. Any university research patent that is licensed exclusively but not developed is anti-competitive. For any university research patent licensed exclusively and developed only in part, the undeveloped portions are anti-competitive. Any university research patent for which the intentions of the owner are not consistently clear creates uncertainty and works against adoption.

When we look at university patent portfolios, the issue isn’t whether some few are making big bucks. Excellent. The issue is what is happening with the rest. The answer is not: those would be making big bucks too if the technology transfer office was competent, run by celebrity entrepreneurs with access to the power elite, and was more efficient. My argument is, in a portfolio model, even with competence, celebrity, and efficiency, things will be about what they are. That’s the way this sort of model works, given its raw materials and tools.

Yet, the critical argument takes university technology transfer at its claims, that every invention matters on its own and isn’t just grist for a numbers game that works out to one big deal in 1000 research inventions. The universities don’t even share revenue from the big deals across all the inventors whose work turns to chaff in the technology transfer office. I’m not into defending the criticism, but rather into changing the claims about how technology transfer works, and more specifically about how the little linear model with patent accumulation actually does operate.

Within the little linear model with patent accumulation approach, the effort to license is about as good as it is going to get. AUTM representatives argue as much. They don’t say, we have a lot to learn, innovation is a great ever-changing mystery that stumps even the elite and rewards the lucky. No, instead we get that after 30 years, university technology transfer programs cannot possibly be sub-optimal. Programs still starting out will mature and inherit roughly the same operating methods and problems as the others, and that will be that. Many that work in these programs are very good at what they do. Some, of course, are not, and they soil the nest for everyone.

There will be success stories to tell and a way to put best spin on the financials, like multiplying licensing income by a typical royalty rate and dividing by a typical salary to push the idea of job creation, when there’s little direct evidence at all for job creation. Within the model, there are challenges but they are mostly in training faculty to conform to the model, senior administrators to understand the model at all, and businesses to accept doing deals on the terms the model puts forward as optimal, if not grail-like. Fine. Not the issue.

A second general principle: The operations of a given technology transfer office are not the only way to do technology transfer, even if the technology transfer office claims to be able to do “everything”. Usually this discussion slips over to an enumeration of all the ways information gets published and students graduate, as if *that* is technology transfer and people have been doing it all along. Of course, but actually no. There’s a gulf between some information being available and the focused effort to go after the status quo with a technology. A technology is typically broader than a patent right in a research invention, is broader than whatever gets said in a classroom, is broader than the naughty bits scattered across five or six archival journal articles that might appear two or three years after the insight.

Transferring a technology is more than RTFM. More than mere access. More than licensing rights. In this, invention does matter. Invention matters because it represents something uniquely new in the world, and that ought to be one expected aspect of research. But there are others, such as discovery (which may not be inventive), epiphany (which may not involve discovery), confirmation (which may not involve epiphany), consolidation (which may not require confirmation), and engagement (which may work against consolidation). It is fine and good to look at invention. Bayh-Dole asks universities to do this—have a look at inventions made in the federally funded research you host, and if you see something you can work with, elect to retain title and use the patent system to promote practical application. That’s a good start. But that start is not technology transfer. Nor is focusing that start exclusively on commercialization (meaning new products and start up companies) a meaningful implementation of Bayh-Dole. Nor is it even a productive thing to try to extend Bayh-Dole operations to include inventions made with foundation funding, or with industry funding, even though it might look more “efficient” to the policy manager to have one program general enough to handle all kinds of research invention.

What sort of policy manager could possibly see this as more “efficient”? One that doesn’t care? One that thinks inventions are commodities? One that loves process over rare events? I know, there is “no evidence” that process-loving, one-program policy managers are any worse at things than rare-opportunity, diversity-loving policy managers. And I suppose there won’t be such “evidence” until academics get around to conducting a survey of what the mass of whomever opine about this stuff. There’s no point, I guess, in *thinking about it based on experience*. No, that wouldn’t do. Experience merely sullies the clarity of the abstract, and even then, outlying signals are easily excluded from the survey that reports the mode as if it is best practice and represents “market needs”.

Transferring technology is more than little linear models, but includes little linear models. Folks suggest alternative models. But there is no reason to think the answer is to replace one little linear model with another, or to replace all little linear models (whether through a one-stop we-do-everything patent accumulation program or, say, through specialized agents) with something else (such as, let industry own all research inventions and make your money on sponsored research overheads, state subsidies, and gifts). The point is: technology transfer involves a diversity of methods, of programs, of combinations of talent. The little linear model is one of these. A focus on patent rights is one of these. Even a wholesome fixation on commercialization is one of these. Good. But when a program argues that it is the necessary method, secures policy statements to exclude other approaches, treats these as exceptions to policy, and sees any efforts to spin up additional approaches as threats, then one starts to see how a good program in its way can become a block to technology transfer generally. By demanding compliance with a given model, one is demanding folks turn away from other opportunities. By creating review overheads for other approaches, one puts those approaches at a disadvantage. By holding research patents that are not licensed, without any statement of intention, patent accumulation programs create uncertainties for further research by other organizations and for adoption by the practice community or more particularly by industry.

Technology transfer is not simply linear. It does not simply start when there is a research invention. A research invention may be well along the way of already existing pathways to practice. A research invention may be a part of a much more robust whole. Look at nanotechnology. Universities have fragmented carbon nanotubes into so many bits of patent real estate that no one could possibly put together a play to get access to rights to develop something for industrial use. May as well be like Afghanistan, with tribal war lords covering their few miles of a trade route. It’s fine to have a program that deals with the simply linear. That’s worthy, difficult, important. But it is not the only way. I know, so long as folks that like the little linear model work to prevent anyone from demonstrating any other way, there is “no evidence” that any other way could possibly “work”, and of course, why bother trying anything different if there is no evidence for it? How, one asks, in this morass of reasoning, does anything new come to happen? In this way, the mainstream technology transfer offices, especially as represented to the public these days by AUTM and by policy folks, come to appear anti-innovation. It is their own methods that they oppose changing. In terms of research technology, they are the status quo of management.

Rather than rig for the rare event, the weak signal, the initiative in need of mobilization, they rig for volume, consistency, compliance, efficiency, and avoidance of risk. The policy that requires all inventions to be disclosed to a technology transfer office, regardless of sponsorship, is also the policy that requires all important signals to be mixed with a bunch of compliance noise, and is the policy that implies that research personnel lack the expertise—and should not be given standing—to make decisions regarding what they have invented, as if they are employees to be managed for commodity development rather than experts asked to work on the very edge of human knowledge. What strange policy! What an odd thing to say, looks good, might need a bit of tuning, but don’t we all?

If you’ve read this far, you might see how the idea develops. A really good, challenging, worthy thing—little linear model patent accumulation—created and performed by mostly decent, hard working, competent technology transfer folks—comes to stand against broader technology transfer practice precisely as it tries to become uniform, tries to beat back proposals for alternatives, tries to do all things for everyone. My fight isn’t with the little linear model. It is not with the competence of licensing officers. It is certainly not with making the little linear model more “efficient”. My effort is to put this good thing in context, to show what is possible that it cannot touch, to show how it casts a shadow on lots of other important things that a university must be involved with to do its research and instructional job in society.

In this, I am also not advocating for stuff to just “flow” out by way of publication, presentation, and graduation, in some natural stream of things. I don’t have much confidence in the stuff about how technology transfer has always been done by universities through whatever universities did before they got interested in patents. Even if there was a time when that was true, it was under different conditions than we have today. There was a lot less government research funding, the faculty had much greater freedom, there was no compulsory patent program, and industry played with different rules and tools. Most importantly, universities had not rigged for such singular focus on obtaining government grants, organizing faculty status around those grants, and setting up administrative support on the premise that it is more important to get the next grant than to do anything with organizational focus on following out the relationships possible based on the grant work underway or completed. In big public universities, there are something like 3 times more development officers seeking gifts than there are research administrators, and there may be 3 times more research administrators than technology transfer officers.

Yet, everything is apparently fine with this picture. The public message is: Technology transfer offices are optimal, just need a little tuning, and smaller programs might want to give it up because, well, they are small, will never amount to much, and can go take a dump without getting in the way of the really important big programs. Getting research dollars and doing the compliance necessary to get more research dollars is way, way more important than research outcomes. Or, the research outcomes that matter to universities are not ones that matter to the public. To the university, study of, say, cystic fibrosis is a way to generate research income and status (Look at how much money we brought in for research! Look at how much we spend in the region! Look at the students we trained to be researchers!)

That is, research outcomes don’t have to do with discoveries, inventions, epiphanies and the rest. That’s a throw-away. Pick high value patents and make some bucks and the rest “just happens” like it always has—except now the inventions are compulsorily owned by the university, have to pass through its little linear model, run as exceptions to the policies it has in place, and too bad if the inventive element is minor compared to other intangible research assets of value. The rest is throw-away. The job is to make a show of making things look good so that the public research dollars keep flowing, the public believes that transformational innovation to improve their lives is just around the corner, and that universities are working on making this happen. But of course, they are working 10 times harder to get gifts in competition with other charitable interests in the region, and 3 times harder to get more grants. Those outcomes of research, they just don’t much friggin’ matter so long as the research dollars keep coming in, and every year the paperboy brings more, or, er, no matter what a university does with past research, there will be more research dollars the next year, so long as the public doesn’t sour on the investment and the university doesn’t present as irresponsible.

In this view of things, so long as a technology transfer office advertises the potential of research innovation, has a big hit a decade to make the numbers look great, and keeps major industry powers happy, its job is done. Maybe some tuning here and there. No matter that 80% or more of the inventions it claims aren’t licensed. No matter that Bayh-Dole puts university diligence obligations on each subject invention as a matter of a particular award contract, not on just a few a decade. No matter that only one or two things out of hundreds make it to actual use, let alone products. In this view of things, so long as the research dollars keep flowing, and one is covering costs, it’s all as good as it’s going to get.

Perhaps this comes off too harsh. Clearly, technology transfer folks don’t think this way. They want the best for each and every invention they take under management. They believe that by promoting potential one creates conditions for realizing that potential. They would like to expand their success rate eventually from 5% to maybe 6% or 10% of reported research inventions that get licensed and move to practical application so that the benefits are available to the public on reasonable terms. They see their challenges in dealing with skeptics, with folks that would shut them down as antithetical to university norms, with uncooperative faculty who in AUTM’s view are greedy, dupable, and incapable, and therefore to protect the public interest their personal invention rights must necessarily and expeditiously be removed from them and handed to altruistic, prudent, and capable administrators as a matter of federal law and top notch research innovation policy.

Against these challenges they must engage in policy fights, beat back popular misconceptions, and aim to transform university culture to accept a technology transfer office as a core, well funded program. For these things to happen, the public must see a simple, clear, unified message: technology transfer offices are successful. There can’t be any discussion of whether the claim of success stands up, or whether money is a decent proxy for that success, or whether that success has happened any time recently or stems from patents filed in the 1980s. There cannot be questions about alternatives, there cannot be discussions that ask whether Bayh-Dole could be improved, or federal research funding conditions, or university administrative approaches. The path we are on is a great path, the energy is to pursue this path, there is not energy left over for anything less worthy.

The evidence for practice does not exist. It is not collected and reported. After 30 years of Bayh-Dole, it appears no one cares that this is so. It is as if there is some secret pact to keep things unknowable. To my knowledge, no university reports the following:

- Number of inventions reporting federal funding (i.e., subject inventions) in a given year
- Number of those subject inventions in the year
    - to which the university subsequently elects to retain title
    - to which an agency claims title when the university does not
    - that are retained by the inventors with agency permission
    - that enter the public domain
- Number of those university-claimed subject inventions in a given year
    - on which one or more patent applications are filed
    - and whether one or more patents issue
    - and how many of those claimed subject inventions are licensed
    - and whether the license is exclusive
    - and whether there is more than one concurrent licensee
    - and whether at least one licensee has practically applied the invention
- For those claimed inventions that are licensed from that year, how many
    - result in commercialization
    - generate licensing revenue
        - to recover costs incidental to managing inventions
        - in excess of those costs
        - and what was done with any such excess
    - contribute to industry collaboration
    - follow-on research
    - development of standards or open technology
    - workforce development
    - create American manufacturing jobs
    - support small business
        - start ups
        - existing small businesses

These are core parameters of Bayh-Dole. Much of this would be in the annual invention utilization reports that universities are to file with federal agencies. But these reports, if they even exist, are exempted by Bayh-Dole from FOIA disclosure, so how could anyone tell what is going on? If one set things up this way, then when a subject invention was reported, a file would open on a public site and that file could be tracked to conclusion. No technical or business details of the invention have to be revealed. And these details may change over time as work progresses or ends. Yet we don’t have anything like this. So there’s no way to have the discussion. Even universities for the most part don’t know how their patent portfolios stack up over time, and apparently don’t care to know, and don’t care for the public to know. Understandable in light of the above? Perhaps. Perhaps it’s just too much of a bother to set up such a record, even on paper.

Perhaps it’s a threat. What if only a few inventions are making progress? I say—then that’s an opportunity to tell a clear story about the realities of patent accumulation licensing. That’s the point to set expectations. That’s not necessarily a matter of competence or sub-optimal models or resources. Perhaps that’s the reality of the model. It just doesn’t get any better, other than one might make it more efficient—that is, spend less on stuff not going anywhere and close the big deals with less wasted effort. Doing that won’t expand the number of technologies that succeed, except insofar as getting big deals done expeditiously frees up resources to work on the lesser deals. Under it all, I suspect it is this “if only” intention that covers for the problem: if we only got a really big patent license we’d have the money to hire people to make the heroic attempt with less well positioned technology, at least keeping faculty inventors happy even if the overall numbers don’t change much. Thus, keep at the big patent deals because that’s where the future is for everyone, even the little loser deals the winners have to pay for.

One might ask, however, whether this “if only” expansion dream of the patent accumulation model really makes good sense for innovation policy. There is likely no evidence for it one way or another in the academic literature because no academic has done *the survey*. And of course what’s the good of innovation policy discussions without a survey? How could academics know anything about technology licensing without practicing it, or at least observing it closely, and even then, how far can one generalize from a local experience to broader practice across a range of situations? Yeah, it’s a difficult problem unless you make uniform and simplifying assumptions and then aim to prove them out as if the assumptions are hypotheses to be tested as if they are part of a “model”. How nomothetic! My sense is, folks don’t get this far toward reasoning it out. The “if only” part is sufficient. A good intent. And an effort to make it so by pinning down behaviors to the model one has. Like 40+ universities wanting to make Bayh-Dole automagically vest title in inventions with university administrators.

Criticizing the little linear model with patent accumulation isn’t the deal. One can do that with or without numbers. University inventors whose work becomes grist only need to know that their inventions are not going anywhere, and that the “if only” resources won’t be coming around to make things better any time soon. Having more numbers doesn’t deal with the problem of being caught halfway, with the university holding patent rights and having sunk some $10K into an application and no way to get further along unless things get remarkably lucky. Numbers, however, can provide a basis for reasoning about the life of subject inventions. It may be that a little linear model could back off claiming title as often and be more selective about things. It may be that other practices could be developed that focused not on commercialization first but rather on industry collaboration or assistance to existing small businesses or to promoting broader, focused, coordinated research activity (not merely at one’s own institution).

In recent years, universities have built out a start up practice to go with their industry licensing practice. A few have explored a software practice as well. And a few have a focus on industry collaboration. We don’t have much in the way of accounts about how these programs operate. We don’t have much in the way of accounts about how programs coordinate within a region to collectively support research innovation. It may be that in a region, one university pushing collaboration while another pursues commercialization results in better overall outcomes for the region than both universities trying to do the same thing in the same way with the same industry connections. It is certainly *simpler* to imagine every technology transfer office doing the same thing. The claim might be, that’s transparent, that makes it easy for some putative, otherwise confusable person in industry to get the same deal everywhere, like eating at a McDonald’s. One might claim, that would be more efficient, and there would be more commercialization. If only.

I’m not buying it. Even though technology transfer is fundamentally social, I don’t see a move to simplify it as the issue, or to complicate it. The issue is to meet social concerns as they arise. Friendships and collaborations are not regularisable in the way of contracts and patents. Even patent contracts assume an immediate reciprocity (grant for consideration) rather than externalities for others. Why should universities so fixate on patent contracting as the only way to implement Bayh-Dole, or more broadly, support for following research findings into practice?

If expertise in a university is at its edges, then that’s where technology transfer has to play—not sucking things back to central control, but using central powers to coordinate and push energy out to the edges. The whole metaphor of reporting inventions centrally isn’t in Bayh-Dole. Bayh-Dole doesn’t require central reporting of inventions in a university. It is university management that has imposed this. What if invention reporting was pushed to the edges? What if the only matter was whether a university would support an edge request for funding to pay for a patent application? But what if that edge request could be paid for by other parties, not the university at all? Perhaps that might change the social network relationships that set up, and push edges out rather than pull them in.

The world is incredibly rich in its details. And it is in the details that things matter. And for innovation, the details that we cannot possibly find with standardization are the ones that erupt with value. Those details find us, more than a standard, efficient, orderly pattern recognizing process finds those details. A technology transfer office that triggers off sameness is at a huge disadvantage. I know, there is no academic study that delivers survey evidence for this assertion. I know, the only evidence that matters is survey evidence created by academics and organizations. The only good administration policy is one that already has survey evidence supporting it when it surfaces. That way, leaders can stay leaders, and outliers can be ignored as failed and foolish. My premise is the research premise: the world is incredibly rich in its details, and we are nowhere near the bottom of that richness, and we won’t get at it, when it comes to research innovation programs, by building out only in patent commercialization, and only on such limited models, and only in response to ideas that are decades past the conditions that present now, at the edges, in so many ways that the little linear model cannot respond to, let alone recognize.

It is this view of things that leads me to grump at the folks aiming to lead or direct university technology transfer without acknowledging the expertise at the edges, the qualifications and even worries about the models they deploy with such confidence, the flip sides of the success stories. I don’t see a respect for the challenges or opportunities; instead, I see the repetition of the same apparent problems and putative causes year after year, and I see an expansion of uniformity and self-congratulation and spin but not the metrics that underlie even the most basic black-and-white policy objectives. I don’t see responsiveness to emerging circumstances or areas of research intensity. The model that “worked” for biotech (or at least found activity with biotech) never “worked” for software or the internet or bioinformatics. That model didn’t work for nanotech, hasn’t worked for energy, and is unlikely to work for environmental technology. It hasn’t worked for research tools, for services, for municipal technology. It doesn’t work for traditional knowledge, for social ventures, for indigenous resources. It especially doesn’t work for foundation and industry sponsored research. As well, the model doesn’t work for multi-institutional research, doesn’t work for open innovation, doesn’t support standards formation (and in fact works against it), doesn’t work for pre-competitive collaboration, and doesn’t handle issues of regional or national competitiveness. Yet we don’t see any move to revise, extend, mobilize, or even critique it, except to complain that perhaps it could make more money with more technologies, if only. I guess there’s no academic survey to give legitimacy to these observations.

I don’t mind the little linear model. It’s a fine thing. But it should not dominate the party. And it should think a lot harder about what it is good for. And it should work a lot harder to rein itself in and leave some space for folks who take other approaches, even ones that will compete with the little linear model and its practitioners. That diversity is what I’d like to see. I think it will even help the little linear model and those committed to it, certainly much more than making patent title compulsory, spinning success to the public on the thinnest of grounds, ignoring or suppressing reporting of the metrics that matter, and using licensing efficiency and revenue as proxies for success. So even more, I’m dedicated to technology transfer in its broad development, in the richness of local detail, in pushing central powers to provide what the edges need to be successful at what they aspire to. In the ordered domain on the edge of chaos, say. And seeing chaos as opportunity, and order as a work product, not a pre-condition. Is that too much? Is it worth being blunt at times, when there doesn’t appear to be a whole lot of reflection within the technology transfer community? I think so.

This entry was posted in Bayh-Dole, IP, Literature, Metrics, Social Science.

2 Responses to Is This as Good as It Gets?

  1. Thank you for stepping back and producing this overview. It’s still a bit densely written, but many of your recent posts have been so *close* to specific litigation at hand and so sour and sarcastic in tone that it’s been hard to get an overview of your argument. Here I think I get it, and largely agree, especially as to the group-think that prevails in far too many TTO/OTCs. Google’s “link:” function tells me that my own site (the home page blogroll) is so far the only one to link to yours (I guess you’ve got the AUTM cooties, but I don’t really care since I’m not a member of that fraternity either). So I’m going to exercise the prerogative of a blogroller to offer a little comment. I have learned a lot by reading your take on technology commercialization, enough to regret some (but by no means all) of what I’ve written on the topic at my own site (best findable with a search like “site:tbed.org bayh-dole”). I do think we’re in broad agreement that too much useful data goes unreported, that there’s too much fixation on a model that seems to work only part of the time, that there’s a lot of greediness with respect to work that is not federally funded, and that even offices that say they’re interested in more than revenue maximization don’t often operate that way. On the other hand, I remain concerned that your discontent with the narrowness of thinking out there is being leveraged by those who would attack the statutory structure of Bayh-Dole, when I think you have actually made a very convincing case that university culture, rather than the law itself, is the main issue. I would recommend that you now step back from continued criticism and make some very explicit and constructive recommendations for the kinds of policies and procedures you would like to see American universities adopt within the current statutory framework.
Who knows, some innovative director (but probably not one of the ones making a lot of dough for their respective VPRs right now) might even try the experiment!

    • Gerald Barnett says:

      Bayh-Dole undercut flexibility at federal agencies, all but ended federal oversight and university accountability, and created an incentive for predatory university administrative IP practices. The “current statutory framework” does not require universities to even *have* a patent policy. The current statutory framework has largely been ignored by university patent administrators: as far as they are concerned, 2 CFR 215.37 doesn’t exist. Nor does 37 CFR 401.9 or 37 CFR 401.14(a)(f)(2). The only step universities need to take is to drop the requirement that inventors assign anything they invent to the university. Make assignment voluntary. The university then may spend its policy energy on determining when it has an equitable interest in inventions made with express university support, and on the behaviors it expects of inventors within the university who seek patent rights.

Comments are closed.