How I got into this mess

I got my start in university technology transfer as a graduate student at the University of Washington. I was working toward a doctorate in literature and interpretation. My dissertation dealt with the representation of text, using medieval manuscripts as a base to show how different representations–facsimile, transcription, critical edition, modernization, translation–led to different patterns of reading and interpretation. A key portion of my work involved developing an in-line markup system similar to SGML that allowed a text to be presented on a display screen various ways just by changing a few settings–this was all before HTML and the Web. Three events in succession caused me to tumble down the rabbit hole of technology transfer, IP, Bayh-Dole, and all the rest.

The first event happened when I was working at the Humanities and Arts Computing Center as a graduate student programmer and coordinator for operations. I was the first employee of the new Center, and dealt with most everything that had to happen–set up computers, manage scheduling, write software, help people learn how to use computers and software, and manage that new Kurzweil trainable optical character reader. One project I took on was designing fonts for specialized alphabets. I had done a Middle English font extension on my own (Apple II based, using the ACE programming environment, to drive my Apple dot matrix printer). Now I developed fonts for Old Church Slavonic, Cyrillic, Turkic, Amharic, and an IPA-based version of Lushootseed. For that–this was before Metafont was widely available–I wrote my own font editor for 24-pin dot matrix printers (the latest thing) and EGA graphics boards (also cutting edge at the time for consumer computers like the IBM XT and AT). I also wrote code for language drilling (a program called Flash) and developed custom menus for a partially open text editor called PC-Write.

One day we at HACC (yes, “Hack”) received a visit from a Technology Licensing Associate from the Washington Research Foundation, the “tech transfer” organization that was the primary invention management organization for the University of Washington. This was about 1985. The TLA gave a nice talk to us about patents and licensing and how if we reported our inventions to the WRF, they would do the patenting, license the patents to industry and we’d get rich. Leroy, our visionary faculty director, thought that licensing might be an interesting way to fund the Center, given it was always difficult to get any funding for computing in the humanities. But mostly, after the TLA had left, we all laughed about the idea of filing patents on our stuff and went back to work.

I went back to reading Richard Stallman’s GNU Manifesto, which had just come out and was a defining text for how to use copyright to undo many of the abuses that copyright could exact on coding projects–copy-left, as Stallman had it, or “free” software (“free” as in speech, not “free” as in beer). Though I haven’t agreed at all points with Stallman’s ideology (I’ve had my battles, and I’ve helped, depending on the situation), the GNU Manifesto is a formative document for anyone thinking about the role that intellectual property plays in the development of new things. The base idea–that IP rights can be used to restrain undesirable IP practices–is tied up in the broader idea that a key role of an institution devoted to liberty (such as a democratic government in a one-nation republic) is to use its authority to place limits on its authority–a Bill of Rights, so to speak. This idea of authority working against itself to create a space for freedom within social institutions is fundamental to the application of intellectual property to innovation. There’s nothing more basic to understand, short of the mystery of human motivation.

The second event took place about the same time. One of my projects as a graduate student involved working with the manuscripts of Piers Plowman, a 14th-century English work–alliterative verse–that exists in over 50 different manuscripts, in three distinct versions (A, B, and C–plus the goofy Z, and some A+C combinations and the like). I was using my Middle English Editor (yes, MEed–an inside Piers joke) software–built to display text by interpreting in-line markup code–to create a transcription of one of the more important manuscripts, Trinity 3.14, or T. MEed itself had a bit of public history, getting written up in a local paper and then as a column in Omni magazine, the Wired of its time (before Wired became Peoplized after Chris Anderson left, sigh).
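To give a flavor of how that worked–a minimal sketch, not the original MEed code, with made-up tags and modern Python standing in for tools long gone–the idea was that the markup stays embedded in the transcription and a display setting decides which reading each marked span shows:

```python
# Minimal sketch of the idea, not a reconstruction of MEed: hypothetical
# in-line tags stay embedded in the transcription, and a display setting
# decides which reading each marked span shows on screen.
import re

# .ms{...} marks the manuscript reading, .ed{...} an editorial/modernized reading
TEXT = ("In a somer seson .ms{whan}.ed{when} softe was the sonne "
        "I .ms{shoop}.ed{shaped} me into shroudes")

def render(text, show="ms"):
    """Keep spans whose tag matches the chosen view; drop the others."""
    def pick(match):
        tag, span = match.group(1), match.group(2)
        return span if tag == show else ""
    return re.sub(r"\.(\w+)\{([^}]*)\}", pick, text)

print(render(TEXT, show="ms"))  # diplomatic view: manuscript spellings
print(render(TEXT, show="ed"))  # modernized view: editorial readings
```

Flip the setting and the same file reads as a diplomatic transcription or as a modernized text–which was the point about representation that drove my dissertation.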

The T manuscript had been used as the base text for the Knott-Fowler edition of the A Version. Míceál, my dissertation director and mentor, had set me up with a photofacsimile of the T manuscript and had gotten me funding from the Graduate School Research Fund to spend the summer in a cabin on Fidalgo Island transcribing T, using Knott-Fowler to help me transcribe the text, which looked rather like this:

[image: the opening lines of Piers Plowman in the T manuscript]

While working on the transcription, I received a letter from George Kane, the famous (well, among Piers folks) English scholar who had produced in 1960 what he intended as the definitive critical edition of the A Text, destined to replace Knott-Fowler and demonstrate to everyone what the A Text really should have read. The letter was a warning that I was violating his copyright in the A Text if I was using his edition as a basis for my transcription of the A Text. Why Professor Kane should care what I was doing, let alone how he knew I was doing it, was beyond me. In any case, I worked with David Fowler (the Fowler in Knott-Fowler) and wasn’t about to depend on Kane’s squirrelly A Text for anything.

Furthermore, there was this copyright puzzle with Kane’s edition. If it was good scholarship, then there wasn’t any copyright in the text. If it was bad scholarship but good fantasy, then there was.

As a critical edition, Kane’s A Text mostly transcribed words that were long in the public domain–Piers has been around for a good five hundred years. The parts that were Kane’s distinctive addition were his replacing of words in a base manuscript–ones he took to be scribal errors and corruptions–with words from other manuscripts and when those failed, with his own words, which he imagined (and argued) must have been what the Piers writer (who he called William Langland rather than, say, John Trevisa or unknown–authorship was one of the things in dispute, a scholarly tempest of sorts) had actually written. Kane claimed to have reconstructed Piers A as it was in its original manuscript. You see where this goes, with regard to copyright. If Kane was right about what he had done, he had produced a copy of a work long in the public domain, for which the exemplar (perhaps, oh, that nasty Z Text) has been lost. That is, if Kane did what he said he did, then he produced nothing original. He had restored a text that he didn’t write. No copyright there.

But if Kane was wrong, or being a deceptive sort of guy, or was creating a fun academic fantasy, then what Kane had done was originally creative–all those changed and added words collectively amounted to a derivative work bearing some originality–an original work of authorship in its own right, for which a new, shiny copyright had sprung into existence and was a way to ensure that no-one could copy his work without permission. Since I was doing a transcription of the T manuscript, the last thing I wanted to do was to incorporate Professor Kane’s academic fantasies–the parts that were his “original expression”–into my transcript. I wrote back assuring the good professor that I had no intention of ever copying his version of the A Text in place of creating a transcription of the T manuscript.

The episode, though, sparked a new interest in copyright. Kane had every right to a curatorial concern for his work. It was his effort, his scholarship, his creative expression. He ought to protect it with copyright, if he cared for it at all. That he would lead with a copyright threat rather than offer assistance was another thing, though. The dirty bird. But that’s the nature of scholarship–all fuzzy and hopeful in front of the public, but a good share of darkish selfishness and competition and slashes in the back behind the scenes. As Henry Kissinger once quipped (if I remember correctly), the reason academic disputes are so bitter is that so little is riding on their outcome.

The third event that led me to work on IP, policy, and innovation was when the wife of my best friend–also a good friend–got a job in UW’s newly revamped Office of Technology Transfer. The OTT had been created after the WRF and was built along the lines of a good OTT fronting for Research Corporation–walk the halls, find inventions, get them written up and sent off to an invention management organization for review. Except UW had arrangements with WRF, Research Corporation, and Battelle to manage inventions (later the Washington Technology Center would also get involved, but then everything blew up and UW was left mostly with its OTT). But the main idea was to give the WRF first crack at any invention. WRF, in turn, had been modeled on Research Corporation all the way down to being established by an Act of Congress rather than incorporation under the laws of Washington state with a tax exemption from the IRS. The WRF was tax-exempt by law. That was pretty cool. And like Research Corporation, the WRF was selective–highly selective, and when short of money to file patents, very highly selective (UW had to loan it money at one point to keep it going).

The OTT was faced with inventions that the WRF chose not to handle. Faculty asked the good technology managers at the OTT to take on management for some of the rejected inventions. The managers did and found that they could license rejected inventions–companies would agree to license, would pay option fees that would fund the patent work, and the OTT could bootstrap its way along without much of a patent budget at all, working whatever the WRF rejected. All this got the WRF folks into a state of institutional paranoia–if they rejected inventions, then the OTT might license them and make the WRF’s decisions look bad, and every paying OTT license was money that the WRF wasn’t getting a share of.

Worse, the WRF took costs plus 40%, while UW took costs plus 15%. Some faculty wanted to work with OTT, then, because for the same license deal, there was more royalty to share coming from the OTT. When the deal might involve a known company–a grant sponsor, a research collaborator, a company that had hired the grad students from the lab, had shown up asking the faculty inventors for a license–there was no particular need to do a market assessment, write up non-confidential summaries, determine patentability, survey the industry for possible candidate licensees and devise a licensing strategy, justify a budget, get approval for work, and commence fussing around with writing up all the necessary documents to market, file, license, and manage an invention. One sat down with the company, worked out a deal, and signed. Though the OTT folks dutifully passed through every invention disclosure to the WRF, the WRF folks began to suspect that the OTT folks secretly counted on the WRF to reject most things, including things already lined up for OTT licensing.
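To put rough numbers on that difference, here is a back-of-the-envelope sketch–a hypothetical deal, and my simplified reading of “costs plus” as deducting patent costs first and then taking the fee from what remains, not the actual accounting:

```python
# Back-of-the-envelope sketch, not the actual accounting: assume each agent
# deducts its patenting costs from gross license income, keeps its fee as a
# percentage of what remains, and the rest goes to the royalty-sharing pool.

def pool_after_fee(gross_income, patent_costs, fee_rate):
    """Amount left for royalty sharing after costs and the agent's fee."""
    net = gross_income - patent_costs
    return net - net * fee_rate

gross, costs = 100_000, 20_000                 # hypothetical deal
wrf_pool = pool_after_fee(gross, costs, 0.40)  # WRF: costs plus 40%
ott_pool = pool_after_fee(gross, costs, 0.15)  # UW OTT: costs plus 15%

print(f"WRF route leaves ${wrf_pool:,.0f}; OTT route leaves ${ott_pool:,.0f}")
```

Under those assumptions, the same deal leaves about $20,000 more to share when it runs through the OTT.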

Into this situation my friend Margaret landed as assistant director of the OTT. I was teaching at UW as an acting instructor (a one-time renewable, one-year appointment after graduating)–literature courses (Chaucer, Restoration Lit) and writing-intensive topics courses (Reading Literature, especially). One day as we were talking, Margaret complained about problems with OTT’s new Macintosh local area network (512s! one SE!). I came over to help out and got things working. The next summer I wasn’t teaching, and the OTT hired me part time to help with their computer support. One day, Don, the director, came into the room where I worked (a table in a corner of Margaret’s office) and plopped down a set of file folders. “Take a look at these and let me know what we should do with them.” They were disclosures of software–unpatentable stuff that the WRF didn’t want and that the OTT managers didn’t know what to do with. Since I wrote software and had worked in a computer center, maybe I knew what to do.

I made some suggestions, one thing led to another, and Don hired me for a year on a contract to study how to manage software. I sought out all the software management operations I could find nationally–at Berkeley, at Duke, at Wisconsin (do you remember WiscWare?). I read more open source and free software articles. I examined licenses of various sorts. I read university policies on copyright and software and patents. I talked to thirty or forty university software developers and computer science folks. From it all I wrote a report with my recommendations.

Software, I found, was not any single thing, but lots of different assets–source code, executables, algorithms, documentation, technical information, data, firmware, models, simulators, scripts, digital media. There were all sorts of programming languages, development environments, libraries, licensing requirements, and publication and distribution methods. Furthermore, software development projects varied widely in platform, purpose, collaborators, funding, distribution channels, and expectations. Thus, I recommended a program of “distributed distribution.” Rather than having IP and physical media extracted from software projects and managed by a central office, the OTT should help software projects with their IP and distribution choices. OTT should offer to join software projects and help out as it could rather than make software projects come to it and accept whatever procedure OTT had chosen.

There was no way, in essence, to create a uniform policy that covered even a small portion of the software activity taking place at the university. The policy that might work was one that adapted to each situation as it developed, brought in IP resources as needed to support that development, and left everything else well enough alone. The OTT worked for–and advised–those that led the software development projects. If those leaders wanted to do something that the OTT wasn’t authorized to do, we couldn’t do it–but short of things being illegal, we could make exceptions to policies (at times, for good reason), recommend folks who could do what was wanted and help transfer the work to those folks–publishers, distributors, other universities, the WRF, a startup, an established company, a nonprofit, whomever.

I later reduced this to a kind of mantra. We’ll help as you ask us to and as we think prudent. If you don’t like what we are doing, tell us and we will do one of three things: we will change our behavior, or we will explain our approach to your satisfaction, or we will get out of your way. That was our uniform policy.

Anyway, the UW folks apparently liked my work, but they rejected my recommendations–“We aren’t going to do that, Gerry.” But they asked me to stay on and manage software, digital media, and copyright for the OTT–provided that licensing income covered my salary in two years or so. So I had to come up to speed on copyright (in all its various media manifestations–code, books, music, internet, art, photographs, collective works, work for hire, derivative works…), patent (and its odd, developing application to software environments), trade secret (a staple of EULAs but not for us), management of technical data as an asset (rather than as a throw-in on the back of a patent license), trademarks (in various forms–including certification marks–but not university emblematics), open source, and all the concepts that swirl around authorship, academic freedom, and publication as software looks more like scholarship than like technology. Then I had to sort out the money parts of software projects–when might someone get paid, and for what, beyond writing code? How did software projects get funded, and how might they be supported by user communities, by companies, by government grants, by donations? And how did this support figure within university policies–on copyright, on inventions, on direct sales, on program income, on royalty sharing? When was income from the distribution of software a “royalty”? What direct costs were deductible off the top? If by OTT, then why not also by the project? We worked through all this and more.

Over the next ten years or so, we built a practice in software and digital media, with a team of exceptionally bright and dedicated folks who recognized that we each needed five to seven skills to do our jobs, that we each started with two or three, and that we therefore needed each other just to get by while we learned what we needed to know. We also saw that university policies were divided up in ways that worked against software. Sponsored research or gift? Sale or license? Copyright or patent? “Traditional” scholarship or not “traditional” scholarship? Dull bureaucratic dichotomies. We had codes and related stuff, and people knew how to manage these–not counting how to sort them out relative to the unhelpful dichotomies formalized into university policies. It didn’t help that the university’s policies were a mess anyway–drafted by cantankerous committees with pleas for review and revision in five years, reviews which no one had bothered to do. We then had to create procedures that complied with policy, or that followed reviews, exceptions, or extensions to policies, to get done what software developers needed to get done.

When I agreed to be hired for the job, I made it a condition that my purpose was to represent the needs of software developers to central administration, not to dictate to software developers what they had to do to conform to the dictates of central administration. The folks who hired me accepted that condition, though I’m still not sure they grasped fully what I meant. Their condition was simple–I needed to make my salary from 15% of the licensing income or they’d let me go. Or, about 8 times my salary in licensing income. Luckily, I was paid 15% less than the lowest paid technology manager at the OTT, so if I could just keep my salary low, I had a chance at keeping my job. Since I wasn’t sure I really wanted to do the software thing, but didn’t have any other teaching lined up, I figured if I stayed on course to help software developers, then either they would help me find the income for them–and keep me around–or I’d go back to teaching or programming. When I started my office file of reference literature as the new OTT “Software Technologies Manager,” the first document I put in my file was the GNU Manifesto.

We went on to have a remarkable run, from 1991 to 2002 and after, working with hundreds of software development projects–copyrights, patents, trademarks, data, support, distribution, publication, licensing, startups, nonprofits, consortia, grants, HHMI, open source, closed source, source available, WIPO arbitration, DMCA, TEACH Act, CREATE Act, and more. We created software course curriculum and copyright course curriculum for AUTM (back when I had hope for AUTM). We developed a year-long IP management certificate course with University Extension that ran for a number of years. Lots of talks and consulting with other universities. We ran a budget for the last six years or so of about $750K/year–that’s everything, salaries, space, ops, legal–and generated about $3M to $6M a year, mostly by providing software in ways that others were willing to support. That is, they paid because they wanted to pay, because they recognized that software didn’t come free, and often wasn’t supported by either grants or university funds. If they wanted code developed, they had to help us find the resources. Publishers paid because that was their operating model. Companies paid because they needed a way to support software without going through the awfulness called sponsored research contracting. After all, it wasn’t their project–there was no statement of work, and they already had their deliverable.

Anyway, we made more money on the margin than many patent-based tech transfer programs were making, including the OTT at UW, which we had spun out of to become a parallel organization within a broader unit alongside the Office of Research. Institutional forces finally caught up with our program a few years after I left for UC Santa Cruz. A new UW vice provost for intellectual property and technology transfer tried to brand everything under a single program, and the next vice provost after that was too ignorant to understand the operation (but she had been a big donor to the university, knew the governor, and promised that the university would make so much money on equity in startups that it would be able to change its financial model)–and so UW let her spend $100M over six years to cut the number of startups per year in half, at twice the budget, with not enough income to cover the costs of the program. But hey, give a second-rate university booster a $100M budget to run amok without public accountability and walk away when the thing starts to visibly collapse, but don’t dare give that sort of funding to, say, a Lee Hood.

Since I started in this technology transfer thing, I’ve been something of an outsider to the issues around Bayh-Dole and patents and the idea that the only or best or primary or mandated way to provide the public with benefit was through capital-intensive patent speculation on the creation of commercial products. Sure, I could see the reasoning in the argument. But I didn’t see it in practice. At one point, Greg, OTT’s technology manager for engineering, went through all the licenses at OTT he could trace, and found only one or two that followed the public description–that invention reports led to patents led to marketing led to license led to royalties. Most of the time, it never happened like that. Yet the OTT–and most other tech transfer offices–never publish an account of what happens with the majority of inventions (and other stuff).

We found in our software practice that the best debrief question after a deal was, “How did we come to be this lucky?” We could see the part we played to improve our luck. But often deals happened for reasons beyond the imagination of any policy writer. Love affairs, thank-you cards to sales reps, random meetings, grant sponsorships (and how did those happen?), a librarian who lived next door to a company president, an articulate graduate student, a prescient staff member, referrals from weak ties, chance mentions. Most anything but the official line. Yet there was this constant pressure that somehow things were required to fit into the official line. Some administrators thought our job was to make policy look good. Policy was to be made to work, or at least to appear to work, or to be attributed as working whenever something did happen to work. “Faculty training” in technology transfer was, in that view, training to get faculty to follow the official policy on the premise that doing so would be more efficient or at least proper or a test of proper attitude. It was neither more efficient nor more effective. This observation led to our attitude regarding communication with software developers: “No bullshit, ever. If you don’t know, say you will find out. If you do know, you better have reasons for it. Never repeat truisms. Never tell people there is only one way when there are many. If someone thinks you aren’t capable, they may be right. But then they can either train us up or find someone else. We aren’t hired to fight with the people we work with, or to block them, or to make them obey. We work with the willing and ask the unwilling to at least show us how things should be done.”

At one point the chair of computer science wrote an article for the computer science web site–“To OTT or not to OTT.” Folks could go either way, he observed. OTT ain’t perfect–meaning, our software bit–but at least we were looking out for folks:

[image: excerpt from “To OTT or not to OTT”]

Anyway, that’s how I got involved in the world of university licensing. I’ve spent the past eight years working to understand better the underpinnings–I figured out how to survive the system that’s in place, even beat it back at times, and get things done that work, however clumsily, using the policy and practice tools available. But I couldn’t figure out why the system was as it was–why does everyone talk fiction about Bayh-Dole? what motivates people to redraft university patent policies in such ghastly (even incompetent) fashion? what happened to create such an administrative industry around research? why doesn’t that industry in practice work as it is advertised? Is it all–research will provide, patents will provide, university technology transfer will provide–failed prophecy? Or power play? Or clever scam? Or are folks so blinded or desperate or accepting or dunderheaded that they can’t think of anything else to do and so keep on doing the same stuff? Thus, I’ve aimed to chase down what I can, reason out what I can, and describe what I observe when it defies reason. Innovation happens in marvelous ways. Sometimes even by having bureaucrats touch their magical fingers to opportunity. But often innovation happens without any such fingerings at all.

There appear to be some basic questions. These have formed over time. It’s difficult to figure out what questions are worth asking. I will give a few here.

What is the role of institutions in dealing with research?

What role, if any, should patents have in the context of university work?

What place do monopoly and commons have in making university research findings available?

What happened when the federal government committed to support research at universities after World War 2?

What should federal research patent policy be?

How does research interface with innovation?

How can anyone help those who discover or realize or invent or develop something of value?

If a university’s values differ from a patent owner’s, then when a patent comes under university control, should the university adopt the patent owner’s values, or should it require the use of the patent to conform to the university’s values?

Is values-thinking even a good way of thinking about opportunity?

I find answers forming–maybe provisional–to such questions. And sometimes I think these are examples of questions without good answers–ever. One purpose of Research Enterprise is to document my reasoning and my evidence. Another is to provide a perspective on the policies, practices, and claims that swirl about university IP management. There are plenty of directions to go, and disagreements will happen, but folks really ought to get their history and their policy and law claims right–or at least admit they are just making things up because it makes them feel good. Sure, in some way it is all social artifice–the laws, the policies, the contracts, the publications, the social expectations, the statements of values and principles and purposes.

But in another way, words have meaning, and documents evidence pathways, and laws and policies and contracts are meant to be followed by reason, by disciplined forms of interpretation. We ought to have an expectation that what we claim holds up under scrutiny–not just as confirmation biases and halo effects, not just because we might puff ourselves up as authorities. Things should check out. Things shouldn’t be misrepresented. Shouldn’t be fraud. Shouldn’t be malpractice. There may not be any primitive narrative–a single true account of how things have come to be–but so too there are things that are found to be true and things that are bullshit–that is, stuff said without regard for the truth, said because the saying sounds good or signals obedience to the prevailing ideology.

There are ways that institutions can advance discovery, advance the diffusion of useful knowledge, advance the conditions under which creative and insightful work happens. The approach to do so starts with implementing protections for individual initiative. No master can force the servant to discover. Nor can any process that attempts to stand in place of a master. Not even the “scientific method” is a sure-fire way, or even a very effective way, to discovery and even less so to innovation or better living. As Paul Feyerabend argues in Against Method, “anything goes.” It may not be that an institution like a university can tolerate just anything–indeed, a university has a distinctive role because not everything goes within it. But deciding what will go and what won’t is an important matter, and there needs to be enough useful grammar to communicate the options. So, yes, there can be “technology transfer” supported by institutional resources. But mostly, things don’t happen as the official view describes, and it would come as some relief if the official view were relaxed enough to allow other approaches also to thrive.

That, too, is a purpose of Research Enterprise, to provide voice to alternatives–by poking holes in the official narrative, by pointing out practices that also have been productive, by searching history for alternative paths that have been obscured, by making a case for freedom in the face of increasing demands for obedience–in just the forum in which freedom combined with access to institutional-scale resources would make its most distinctive (and perhaps most productive) contributions.
