The bozonet on mount stupid

I mentioned “mount stupid” last week in a three-part article on the University of Utah. Mount stupid is a bit of a meme derived from accounts of the Dunning-Kruger effect, which I have referred to in developing an account of bozonet formation. Kruger and Dunning found that individuals with little grasp of an area of knowledge or practice lacked the ability to judge their own competence relative to that of others. As the title of their rather famous article puts it: “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments.” The less one knows, the more likely one is to overrate one’s ability.

Here’s the abstract from the paper (my emphasis):

People tend to hold overly favorable views of their abilities in many social and intellectual domains. The authors suggest that this overestimation occurs, in part, because people who are unskilled in these domains suffer a dual burden: Not only do these people reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the metacognitive ability to realize it. Across 4 studies, the authors found that participants scoring in the bottom quartile on tests of humor, grammar, and logic grossly overestimated their test performance and ability. Although their test scores put them in the 12th percentile, they estimated themselves to be in the 62nd. Several analyses linked this miscalibration to deficits in metacognitive skill, or the capacity to distinguish accuracy from error. Paradoxically, improving the skills of participants, and thus increasing their metacognitive competence, helped them recognize the limitations of their abilities.

The mount stupid meme is found in a graph that depicts the idea behind this research. In a graph of confidence versus expertise, there’s a big hump of confidence (“mount stupid”) at low expertise, descending into a valley as expertise increases, and rising again beyond average expertise.

This graph really doesn’t do justice to the effect. What Kruger and Dunning found was that less competent folk radically overestimated their ability relative to others. Somewhat competent folk still overestimated their relative ability, competent folk got things about right, and experts tended to underestimate where they stood relative to others. The graph looks more like this:

Folks in the first quartile of expertise (truly incompetent) had the greatest gap between their self-estimated skill and their actual (tested) skill. Things got better for quartile populations evidencing greater skill. Those with the greatest (tested) skill tended to underestimate their relative skill. Kruger and Dunning showed that even when shown the results of a given test, incompetent people persisted in their self-assessments, while experts adjusted their assessments to roughly match what the tests indicated about their relative competence. Kruger and Dunning also showed that when provided with training, less competent people grew more competent and thus reduced the error in their self-assessments.
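To make that shape concrete, here is a minimal plotting sketch in Python with matplotlib. It is not the paper’s figure: only the bottom-quartile pair (actual around the 12th percentile, perceived around the 62nd) comes from the abstract quoted above; the remaining perceived values are illustrative placeholders, chosen only to show a flat self-assessment line against steeply rising actual performance.

```python
# Schematic of the Kruger-Dunning pattern, not a reproduction of their figure.
# Only the bottom-quartile pair (actual ~12th percentile, perceived ~62nd) is
# taken from the paper's abstract; the other perceived values are illustrative.
import matplotlib.pyplot as plt

quartiles = ["Bottom", "2nd", "3rd", "Top"]
actual = [12, 37, 62, 87]      # roughly the quartile midpoints by construction
perceived = [62, 65, 70, 75]   # flat-ish self-assessment; only 62 is from the abstract

plt.plot(quartiles, actual, marker="o", label="Actual test percentile")
plt.plot(quartiles, perceived, marker="s", label="Self-assessed percentile")
plt.xlabel("Skill quartile (by test score)")
plt.ylabel("Percentile")
plt.title("Dunning-Kruger pattern (schematic)")
plt.legend()
plt.show()
```

The point the sketch makes is the one in the text: self-assessment barely moves across the quartiles, while actual performance climbs from the bottom to the top of the distribution.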

Either way one looks at it, however, the argument remains that the incompetent are prone to greatly overestimate their ability. When these people are placed in positions that require them to practice “what they know,” as Kruger and Dunning’s abstract puts it, they

reach erroneous conclusions and make unfortunate choices

Not only that, but

their incompetence robs them of the metacognitive ability to realize it.

This outcome reflects my experience over two and a half decades dealing with intellectual property and innovation in research enterprise. In the mid-1980s, the “profession” of “technology transfer” expanded rapidly as most every university hired people to “manage inventions” in “compliance with the Bayh-Dole Act.” The turnover of employees through university licensing offices is still high. There was no formal training one could get in “technology transfer,” and as far as I can tell, there still isn’t, though one can get certified as a “licensing professional.” That’s a start, but it doesn’t help with policy formation, either at the federal level or at universities. A law degree does not teach one how to participate in innovation activities. Nor does a PhD, and going through the motions of being an entrepreneur does not give one much understanding of how to create a standard or build relationships through institutional transactions.

University-based “technology transfer” involves having an aptitude for technology, understanding how academic research operates, knowing intellectual property law and practice, mastering drafting and negotiating and analyzing contracts, and having a firm grasp of business essentials–along with an ability to read, write, discuss, and respond to unexpected stuff. Oh yeah, one has to learn one’s institutional policies and administrative structure, know how to get waivers of policy, and have some idea about new venture mechanics and the challenges of introducing anything new into an established order. At one point I assembled a list of some 50+ systems that had to be managed to run a university licensing operation, ranging from how outside counsel was retained to managing disclosure of inventions to tracking signed agreements.

The point of all this is that technology transfer takes something on the order of a few thousand hours to get up to speed on–if one works at it diligently. Most people come into the activity with maybe two skills and need five to be independent. That means they need at least two years, maybe more, to build up their basic skills. We aren’t even talking about Gladwell’s 10,000-hour hypothesis–whether that holds here or not. The problem is that often people never get there. Folks can get hired in technology transfer because they have “a business background” or “a law degree” or “a PhD in biotechnology.” Even with a PhD, moving from one lab’s specialty to another’s–as a licensing officer might do five times a day–makes for an impossible task.

The best people I have worked with in technology transfer have been quick learners and great observers of human nature and situations, with the ability to draft for the relationships they wanted to create. And they took the time to come to understand five or more areas of skill. Even then, the skills they needed to navigate innovation they mostly already had–what they had to acquire were the skills to navigate all the roadblocks institutions put up to deal with the prospect of innovation–and with how the institution could profit from that innovation (or, more usually, from efforts directed toward innovation) while avoiding any liability.

When a number of people rapidly enter an area of practice, they may form social networks among themselves. In these networks they compare their skill sets and find that they are pretty much in the same situation–and they conclude that they must be about average, when they really have very little idea of what they are dealing with. And these folks give talks. Once, at a major public university holding a big to-do about technology transfer, a panel of attorneys for the university made a presentation about invention ownership and assignments. I asked a question from the audience about the impact of 35 USC 261, and no one on the panel knew what I was talking about. I had to explain that this was the primary provision in federal patent law setting out the management of invention ownership and assignment. It wasn’t some obscure bit of law–it was the foundation of everything they were talking about in their presentation. It was clear that these folks had never practiced what they were presenting. They were just talking, blowing ideas around that sounded good to them, without any anchor in law. And they were the university’s IP attorneys.

In another instance, again at a major public university, I was trying to get an outside counsel appointment for a copyright attorney with specific expertise in open source. The university’s legal office fought me. I got referred to the attorney who was assigned to deal with copyright. “What do you want to know?” she asked. “I can look it up in my copyright handbook.” I could look things up in handbooks myself. What I wanted was legal advice based on field experience with copyright in open source, and that’s what no one in the university’s vast legal office had. But they were more than willing to opine based on reading something or calling a friend.

And that’s where we get to with places like the University of Utah, or Oklahoma, or Texas, or Michigan, or Penn State, or Missouri, or any of the other universities that have put out legal-sounding policy and guidance regarding technology transfer that is just plain full of mount stupid. Folks are haphazard, imprecise, sloppy, and mostly just clueless. But they have positions of authority and aren’t afraid to use that authority to keep people in their places. They get together and compare notes with other folks who are also haphazard, imprecise, sloppy, and clueless. The result is a bozonet in which they all appear to one another to be practicing with at least slightly above-average skills, when they are largely incompetent.

It’s just that when we are talking about the output from $35 billion of federally supported research at universities each year, and we have universities that have by policy all but bottled up that output whether it can be owned or not (that is, “whether or not patentable”), then it would be well to get some people in who know at least enough to recognize that there’s a whole lot to know and that not all that many folks have the skills to get at much of it. Technology transfer professionals may know a university’s preferred policy practices, but often they have no idea whatsoever whether those practices are the ones they should be using. In a sense, by knowing university policy practices, these folks know even less than inventors and investigators about what the next step ought to be in any given opportunity. They know the next step must be something the university requires–disclosure, assignment, logging stuff into a database, creating a technology-available-for-licensing summary. These are answers supplied before the question is formulated–and they prevent asking the questions that ought to be asked.

To make matters worse, new hires in the profession are told a fake history of university invention management, given a faux version of the Bayh-Dole Act, filled with a happy myth about federal government patent management before Bayh-Dole, and taught to repeat silly nonsense about the role of patents in “commercialization,” such as that “what’s available to all will be used by none” and “non-exclusive licensing is just a tax.” Worse, they are told to accept the new plantation system created by the university patent brokers after Bayh-Dole–that the best academic researchers should give up their inventions to university patent brokers, so the inventions can be conveyed to companies under patent monopolies.

And so we arrive at mount stupid for university technology transfer. It’s a coarse name, yes. But then Richard Feynman got everyone worked up at a physics conference, after a bunch of papers on a topic, by asserting that the reason everything was so uncertain and complicated about the topic was that they all–including him–were just too stupid. People got up in a huff and walked out. The last thing properly credentialed people want is to feel that they really are stupid in the face of big, wild, crazy things–physics or innovation. If you work through the documents, as at the University of Utah, you can see, though, that folks really are just too unskilled, let us say. They throw claims around–they assert their claims are a matter of federal law–and it’s just not even close to accurate. Same at Emory regarding Bayh-Dole’s treatment of copyright (Bayh-Dole doesn’t address copyright). And they are, collectively, responsible for all the inventions (“whether or not patentable”) that $35 billion or so can generate each year.

One would think the practice would improve, would develop a degree of self-scrutiny, would even harbor some doubt–a suspicion that its practices aren’t all that effective but could be. I haven’t seen that sort of thing–certainly not at AUTM or APLU or AAU or COGR. Certainly not at most university technology transfer websites. No one starts out with:

Research-based innovation is often a matter of luck and timing, and often inventors have a better idea about that than the rest of us. There’s no one process sure to improve your luck, identify the next step to take, or introduce you to someone with money and insight willing to work with you. We can help if you think we can improve your luck, help you with timing, or introduce you to someone who might represent that next step, and otherwise we can keep out of your way. We do have a budget to file patent applications, and we do have access to legal counsel (ahem), if either of those might matter. Oh, and if we handle the transactions, the university indemnifies you for anything that might go wrong with downstream development. But consider your options. Dealing with an institution can be a pain in the ass. No matter how pleasant we aim to be–that’s our experience, too!

Nothing like that. It’s always a big diagram depicting some version of the linear model of innovation, in which research inventions are really just early-stage products trying to get across the “valley of death” funding gap, for which your university licensing folks are just the ones to hitch your wagon to because, heck, they find a lucrative patent license once every two decades or so, if they are lucky and have timing. But it’s all depicted as a process that happens all the time, when it doesn’t, and it’s not a process–it’s just the imposition of administrative procedures on something unknown to transform it into something “manageable.” Thus, technology death through institutional efforts to own and manage.

It doesn’t have to be this way. Institutional support for research discovery could be a huge boon, just as Vannevar Bush envisioned for support from the federal government. But to get there, we have to get a whole lot smarter–first about the foundations of the practice and then about the limitations of university bureaucratic processes to do anything much for research-based innovation, practical application, commercial development, public benefit, more research funding–any of that, in whatever order one wants it. Until then, it’s going to be mount stupid most places. People will reach erroneous conclusions and make unfortunate choices, and won’t even have the metacognitive ability to realize it.

Even when Research Enterprise points out how awfully, horribly bad their policies and practices are–and demonstrates the incompetence–nothing changes. Just as the folks in the bottom quartile could not bring themselves to change their self-assessments when shown the overall test results, university administrators and their legal advisors don’t change. Maybe it’s pride, maybe it’s fear of liability. Maybe it’s the idea that if no one complains, it’s all good. Maybe, though, it is mount stupid, and there’s nothing for it but incremental training.
