Why is innovation in innovation management so difficult? One explanation that has been developing in my mind involves what I call the “bozonet.” A bozonet is a largely inexpert social network incapable of distinguishing expertise from non-expertise. A bozonet is unsure of the future because it lacks the experience with which to anticipate that future. A bozonet represents itself as standard practice based on the ubiquity of like-seeming practice, and is ready to claim positions of authority and prestige though ill-suited for them, using organized appearances and plausible deniability to shift attention from what doesn’t work. A bozonet is keenly sensitive to social nuance, defends its own dignity, attacks its critics, depends on but does not necessarily acknowledge capable folks doing actual work, and works to prevent change that would fall outside its established comfort zone. When faced with making changes, a bozonet tends to make awful ones, but doesn’t know it. If a bozonet wins out, it becomes a norm, the way things just are. (One might pause here to think what this does to the idea of progress.)
We humans are susceptible to many social and cognitive faults, and we don’t leave them at the door when we go to work. A bozonet is something of a natural social form that frequents organizations, communities, and other arenas in which social networks form and manage civic life. I’ll try here to point to some reasons why a bozonet might form, and suggest that there may be good science that can help explain what is going on.
This is a big topic and I intend here only to sketch it out. World Wide Words has a nice discussion of the origins and use of “bozo.”
A bozonet, then, might be taken to be a group of such folks, interacting in some way to achieve an effect beyond that of any single participant, without realizing they are being foolish. If people participate in social networks as one of those things we just do, then it stands to reason that some of these networks include folks who, with regard to particular areas of expertise and ability, know less and can do less than others. But it’s very human, and I’d say it’s something we all participate in, one way or another.
But there is more to it, and though some of this is intended in jest, there’s also a bit that’s quite serious.
Consider this article, “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments”, by Justin Kruger and David Dunning. Across a number of tests, people clearly in the bottom 10th percentile thought themselves, on average, to be at the 60th percentile, while people who were very capable were also very capable of judging their skills accurately. This stands to reason. If you don’t know something well, it’s hard to know how much or how little you do know. One may as well assume there’s not all that much to know, and that what there is can be learned pretty easily, and that’s that. In some ways, it’s a sign of being a really good judge of one’s own skills to be willing to be vulnerable and not appear better than one is. When it comes to professional conduct, however, where folks are relying on you to be as good as you claim, it’s another thing altogether.
Not everyone is willing to appear as they are, given the chance for some spin. Think of the misrepresentation that is reported to go on with resumes, and that perhaps goes on much more frequently in the biography sections of social media sites.
Malcolm Gladwell, in Outliers, (here’s a summary with comment) makes the point that it seems to take somewhere around 10,000 hours of contact time to gain mastery of an area of skill. In workplace terms, that’s about five years of full-time practice. Gladwell is out to map explanations of success arising from odd combinations of circumstances, in an effort to challenge the genius and power versions that tend to circulate. What results, perhaps, is the idea that luck has structure, and so, often, does success. To develop skill in an area, it appears to take more than memorizing buzz words and knowing who’s who (which I call buzzhorpal). There has to be experience–real contact time with the world–underlying the work.
So far, so good. Let’s add a couple more things from neuroscience. One has to do with recent discoveries involving the relationship between our personal pasts and our ability to imagine our personal futures. Work by Daniel L. Schacter and Donna Rose Addis (Harvard), for instance. What these studies are showing is that our declarative memory (memory of our past, and memory of facts) is tied up with the same neural systems that manage our imagination of the future. See The Memory Lab for more information.
One might say we imagine our pasts and remember our futures. That is, we construct from pieces of our experience the things that we frame up as personal intentions, or simulations, or possible alternatives, in our futures. In one report, a person who had suffered a brain injury was able to go to work and do his job perfectly well, but was unable to recall having done so in the past. Furthermore, he was unable to say what he was going to do the next day. The parts of the brain that handle personal memories also handle much of our intending about the future. For this, it is worth distinguishing what we are able to intend to do, as distinct from mocking up something that would look good in a PowerPoint slide deck, full of buzzhorpal.
Schacter and Addis, in “Constructive memory: The ghosts of past and future,” an essay published in Nature (paywalled, sorry), suggest that a memory function consisting of “piecing together bits of the past may be better suited to simulating future events than one that is a store of perfect records.” For our purposes in explaining the bozonet, it’s enough to note that if one doesn’t have sufficient experience, then substituting factoids one has picked up may not be adequate–but it’s possible that this won’t matter if those you are presenting to are in the same condition, since they won’t be able to tell the difference. In fact, sitting in meetings exchanging opinions may become a primary experience. If you don’t have the experience–directly, or vicariously through training, books, mentoring, practice–then how can you imagine any personal future you are willing to attempt?
Another aspect of memory and learning goes with this. Schacter, in The Seven Sins of Memory (here’s a review), identifies a number of ways that we make mistakes with our memory. We all have these problems–forgetfulness, absent-mindedness, blocking stuff out, or being unable to forget something worth forgetting. One of these “sins,” bias, is particularly relevant here, for bozonet formation. Bias involves tending to make memories conform with current conditions. What we recall tends toward fitting in with our surroundings. Perhaps the Stockholm syndrome is somehow related, in which kidnapping victims tend to take on the cause of the kidnappers in an effort to win their release. If one’s professional standing depends on knowing something, and all one has is a heuristic model of how innovation works, such as the Linear Model, or the “technology transfer process”, then perhaps one feels a dependency on that model, because that’s all one has. If one hears repeatedly how technology transfer is supposed to work, then the texture of one’s own memories may tend toward conforming with this repetition. In learning, the right repetition matters.
I’m inclined to include under bias a desire for self-consistency in one’s expertise. Where we encounter complex stuff that doesn’t fit in, we tend to have difficulty accommodating it all at once, and substitute simplifying patches for things we don’t understand. The idea is, the patches make things consistent and hold the other things in place until one has the opportunity to work through the complexity in detail and figure it all out. But what if one never gets around to doing that? What if the patches sound good–because that’s what a simplifying patch does–so that an explanation with the patches sounds not only rational but even attractive? Simplicity over complexity. Works for scientific explanations (sometimes), but when a simplistic statement substitutes for the depth of what’s there, then it also may be a barrier to competence. How can one experience what one has come to believe doesn’t exist? Sadly, it appears to be incredibly easy to be incompetent and not know it, when it comes to complex areas of work. Our minds are set up to adapt for surviving, not necessarily for getting things right all the time.
Misremembering in learning environments also figures here. I worked for a time with an expert instructor in violin. I wasn’t learning violin–I’m a hopeless untaught guitar player–but was learning a bit about practicing. His point was that if a student was any good at an instrument, the teacher couldn’t afford to let the student practice alone. Making mistakes in practice would be a disaster, much worse than making a mistake in performance. Later, I realized why music teachers didn’t necessarily go to their students’ performances–their work was in the practicing, not the performance. All this makes one wonder about the role of “homework” in grade school–maybe that’s all bass ackwards, too, if the practicing is done in private and got wrong.
Just to make sure things are good and piled up on the stack, we need to include work done by neuroeconomist Greg Berns. I’m thinking especially of recent work that indicates that the presence of experts giving advice significantly changes the brain’s response to assessments of risk. Here’s a summary. Says Berns, “This study indicates that the brain relinquishes responsibility when a trusted authority provides expertise. The problem with this tendency is that it can work to a person’s detriment if the trusted source turns out to be incompetent or corrupt.” There’s a painful thought. If a bozonet sets up as expert, what hope is there for non-experts grappling with a ton of buzzhorpal?
Perhaps this is enough for now.