I’m sort of fascinated by the academics doing surveys to ascertain technology transfer practice. They don’t actually sully themselves by observing practice: that would take too long, be expensive, and would compromise some degree of (what to call it?) innocence of perspective. A survey is a lousy proxy for the observation of practice.
The unknowledgeable observer isn’t a particularly good observer of practice. And if the observer has some insight, what’s the point of a survey instrument? It’s like being a food critic, but only surveying other eaters. Or, no, it’s like saying, I won’t even *see* the food, even though I could, but rather I will form an opinion about it *from surveys*, and from that I will create policy recommendations pertaining to *cooks*.
Before survey results are accepted as anything at all, there ought to be at the very least one additional effort to replicate the claimed results, or to critique the method or the analysis. And even then, it’s just a questionnaire! There’s no telling whether folks who don’t practice even know what questions to ask. How does one cut through what people are programmed to say, what they say but don’t do, what they think sounds best of the choices they are given, or what they will do next, regardless of the past?
And *even then*, who cares what the academics come up with? It’s not a deliberative rhetoric; it’s a forensic one. And at that, it is only about what those involved in technology transfer say about their activity when responding to a survey. It takes something else to make the connection from what people say in response to surveys to what they actually practice.
One might argue, then, that one cannot reason from a survey to observation, though one might reason from observation to a survey. This, perhaps, is the Nomothetic Fallacy: a survey of technology transfer evaluates the survey’s own pre-conceived categories against statements about practice, but does not necessarily reveal any of the social, business, or deliberative elements of that practice.
Unfortunately, it is the academics who are first in line with their survey results to attempt to influence public policy on technology transfer. It is as if their results are more authoritative, for the heft of publication in journals, than practice itself. Can academics doing surveys speak for any part of technology transfer practice? Should innovation/science/IP policy follow their work without first an effort to verify or replicate their claims?
At some point, in the practicing arts, the insights that matter rest with those involved. And there, summarizing the frequency of answers from a pool selected however, randomly or otherwise, just doesn’t matter. We interviewed potential football players about how to run a two-minute offense. We interviewed many more people than have actually played football, because, you know, playing football can cloud your judgment. Or we interviewed a few great football players and coaches, but then threw in a whole lot of other football players and coaches, second and third string, anyone who ever offered advice. Or, well, we just surveyed the star players, the celebrities: the ones famous for scoring in the last two minutes.
Who does one listen to? Where does new practice arise? What confirms present practice as useful?