Alasdair MacIntyre, in After Virtue, presents two contrasting arguments. Shortened up and recast slightly, they are:
(a) Justice demands that every citizen should enjoy equal opportunity to develop his or her talents. But that requires equal access to health care and education. That means no citizen should be able to buy an unfair share of those services, as might be offered on an open market. Thus, government should provide medical and educational services, and private providers should be abolished as unjust.
(b) Everyone has a right to incur only those obligations he or she desires. Doctors must be free to practice as they wish, and patients must be free to choose their health care. Teachers, likewise, must be free to teach as they choose; students, free to choose how they will learn. Freedom requires the abolition of restraints on private decisions imposed by licensing and regulation, whether by universities, trade associations, or government.
MacIntyre observes that the first argument is based on equality, while the second is based on liberty. His point is that the two arguments are incommensurate–they talk past each other. The vocabulary for discussing “ought” has come unhinged, what MacIntyre calls a symptom of “moral disorder.” He means by “moral disorder” not that we are living dissolute lives, but rather that our language fails to create a foundation for discourse in which such discussions might resolve. MacIntyre identifies “emotivism” as one source of disorder–“we simultaneously and inconsistently treat moral argument as an exercise of our rational powers and as mere expressive assertion” (11). Is what we ought to do a matter of reasoning or preference? Why should it come to such a dichotomy? What is wrong with our reasoning if it does not lead reliably to preferences? And what good is reasoning if it is merely cover for what our urges dictate?
Policy discussions are concerned with what we ought to do, and therefore are at their root moral discussions. What is “good” for society, for organizations, for innovation? Often these are cast, in the world of university research, in economic terms, about money and “value generation” and “economic development.” Such language obscures but does not remove the moral foundation of the discussion. The obscurity, however, makes it appear that innovation, and research, and scholarship, and instruction are about, really, money and not much else. Technology transfer is about “new sources of revenue” and “commercialization” and “private investment.” These are abstractions for which metrics might serve as proxies: money from licensing, money invested in new companies, money received from research sponsors, jobs created by this money, or better, an estimate of the jobs that could have been created by this money had it been used to pay for more people to work rather than go into bank or brokerage accounts. One can get caught up in refining the details of such discussions–how to find better metrics, how to identify best practices in stimulating local economies. It’s all abstractions managing abstractions. But once you buy into the idea, it’s attractive to keep going, if for no other reason than that there’s plenty of company on the road working the same way, and there’s comfort, if not a career, in doing what everyone else is doing.
In short, a common answer to a policy “ought” discussion is that we ought to be doing more of what it appears people are already doing, where the money is, or at least the smart money, which is generally the largest continuing dollop of money available. If there were a lot of investment in biotech in the 1980s, then, well, universities should focus on getting a share of the action by promoting biotech, and governments should give special benefits to biotech firms, and tax breaks to biotech investors, so there will be even more biotech money for everyone to scratch the ground for. This sort of policy “ought” might linger for decades after its sun has set, put forward as “best practices.”
It would appear that going any other direction would be against “best practices” or “not on the same page” or “working at cross-purposes” to the majority or the leaders or whoever is the current alpha lemming. Consensus, in this sort of context, has less to do with agreement and more to do with preventing anyone from exploring alternatives, something Joan Roelofs discusses in Foundations and Public Policy: The Mask of Pluralism.
For innovation, the interplay between the new (change, difference from the status quo) and imitation (adoption, diffusion of change) makes for a continual pattern of surges and responses. New stuff that no one adopts leads to the sound of silence, so to speak, where people invent things that no one ever practices. A more sophisticated way of putting it is “tear-water tea.” Without imitation, there is not a lot of “innovation” in something new. David Teece discusses this tension as well, in “Profiting from Technological Innovation.” Innovators need imitators–but they want the imitation on profitable terms. Those providing infrastructure would just as soon have the imitation on any terms that create a robust market for related goods and services. The interactions among these three groups establish who will end up with a share of the money. While university administrators may wish to characterize their involvement in research discovery as “innovator” work, they are actually mostly infrastructure. If they figured this out, they would have many opportunities for “new sources of revenue.”
Innovation policy discussions, too, appear to have a moral disorder to them. I have heard university administrators argue for a strongly regulated environment. The university must own all creative work (except worthless “traditional” scholarly works), and everyone must participate in the process, to ensure equal treatment. I have heard administrators even argue it would be unfair to those forced to participate in the university’s technology transfer program if any were allowed to make their own way. In this formulation, academic freedom means only that freedom which is “academic”–in that slant meaning of “academic” which suggests “without particular relevance.” Any sort of freedom that would mean something is to be prevented. That, in short, is the purpose of most university patent policies as they are presently configured. While the policies all aspire to good things in society–foremost among these lots of money for administrative use–they have little sense of whether their claimed programs “work” or what the consequences of destroying real scholarly and personal freedom might be.
In “Business Ethics: The Law of Rules,” the late Michael L. Michael looks at the effect of rules and incentives in business situations. He points to studies showing that when rules are imposed on a group, its members tend to externalize their motivation. When rules are interposed, individuals tend to shift their attention from their own ethical reasoning to considering the effect of the rule. In other words, the rule (and its possible sanctions) tends to hollow out our ethical reasoning. The University of California ethics policy, demanding that no one be allowed to do something for a “higher purpose” than policy allows, is perhaps the epitome of the problem. Michael points out that rules of these sorts are invariably “under-inclusive” (failing to cover all cases) or “over-inclusive” (demanding more or other than is useful). If attention shifts to whether one is “complying with the rule” rather than to the purpose of the rule in the first place, then the effect of the rule, in all of its over- or under-inclusion, is to force behavior to conform to the rule, regardless of the activity that otherwise was the reason the rule was formulated in the first place. The rule comes to be more important, ethically, than the activity it would guide.
Something similar happens with incentives. When elderly residents at a managed care facility were given tokens for stuff at the central store for doing various personal chores on their own, they stopped doing all the work that wasn’t rewarded with tokens. External rewards are no better than external sanctions in shaping behavior: both hollow out our ethical reasoning, our sense of what ought to be done, what will make for the successful execution of one’s responsibilities.
Look, then, at how university patent policies attempt to engage faculty and students. On the one hand, they demand compliance: no invention can go unreported to administrators; no invention can be owned by the individuals who make it; no invention can be deployed without the formal, written approval of an administrator. Failure to comply can lead to discipline, to accusations of unethical behavior, to dismissal, to the destruction of one’s reputation and career. We would not like that, would we? So be nice, comply.
On the other hand, the policies propose incentives. Making money from licensing is depicted as attractive. The university, having taken ownership of inventions, then generously offers to share income with inventors, if only they cooperate with the program. The incentive is substituted for any number of other things: payment of a royalty share in any normal environment would be consideration for the negotiated assignment of title to the invention. If inventors pursued development of their inventions without bureaucratic involvement, the incentives might take non-monetary forms, such as widespread use, or opportunities for recognition, or forms of indirect support, such as grant funding or requests for paid consulting. The incentive of the “royalty share” is another hollowing out of possibility, of choice, of autonomy, and therefore of ethical reasoning. The royalty-share incentive demands efforts to make money, and to make that money by licensing, and that means taking a patent position, and threatening anyone who might practice the invention without paying. At first, the threats are made in the hope of attracting a single investor willing to pay a significant fee to hold a monopoly position. When that effort fails, as it almost always does, after two or three or five years, the threats turn to those who might be practicing the invention anyway. The idea is, catch them infringing and make them pay for doing so. Use becomes a tort rather than a treat. Successful imitation is turned into a shakedown of industry. The purpose of the incentive built into policy, sweet sounding as it is, is to demand inventor acquiescence in such institutional money-seeking. The money, and the patent advantage, are more important than the research, the publication, the teaching, and the uptake. How dominates over if, and who benefits dominates over public service and responsibility.
Worse, rules and rule-like incentives hollow out faculty, students, visitors, and staff alike, and turn their attention to service to the rules, over-inclusive as they usually are, seeking to control everything that might be valuable, willing to release what becomes worthless, and only then typically under duress. (Why release the worthless? If it is worthless, then it doesn’t matter. If someone wants it released, then it is not worthless, and someone should pay for it. The logic is as crisp as it is blind.)
Can innovation take place in an environment that has been ethically hollowed out by rules and incentives imposed on creative individuals? If so, is this the sort of society that we would like to create and participate in? That is, is this a view of a hollowed-out society in which individuals follow the rules and respond to the incentives prepared for them by bureaucrats, or administrators, or “leaders”? In such a society, process is king, and rules and incentives are administrative tools. The future is successful because the rules and incentives punish anyone who does not contribute to making the future successful, and if there is a smacking possibility of failure, then one should work to make the future look successful anyway. There are no reports offered by university technology transfer offices that their patent policies are ineffectual, or that creative folks are hollowed out and bitter, or that those who would readily adopt are put off, and distraught by the institutionalized rules about technology transfer transactions.
There are alternatives, but they involve a reduction in rules and incentives. That in turn requires a cultivation of ethical reasoning. Machiavelli, that master speaker of the way things really are, in a defective world where folks tend to be better controlled by fear than by love, argues in The Prince that people used to the rule of a prince do not know what to do with freedom and can be readily defeated by the imposition of a new authority, even a cruel one, so long as the cruelty is contained, preferably brief, and the new order is not hated. People, Machiavelli argues, just don’t want to be oppressed. If a research university were to lift the rules and incentives in its patent policy, after thirty-some years of hollowing-out and oppression, it might well take some time for creative folk to learn how to handle their new freedom, to work out their own ethical framework for science, for scholarship, for the relationship of instructor to student, or instructor to industry, or instructor to investor. But it may well be that, if one wants to see the surges of spark to fire, of invention to innovation, of discovery to adoption, one has to shift vocabulary from that of regulation and money to something rather different, even incommensurate, that starts with freedom and offers suggestions to be tested, and not either rules or incentives imposed by policy.