The Paradise of University Rhetoric About Science and Innovation

Ian Sample, writing for The Guardian’s Shortcuts blog, describes how MIT grad students in 2005 created a “fake science report” generator that produced bogus scientific articles for presentation at conferences. Now anyone can download the generator:

But this is the hoax that keeps on giving. The creators of the automatic nonsense generator, Jeremy Stribling, Dan Aguayo and Maxwell Krohn, have made the SCIgen program free to download. And scientists have been using it in their droves. This week, Nature reported, French researcher Cyril Labbé revealed that 16 gobbledegook papers created by SCIgen had been used by German academic publisher Springer. More than 100 more fake SCIgen papers were published by the US Institute of Electrical and Electronic Engineers (IEEE). Both organisations have now taken steps to remove the papers.

Maxim Lott at Fox News follows up with a further account.

It may be that there is “intense pressure” to publish, but where is the “intense pressure” to get it right? To contribute to the advance of knowledge rather than to one’s own career and political position within a university culture? One can rush to judgment and argue that the success of these hoaxes indicates that the peer review system is flawed. But even if the peer review system is flawed (and surely it is), there is more to it: there is too much money chasing academics for their published output and their travel budgets. I wonder if there is also simply too much uncaring money chasing research activities. That is, at some point, if a sponsor such as the federal government does not pay attention to outcomes, then it provides uncaring money, and it attracts participants that are, well, fine with that. If there is “intense pressure” to publish, then who is bringing this pressure, and why does publication count for more than insight?

If there is such intense pressure to publish, how come negative results are not also published? Richard Feynman, for one, argued that negative results–and all results–should be published:

In summary, the idea is to give all of the information to help others to judge the value of your contribution; not just the information that leads to judgement in one particular direction or another.

Studies might also be published that replicate (or fail to replicate) other studies. One might also expect much more access to metadata and bespoke software tools if academic scientists (and their institutions) were really about contributing to science and not about positioning themselves for the next big government grant. With the internet, making one’s studies public is relatively easy. The problem is establishing what sort of trust should be placed in those studies. At present, the answer has to be “very little.”

Even selectively publishing positive results is a kind of fraud on the science community. Worse, it is a huge insult to the engineering and development communities, where company and government engineers and technicians attempt to rely on published results. Or, given how much of published academic science is flawed–flawed analysis, flawed data collection, flawed experimental design, flawed reporting, fudged results, fabricated results–maybe industry and government engineers and technicians are quietly not using academic publications to do their work. Perhaps publication is just for show, because engineers cannot afford to read that stuff.

Consider this observation from a recent article in The Economist on the problem of bad science:

The governments of the OECD, a club of mostly rich countries, spent $59 billion on biomedical research in 2012, nearly double the figure in 2000. One of the justifications for this is that basic-science results provided by governments form the basis for private drug-development work. If companies cannot rely on academic research, that reasoning breaks down. When an official at America’s National Institutes of Health (NIH) reckons, despairingly, that researchers would find it hard to reproduce at least three-quarters of all published biomedical findings, the public part of the process seems to have failed.

That’s the technology transfer problem in a nutshell. Think about the propagation and amplification of suspect findings, not just to the public but among academic scientists, who may cite articles without waiting for confirmation that the results have been replicated. Five non-replicable studies cited together may make another twenty proposed for funding seem plausible enough–it may even look like “scientific progress” to study further variations on something that doesn’t necessarily exist but that, with the appropriate inattention to statistical thinking, can still appear to exist long enough to beat out others for the next government grant. Cynical? If so, one has to aim that cynicism at the NIH for publishing such ridiculous figures (and at The Economist for repeating them), or one has to face up to the extent of the problem. Either way, an institution’s integrity is fading.
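
The statistical mechanism is worth making concrete. What follows is a minimal sketch–mine, not anything from the article–of the standard false-discovery arithmetic: when only positive results are published, the false fraction of the literature depends on the prior odds that a tested hypothesis is true and on statistical power. All parameter values below are assumptions chosen purely for illustration.

```python
# A minimal sketch of the standard false-discovery arithmetic, assuming that
# only positive results get published. All parameter values are illustrative
# assumptions, not figures from the NIH or The Economist.

def published_false_fraction(prior_true, alpha=0.05, power=0.8):
    """Fraction of published positive results that are false, given the
    prior probability that a tested hypothesis is true, the significance
    threshold (alpha), and the statistical power of the studies."""
    true_positives = power * prior_true
    false_positives = alpha * (1.0 - prior_true)
    return false_positives / (true_positives + false_positives)

# If 1 in 10 tested hypotheses is actually true:
print(published_false_fraction(0.10))             # ~0.36 at nominal 80% power
print(published_false_fraction(0.10, power=0.2))  # ~0.69 at the low power common in practice
```

On assumptions like these, most published positives can be false without a single fabricated result; “appropriate inattention to statistical thinking” does the rest.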

One might decide that everyone should just be more “ethical.”  Yes, if we could just swap SIM cards in our brains, everyone might be programmed for an ideal world. But we don’t have that, so let’s leave magical spiritual transformations and government-directed “ethics training” aside for a moment. The question that needs to be asked is this one:

Would the practice of science be better off with less government money swamping out directions for research, commitment to contribute to science, and private efforts to develop that science?

I can hear the “Oh no, we cannot raise this issue” whispers immediately. More government money is necessary to cover the costs of issuing bonds to pay for the expansion of facilities anticipated to be needed to compete for yet more government funding. Even stopping the growth in government funding will tilt the universities toward default, unless they can extract that money from hospital fees charged to the government, or from students, in the form of ever higher tuition (funny, instructional faculty salaries are not going up at nearly the same rate).

The question is really about how one confronts, in a community, an attitude that is more interested in the value of having access to the resources of the community than in contributing to its efforts. Of course, there is always a component that pays tribute to contributing–“look, I published”–but there is something else to get at, something having to do with commitment and honesty, something that we cannot inspect in individuals but may observe from their actions, and only if we are paying attention. The problem may be sufficiently deep that the ones who would be in the best position to observe, aren’t. As Feynman puts it:

The first principle is that you must not fool yourself–and you are the easiest person to fool. So you have to be very careful about that. After you’ve not fooled yourself, it’s easy not to fool other scientists. You just have to be honest in a conventional way after that.

Let’s shift that focus from the scientists to administrators. It may be that university administrators and government grant administrators are all too ready to persist in a bubble of plausible deniability, a kind of “bullshitter’s paradise” in which the intense pressure to produce a public rhetoric of innovation and economic development encourages never paying close attention to the conduct of science, never actually caring about anything other than the money and the publications. (These are the things reported, if you notice–inventions are kept secret, and retractions of publications, or demonstrations that prior reports were misleading, flawed, or not replicable, are not announced with the same fanfare, in the same reporting channels, such as press releases issued by the institution hosting the research.) Universities have become advocates for their own interests, not for public science. The change is reflected in subtle ways, such as the gutting of accountability in the Bayh-Dole Act, which encourages university administrations to align themselves with money-making from scientific work: taking ownership of faculty work, withholding that work from public use (which might include testing and validation in industry), and using the patent system to run up the “value” of such withholding–only to find that the only takers of the resulting deals on offer are speculators interested in running up the value even more, freed of the qualms of university administrators.

When one examines a policy-based practice, one has to do full-path testing–not just recite the public story of best-case sincerity, but examine the full range of pathways accepted by practice under the policy. The sincere story of university patenting of faculty research is that doing so is a public benefit that takes the form of inducing private industry to invest in work that otherwise would “sit on the shelf” for lack of a monopoly incentive to develop the findings into commercial products. That story is a nice one–who can argue? But it is not the only story in play, and it is not the story that represents the mode, or even the average, chain of events–it is the story of a rare event, the under-1% event. Yet it is presented to stand for the whole range of other stories in the same category: stories of patents filed on inventions that weren’t patentable, on inventions that failed, on inventions that no one wanted, on inventions that were licensed to companies that failed to develop them or that were themselves failing while they tried; stories of disputes over the terms of industry research contracts, over licensing terms for patents, over restructuring those terms after a deal is signed, over university over-billing for legal services, over reporting of sales and royalties, over scope of license, over sublicensing, over failure to perform, over gawd knows what. None of this is reported in the annual reports issued by universities unless it involves money out of the ordinary–a big judgment for or against. In short, the aspiration that each and every invention has a shot at becoming a consumer-beneficial product cannot stand for the reality of practice. Yet it does.
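
To make the full-path idea concrete, here is a hypothetical tally of what a portfolio of one hundred university patents might contain. Every frequency is invented for illustration; only the under-1% success path echoes the figure above.

```python
# Hypothetical full-path tally for 100 university patents. Every frequency
# here is invented for illustration; only the ~1% success path comes from
# the text above. The point is the shape of the distribution, not the numbers.

paths = {
    "licensed and developed into a commercial product": 1,   # the story told in public
    "patented, but no licensee ever found": 55,
    "licensed, but never developed (or the licensee failed)": 20,
    "not patentable, or claims too narrow to matter": 10,
    "tangled in disputes over terms, billing, scope, or royalties": 9,
    "licensed to speculators to run up the 'value'": 5,
}

total = sum(paths.values())
assert total == 100

for story, count in paths.items():
    print(f"{count/total:4.0%}  {story}")
```

Reciting the first line as if it stood for the whole portfolio is precisely the selective storytelling at issue.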

The public relations offices at universities practice a selective pattern of publishing. They prefer to publish “positive” news with high drama or “potential” and ignore the ordinary events of science, including negative results and failures to replicate, which should be among the most ordinary and helpful of reporting. What is newsworthy about normal? In essence, the public relations offices at universities, willing to promote innovation potential and economic impact, are among the least honest parts of the whole system of academic research. Their press releases are dishonest (in Feynman’s sense), and it is those press releases that are read by the public, by legislators, by policy makers. In this regard, “make of it what you may” is also dishonest. Caveat lector is not an appropriate motto for a public institution claiming a reputation based on trust and a commitment to “knowledge.”

But the word “dishonest” is not quite right. The press offices no doubt want to get things right, to portray what they choose to portray accurately. But then consider this case. The University of Washington’s Center for Commercialization (C4C), under intense pressure to show that five years and about $100m of spending focused on startups had actually accomplished something, announced through an official UW press release that it had started 17 companies in one fiscal year under a signature program of UW’s president, Michael Young. Public information showed that only four of the named companies could have been started in that year, and two of those were simply long-developed administrative software repackaged to look like companies. Of the rest, some were not even companies, some were started at other universities and organizations, and some were started years earlier and perhaps should have been counted in some previous year’s activity. Given that six years ago UW was starting about ten companies a year, it would appear that C4C’s efforts had resulted in a net decrease in startups of about 50%, for a doubling of C4C’s administrative budget. But the question is: why would the UW’s “news and information service” not respond when notified of the problem? There was not even a “thank you for your information, we will look into it.”

At the University of Utah, the school ran an even grander program of creating paper startups, advertising these new companies to the state legislature as evidence of economic potential and great value to the citizens of the state, which triggered, among other things, a $93m allocation to expand the university’s research footprint, to enable all that potential to be realized. In the end, a unit at the university charged with doing the reporting could find only four new companies and 13 employees, and it expressly disclaimed that its study was about economic “impact”–it measured only economic “contribution.” The distinction matters: the report was concerned only with the effect of spending the $93m; there was no meaningful impact from the outcomes enabled by that spending. Yet the university’s press office turned around and called it “economic impact” anyway. Last fall, a state audit reported that the university had inflated its figures, blaming a “revolving door” of financial managers for “bad data.” Perhaps the managers did not feel comfortable living that sort of life. In late January of this year, Utah state senators in a hearing spoke of a “culture of untruth and lies” and “total and complete fabrication.”

Even this November 2012 slide deck (the link is now to a 2013 version) from a Utah senior administrator skips over the realities by hiding what might be meaningful data in aggregations and claiming as “economic impact” what appears to be no more than contribution. The talk is titled “Demonstrating the Value of the University as a Business and Innovation Driver.” The talk asserts that value; it does not demonstrate it. The word “demonstrate” here means “promote without evidence.” I am not saying that the senior administrator was part of the culture of lies; I am suggesting, however, that when the lies flatter, it is easy to fool oneself.

The slides move from a typology of economic development through research programs, to the extra state funding that rolls in for them, to a stack of assessment measures, to a “summary” (slide 8) that shows everything on target (with adjusted goals to ensure targets can be shown as being met)–except that the outcomes reported do not show impact, only aggregated figures. Faculty already doing research have now been added to the “USTAR” program, so there is no report of the marginal change in their activity–no way to tell how much of the new funding would not have been obtained anyway.

The following slide, labeled “economic impacts,” appears to report “economic contributions”–the effect of spending state money on research, not the effect of the outcomes of that research on anyone. Even starting nine companies, as reported, does not mean there is any economic impact in doing so, especially if the companies are shell companies built on a premise of potential, funded by yet more government money. But how would we know? All we get is a number, with the suggestion that we are to imagine companies with venture investment, busily building and selling product, hiring Utah citizens into dozens if not scores of new high-paying technology jobs. A full-path analysis of the stories that the figure of “9” might represent also includes paper companies that have no funding or operations; companies that were started and are now moribund; companies that exist to grab some SBIR funding and then will burn out after Phase I, or perhaps after Phase II; and companies that have moved out of state, which may be fine for the companies but not for the economic development being pitched to the citizens of the state of Utah. In short, without better information–even short of utter honesty–how are we to know whether nine companies means “yet more expansion of activity for which the state will have to provide subsidies” or “nine new product lines blossoming with substantial private investment, transforming Utah’s economy with new jobs, new revenue, new benefits”? Caveat lector.
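
Applied to the slide, a full-path reading might look like the following sketch. The category counts are entirely invented–none come from the University of Utah–and the point is only that a bare “9” is compatible with almost any underlying reality, including one with next to no in-state impact.

```python
# One invented decomposition of a headline count of "9 startups". None of
# these numbers come from the University of Utah; they only show how little
# a bare total constrains the underlying story.

nine_startups = {
    "operating in-state, privately funded, hiring": 1,
    "paper companies with no funding or operations": 3,
    "started, now moribund": 2,
    "SBIR-chasers likely to burn out after Phase I or II": 2,
    "moved out of state": 1,
}

assert sum(nine_startups.values()) == 9  # the headline number holds either way

# Only the first category supports the "economic impact" story being pitched.
print("plausibly impactful:", nine_startups["operating in-state, privately funded, hiring"])
```

A report that broke the number out along these lines would answer the question; a bare total cannot.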

The slides then turn from figures of activity to pictures of research buildings constructed with state funding, an outreach program to show everyone how wonderful the buildings are, and a few slides showing how the program has been nationally recognized. The university is clearly a “driver” of its own self-interest in a narrative about the desirability of innovation from research and of companies that may take advantage of that research. Beyond that, the evidence withers, but readers are left with the unmistakable impression that the program is a roaring success. And if getting new buildings from the state is a success, there are pictures to prove it. For the rest–the claim of economic development and prosperity arising from the results of University of Utah research enabled by this state investment–there is no information. Readers are apparently invited to create a plausible story of potential and hope for themselves.

This is the fool’s paradise of not having sufficient regard for the truth to report it completely. Or put another way, to paraphrase Feynman, of being committed to promoting institutional aspirations rather than seeing to it that the university “gives all the information to help others judge the value of its contribution, not just the information that leads to judgment in one particular direction.”

Perhaps there is too much money and status involved for administrators to be “honest in a conventional way.” The scams do not have to be as big as the ones at Washington and Utah–though I expect there are even bigger ones around for anyone who wants to dig. It is up to state legislators and investigative reporters such as Darwin BondGraham to get at the issues and ask for public accountability. But the big scandals that might surface are nothing compared to the daily, routine university administrative self-promotion that diverts funds from other initiatives that might actually promote innovation and economic vitality.

Administrators are deeply conflicted. They want the money and status that come with research funding. They have good reasons for wanting “new revenues” and for moving up in the rankings of research universities. They are not focused, however, on the integrity of the academic environment–that is something that seems to have vanished in practice. Administrators appear to be focused on the money, on “development,” on “organizational entrepreneurship.” They do not even think to encourage scientists to make everything public. It is not their job, as they see it. There is an entirely separate administrative pathway for what counts as “scientific misconduct.” Making everything available is expensive and bothersome, and it abandons competitive positions and exposes published claims to detailed critique.

If you are in university technology transfer, or research administration, or community outreach, or economic development, or news and information, you have to take a hard look at yourself and ask: what sort of business have I come to be in? It may well not be the business that existed when you started, and it may not be the part of the business that recruited you and brought you in at first. But there comes a point when you realize that no other component is providing accountability for all those billions of dollars a year of government-funded, university-hosted research. Not the universities, not the peer review system, not the government agencies, not the press, not the state legislatures, not the legal system. All slumbering peacefully with dreams of candy canes and unicorns, while the system quietly evolves for its own convenience and survival. The ability to produce hoax science articles and get them published merely shows how deep the sleep is. At some point, there comes a “My god, what have I done” moment–or perhaps folks hope to get out, retire, and fade away before such a realization comes upon them.

If money talks, then the outpouring of government money has persuaded university administrators and faculty alike that a system of competitive convenience, with all its pressures, its public sincerities, and its reports of innovation and economic impact, is proper and good, in need only of improvement, expansion, more federal funding, and more contribution from students going ever deeper into debt. It is not the total amount of spending on university research that matters; what matters is that the funding is not stupid funding, and that it does not go to expand a fool-bubble of administrative disinterest in scientific truth. The university administrative interest in science should be every bit as demanding, and as skeptical of scientific claims–if not more so–as that of the scientists themselves.

Administrative care about science is the start of public trust in university science, the basis for industry engagement, and the foundation for sound public policy. For their own good, university administrations should stop publishing press releases about their “economic impact” from research–no more vanity studies with faux multipliers, no more selective reporting of expenditure metrics as if they were impact metrics, no more schemes to start shell companies and call it economic development. No more being Paddy West.

Better yet, university administrations should get out of the patent-licensing-for-profit business, and perhaps even out of the management of extramural research, migrating these functions to external agents. Administrations have abandoned their position of public trust with regard to research in order to pursue research money, with all sorts of rationales for doing so. They aren’t even breaking even in those efforts–it costs research universities more to spend extramural dollars than they bring in from sponsors. They have put at risk the very institutions they were entrusted with protecting. We are witness to the decay.
