Government Funding For Research, 1

Out in Twitterland, I saw this tweet by Brett Blackham:

Arguably, research and development is so important that government should have nothing to do with it. However since 1980 a company or university could get government money to do research & still be issued patent monopolies.

It’s easy enough to dismiss the tweet as silly or ignorant. Government letting companies deal in patent monopolies goes back at least to the 1963 Kennedy patent policy, and letting universities deal in patent monopolies for health inventions started in earnest when the NIH’s Institutional Patent Agreement program was restarted in 1968.

But let’s not dismiss this tweet. Instead, let’s consider it as an interesting proposition. After all, if something that’s happening is a really great thing, like government funding for research, then there should be a good case to be made, reciting the fundamentals, that shows just how it is a great thing. So, let’s have at it.

First, let’s add a few things to the potential problems with government funding, just to make its defense not merely facile.

Here’s John Ioannidis, discussing the problems of published science (my emphasis):

Nevertheless, most new discoveries will continue to stem from hypothesis-generating research with low or very low pre-study odds. We should then acknowledge that statistical significance testing in the report of a single study gives only a partial picture, without knowing how much testing has been done outside the report and in the relevant field at large. Despite a large statistical literature for multiple testing corrections, usually it is impossible to decipher how much data dredging by the reporting authors or other research teams has preceded a reported research finding. 

We might ask, does the opportunity to get further federal funding create an environment where researchers must make their currently funded study appear productive in order to get the next grant? And might researchers also feel a need to make the government’s choice of research topics appear to be the right one?

Sure, Ioannidis’s critique is limited to misuse of statistics to argue for significance of findings, and he isn’t talking just about federally supported work or inventions in particular, but a lot of research at universities–about 60%–is funded by the federal government.

Science in the News, a blog at Harvard University, points out in “House of Cards: Is something wrong with the state of science?” that almost 80% of candidate compounds fail in pre-clinical trials, and 85% fail in early-stage clinical trials. Makes one wonder about the promptings of published science that companies rely upon when they make a go at developing new drugs. And that’s a flip side to the argument that most recently FDA-approved drugs have benefited from federal support. If a few hundred new drugs have relied on federally funded research, consider that in the context of perhaps ten times as many compounds that federally funded research has been mistaken about–something not so happily reported by people trying to make the case for federal funding of drug development.
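Taken together, those attrition rates compound quickly. A back-of-the-envelope sketch, treating the two quoted failure rates as independent stages (an assumption made purely for illustration):

```python
# Rough drug-pipeline attrition arithmetic, using the approximate
# failure rates quoted above. Treating the stages as independent is
# an illustrative assumption, not a claim about actual pipelines.
preclinical_fail = 0.80     # ~80% of candidate compounds fail pre-clinical trials
early_clinical_fail = 0.85  # ~85% fail early-stage clinical trials

survival = (1 - preclinical_fail) * (1 - early_clinical_fail)
print(f"Fraction surviving both stages: {survival:.0%}")   # 3%

# For each compound that survives, roughly this many were mistaken bets:
print(f"Failures per survivor: {1 / survival - 1:.0f}")    # ~32
```

On these rough numbers, each compound that makes it through the early pipeline rides on dozens of failures, which is the kind of ratio the paragraph above gestures at.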

The SITN blog comments on Ioannidis’s argument that most published studies are false:

The reasons for this are multifarious, but studies that are small scale, that look for small effects, that have high flexibility in what is tested and how results are assessed, and that are in a “hot” field are all more likely to report positive results when they shouldn’t. This is not an implication of fraud or scientific misconduct; instead, a fundamental reason behind this is that such studies are more likely to be influenced by biases.

Consider also this study by an Amgen researcher of published claims regarding cancer:

During a decade as head of global cancer research at Amgen, C. Glenn Begley identified 53 “landmark” publications — papers in top journals, from reputable labs — for his team to reproduce. Begley sought to double-check the findings before trying to build on them for drug development.

Result: 47 of the 53 could not be replicated. He described his findings in a commentary piece published on Wednesday in the journal Nature.

When Amgen researchers followed up with the authors of the papers they were attempting to replicate, they got answers like–yes, we had a problem with replication, too, but we went with the output that was the most likely to be publishable. And these are not just throw-away studies–they are the top studies in their field, “landmark” studies. Begley and Ellis’s Nature commentary is here. Another study found similar problems in replication of published research:

We received input from 23 scientists (heads of laboratories) and collected data from 67 projects, most of them (47) from the field of oncology. This analysis revealed that only in ∼20–25% of the projects were the relevant published data completely in line with our in-house findings (Fig. 1c). In almost two-thirds of the projects, there were inconsistencies between published data and in-house data that either considerably prolonged the duration of the target validation process or, in most cases, resulted in termination of the projects because the evidence that was generated for the therapeutic hypothesis was insufficient to justify further investments into these projects.

A survey of researchers found that over 70% reported having failed in attempts to replicate published claims:

More than 70% of researchers have tried and failed to reproduce another scientist’s experiments, and more than half have failed to reproduce their own experiments. Those are some of the telling figures that emerged from Nature‘s survey of 1,576 researchers who took a brief online questionnaire on reproducibility in research.

In 2021, Nature published a follow-up study that found that fewer than half of high-profile publications on cancer could be replicated:

By one analysis, only 46% of the attempted replications confirmed the original findings. And, on average, the researchers observed effect sizes that were 85% smaller than originally reported.

There’s also a problem of diminishing effect in reproducibility studies:

Many scientifically discovered effects published in the literature seem to diminish with time. Dubbed the decline effect, this puzzling anomaly was first discovered in the 1930s in research into parapsychology, in which the statistical significance of purported evidence for psychic ability declined as studies were repeated. It has since been reported in a string of fields — both in individual labs (including my own) and in meta-analyses of findings in biology and medicine.

Ioannidis also published a follow-on article arguing that most clinical research studies, even when not false, were not clinically useful:

However, a lot of “basic” investigation does have anticipated deliverables, like research into developing new drug targets or new tests. This research may best be funded by industry and those standing to profit if they deliver a product that is effective. Much current public funding could move from such preclinical research to useful clinical research, especially in the many cases in which a lack of patent protection means there is no commercial reason for industry to fund studies that might nevertheless be useful in improving care.

We might then ask–is government funding for research creating an environment where the publication of iffy results as demonstrable fact is accepted as just the way things go? Ioannidis argues that industry might be better positioned to propose research that would have clinical significance. This suggestion mirrors the findings of the 1968 Harbridge House report, which found that inventions made at companies were much more likely to be used than were inventions that were licensed in (or licensed out). But Ioannidis’s suggestion runs in a different direction, too: it’s not the federal government funding companies (and choosing which companies and which proposals to fund), but people at the companies making those choices, with their own money–with, as it were, skin in the game.

This line of reasoning runs counter to that of the economist Dean Baker, who has proposed that the federal government could fund all of the drug development now being funded by the pharma industry, pay companies to produce new drugs on contract, and save billions of dollars over what it now spends on prescription drugs. The Ioannidis hypothesis is that company money may well be better at finding new drugs than federal money. That does not get at the problem of price gouging–using patents to exploit the sick–but it does suggest that if we wanted better medicines, we might consider that government funding might be a greater problem than it is a benefit.

And this idea of “skin in the game” has its own interest. The behavior of a sponsor of research appears to matter. Disinterested money has its downsides. And even then, federal money is not disinterested. At the very least, if one is to receive federal money, and desires to receive more in the future, one does not go out of one’s way to make the federal agencies look bad–whether for their choice of what to study, their choice of whom to fund, or their expectations for findings. One does not get global climate change funding from a federal agency to demonstrate that global climate change lacks an evidentiary basis in recent history. The research that gets funded accepts the federal agency’s premises for what is important, and how it is important. One does not bite the research narrative that feeds it.

The “skin in the game” for federal agencies appears to be two-fold–to train researchers and to benefit the public downstream. Call it “basic” research or “early stage” research, but recognize that these are placeholders for “don’t expect much” and “whatever gets done will have to be revised at great expense by others, because the government won’t follow up with funding to develop published findings or inventions.” Federal agencies care about looking good–at least good enough to get more funding from Congress. Federally supported researchers generally must align with this goal, too, and not embarrass their government funding sources. Don’t you think? I was involved in an NSF-funded study of its nanotechnology research. Big initiative, high visibility. But what we found was that by spreading the research funding around to scores of universities working in overlapping areas, each university filed patents on its little bits of carbon nanotubes, and one would need twenty or thirty patent licenses to practice anything coming out of that work. That was impossible, because the universities were hopelessly addicted to exclusive licensing deals. The IP was fragmented in such a way that the only response was to wait twenty years for the patents to expire. This part of our work was never published, as far as I know. One can see why–it would embarrass the NSF, and the PIs involved could expect not to get more NSF funding in the future.

If one is funded by a company to do science, and comes up with science that says the company should not spend more money on development in some area, that’s a positive. If one is funded by a federal agency, and comes up with science that says the federal agency is wasting its time funding in that area, that’s a negative. The skin in the game for federal agencies is, apparently, to get more money from Congress. For that, agencies need to receive credit for their funding, and need to ascribe public benefit, or at least the great potential for public benefit, to the research they have funded. Yes, go re-read Toby Appel’s Shaping Biology to see the underlying, repeated theme.

I present here, then, a sketch of just one aspect of Blackham’s proposal that research and development is too important to have government involved. Most published research is just bad–and we are not even talking about faked results, or reporting results at two weeks when the data at two months indicate the reverse. We are looking only at study designs that fail and still get federal funding, with no consequences for subsequent federal funding when researchers publish results that don’t hold up, can’t be replicated, or overstate the effect. There’s more funding tomorrow, no matter what, so long as you don’t embarrass or confront the agency that supports your work.

In 2009, Congress allocated $10 billion in “stimulus” money for research at the NIH–the ARRA program. Of that, $8.2 billion went to extramural research grants.

The ARRA moneys provided to NIH for “extramural” distribution include $8.2 billion for research grants, $1 billion to support construction and renovation at NIH-funded research institutions, and $300 million for the purchase of scientific equipment. An additional $500 million will support improvements and construction at NIH’s own research facilities.

What did the NIH do? Did it fund new, best projects? No–and here’s the amazing bit–it broadened its scope of funding to include proposals that were scored so low that they would otherwise not have been funded. Sure, some of those outlier proposals may have suggested science beyond the mainstream, but more likely what got funded was simply of lower quality, even by the NIH’s own standards. Yes, they didn’t change their funding system at all–they just funded poorer proposals. All in a day’s work.

If I can find the time, I will push this idea some more. Was Vannevar Bush right that the response to the federal government’s interest in science should be to fund more science–lots more science? Is the government’s dominance in providing research funding creating a bias in curiosity toward what the government is willing to fund? Do universities, in adjusting to federal funding as “research as an industry,” turn the benefit of conducting research into indirect cost funding for research administration and research facilities, rather than taking on the really hard problems and solving them? Fifteen years ago, some Stanford faculty called for the university to cap its research expenditures and focus on quality rather than continually straining research facilities by adding more. That idea went nowhere.

And even if government funding for research is a good thing in the abstract, is the present system going about it all wrong? Why proposal-based funding, where faculty have to spend over 100 hours per proposal and have to apply for maybe ten grants to have a hope of getting even one? Why not person-based funding? Why not contract funding, with the person calling the shots already being a scientist or technologist ready to have at it and get something done?

And we might look at the great SBIR fail–especially at the NSF, where small companies find it easier to get more SBIR funding than to get a product to market. I’ve worked with a number of SBIR-funded companies. They never get to product. I’ve worked with a number of companies that had really different ideas for technology–not on the NSF list of favored research themes–that never got SBIR funding. For these companies, government funding for research has not worked at all, other than to perpetuate more research.
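The arithmetic behind applying for maybe ten grants to have a hope of getting even one is simple binomial odds. A minimal sketch, assuming a flat 10% per-proposal success rate and independent submissions (both hypothetical numbers, not published agency statistics):

```python
# Chance of winning at least one award from k proposals, assuming each
# proposal independently succeeds with probability p (illustrative
# assumptions, not actual agency success rates).
def at_least_one_award(p: float, k: int) -> float:
    return 1 - (1 - p) ** k

p = 0.10  # assumed per-proposal success rate
for k in (1, 5, 10):
    print(f"{k:2d} proposals: {at_least_one_award(p, k):.0%} chance of at least one award")
# prints 10%, 41%, 65% for 1, 5, and 10 proposals
```

At over 100 hours per proposal, ten submissions means roughly a thousand hours of faculty time for about a two-in-three chance of landing a single award, which is the overhead the question above points at.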

So think about it, outside the box. Maybe problems in science, in productivity, and in anything new do have something to do with federal funding for research. Maybe the status quo needs to be dismantled, shifted–not merely “improved.” I know, it’s like imagining a dream world where directions can be shifted toward things that might actually work, or work out a lot better than what we’ve got. And there’s part of it: that dream world requires some tough assessment of where we are at, and despite all the warnings, that’s an assessment a lot of people would rather not make.
