I have been following the “Climategate” situation involving the release of emails and software from the University of East Anglia. Apart from what appears to be scientific fraud on the part of a number of scientists, I’m considering what an episode like this means for technology transfer IP management, especially where software and data are concerned. In the US, under Circular A-110, __.36 (now 2 CFR 200.315), researchers receiving federal funds who publish papers that are used in agency policy-making are required to make their data available upon request. One might go further and argue that this obligation may also extend to the software and metadata behind the data. The point is: the data should stand up to scrutiny if it is going to be the subject of policy-making. And it’s clear that with the climate data, there was a heck of a lot of policy-making going on, and the scientists knew it.
It appears that over in the UK, the UEA scientists worked hard to prevent the release of both data and software. Furthermore, they appear to have been helped in this by UEA administrators, who assisted in blocking freedom of information requests. Similar things appear to have been happening in the US. If that’s the case, then it’s a tremendous blow to university science. One might now conclude that universities are not in a particularly good position to investigate their own conduct with regard to IP.
People are starting to speak out. A recent article in the Telegraph ends with:
[Dr. Don Keiller, deputy head of life sciences at Anglia Ruskin University] said: “What these emails reveal is a detailed and systematic conspiracy to prevent other scientists gaining access to CRU data sets. Such obstruction strikes at the very heart of the scientific method, that is the scrutiny and verification of data and results by one’s peers.”
Professor Darrel Ince, from the department of computer science at the Open University, added: “A number of climate scientists have refused to publish their computer programs; what I want to suggest is that this is both unscientific behaviour and, equally importantly, ignores a major problem: that scientific software has got a poor reputation for error.”
From the perspective of technology transfer, whatever the interest in “commercialization” may be, a university must be in a position to encourage–if not insist–that data, metadata, and software used to support claims made in the scholarly literature also be made readily available. This is the basis of independently validated science.
If one goes back to the five areas of IP practice–CANVIS–then it’s clear that scholarship IP practices must take precedence. Otherwise, it’s not really university research. What technology transfer offices have not, in general, worked out, however, is how to make IP available for research purposes (including research at for-profit organizations). University TLOs still do not routinely reserve rights for all research purposes (usually only for use at their own institution), and they do not establish in exclusive license agreements that the licensee has no right to sue to prevent research uses.
We can get into the differences among evaluation of claimed data, studies based on that data, and research that makes use of the data (such as using it as an operational tool). But the starting point is the necessity for universities to make data, metadata, and source code available for inspection.