Steve Fiori on the SCISP listserv called the list’s attention to a blog post by David Funder, a research psychologist at UC Riverside. Funder’s post discusses a recent NSF workshop on the replication of research results. This issue goes to the heart of a central claim of research science: that published findings are generally reliable. But as is becoming clear, peer review is not doing an adequate job of screening publications. And, really, why should it do more than catch flaws in the argument? How can a reviewer check the raw data, the analysis tools, the actual conditions of the experiment, or the unreported data from other experiments and analyses?
One can also ask researchers to be “ethical,” “diligent,” and “smart.” Like Feynman, we can ask researchers not to fool themselves, and then to be conventionally honest with the rest of us. But that, too, doesn’t appear to be working. Perhaps it is too easy to fool ourselves into believing we have not fooled ourselves.