I remain awed by the scope of science, and I have a favorite illustration. Take a look at the chart, “The scale of the universe mapped to the branches of science and the hierarchy of science.” (http://en.wikipedia.org/wiki/Science#cite_note-1) The flow of logic and mathematics into physics and chemistry, on to cellular and functional biology, and finally to psychology and sociology fascinates me… all of it continually augmented by the ever-growing mass of information about the natural world and the human condition… and all produced by research. Most often, the results of this research are presented at scientific meetings and published in scientific journals. Both activities are presumably peer-reviewed, and the information is eventually translated into the media experience that now envelops us.
Most of this work is well done. We would not have progress, as we know it, without it. But simply because scientific papers are reviewed, published, and popularized does not necessarily make their research reproducible. It does not necessarily make them well evaluated. It does not necessarily, in fact, make them unbiased, accurate, or even honest. All of this work is done by people, and as people we have limits. Until we develop new tools for exploration, for example, we are limited by the ones we have. We are susceptible to error of all types; the list is long. We also have individual needs, and some of us have less integrity than others. Hence the need for skepticism and doubt. Let’s dig a bit deeper…
Results and Reproducibility
Many authors use “reproducible” and “replicable” synonymously. Not all do, however, and herein lies both some of the delight and some of the frustration I have with language. Chris Drummond is an expert in machine learning at the National Research Council of Canada. He argues that replicating studies, duplicating them as exactly as possible, does not advance knowledge the way reproducing them by independent means does, and that replication alone is not good science. (http://www.site.uottawa.ca/ICML09WS/papers/w2.pdf)
Drummond’s thoughts on what is good science and what is knowledge are provocative and instructive. And even if his distinctions prove little more than semantic rhetoric, our observations, ideas, and results need substantiation in some form if we are to act with confidence.
All too often, the results of published research cannot be achieved by duplicating the original experiments, leaving us in doubt about the validity of the initial work. In 2005 John Ioannidis, Professor of Medicine at Stanford, published one of the most cited articles in current scientific literature. The title: “Why Most Published Research Findings Are False.” (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1182327/) In 2012, “Psychological Science,” the journal of the Association for Psychological Science, devoted an entire section of the November issue to a crisis of confidence (http://pps.sagepub.com/content/7/6/528.full.pdf+html) in the replicability of psychological research. And if you think things have since improved, a massive collaborative study attempting to reproduce results published in three different, respected psychology journals managed only 35 successful replications out of 97 attempts. (http://etiennelebel.com/documents/osc(2015,science).pdf) I can’t resist quoting from the end of the summary portion of the article: “…there is still more work to do to verify whether we know what we think we know.”
Research and Peer Review
Peer review has proven a poor filter. A 2013 article in The Economist titled “Trouble at the Lab” (http://www.economist.com/news/briefing/21588057-scientists-think-science-self-correcting-alarming-degree-it-not-trouble) chronicles the many problems with peer review. These range from the embarrassing consequences of clever stings, one by a Harvard biologist and another by an editor of the British Medical Journal, to the failure of reviewers to conduct their own analysis of the data presented or to adequately assess the experimental purpose or design. Moreover, the performance of reviewers declines with experience (http://www.annemergmed.com/article/S0196-0644(10)01266-7/abstract). Would that reviewers aged as well as wine.
The articles cited also contain multiple references to egregious fraud and other dodgy practices. To me, this is the saddest part. I do, however, understand the motivation. When I most frequented the research laboratory, in the decade between 1965 and 1975, I viewed the authorship or co-authorship of published papers as “academic gold.” Those investigators with an extensive bibliography got jobs, grant money, promotions, and awards. Unsurprisingly, this has not changed. (http://pps.sagepub.com/content/7/6/528.full.pdf+html)
Intense competition dominates academia. Without integrity, the lure of easy and rapid advancement of one’s career is powerful.
So, How Should We Look at Scientific Research?
Most of us, including me, do not gain information or insight by reading scientific papers. We read or hear the sanitized and popularized versions available from the media, often in multiple formats. This translation further enhances the possibility of error, lack of due diligence, or dishonesty. Maintaining some degree of skepticism or doubt concerning what we are told is important. The philosopher George Santayana has given us another, more organic, view of this quality: “Skepticism is the chastity of the intellect, and it is shameful to surrender it too soon or to the first comer: there is nobility in preserving it coolly and proudly through long youth, until at last, in the ripeness of instinct and discretion, it can be safely exchanged for fidelity and happiness.” (www.philosophicalsociety.com/archives/skepticism.htm)
We all benefit from the insights new revelations give us, yet we should maintain some reserve about them until they are confirmed.