More than half of psychology research papers cannot pass replication test
- August 31, 2015
Only 39 out of a sample of 100 recently published scientific reports in psychology stood up to an attempt to replicate the findings, according to a new analysis.
That doesn't prove all the original findings were wrong, but it does raise concerns, according to the authors of the study and others.
It “suggests that there is still more work to do to verify whether we know what we think we know,” the authors wrote, publishing their findings in the Aug. 28 issue of the journal Science.
That original experiments can be reproduced with similar results “is a core principle of scientific progress,” they wrote. The idea is that “scientific claims should not gain credence because of the status or authority of their originator but by the replicability of their supporting evidence.”
But there is little data showing how many past studies hold up to this scrutiny, they added. The concern is that the search for truth may be compromised by a widespread bias putting too much of a premium on results that are simply interesting. Such a bias can affect both scientists and the editors who select their work for the academic journals that disseminate their findings.
“This very well done study shows that psychology has nothing to be proud of when it comes to replication,” Charles Gallistel, president of the Association for Psychological Science, told Science in a news article accompanying the findings.
But the authors and others stressed that the problem isn't unique to psychology.
“We investigated the reproducibility rate of psychology not because there is something special about psychology, but because it is our discipline,” wrote the researchers, a collaboration led by Brian Nosek of the University of Virginia.
“Concerns about reproducibility are widespread across disciplines,” they added. “Reproducibility is not well understood because the incentives for individual scientists prioritize novelty over replication. If nothing else, this project demonstrates that it is possible to conduct a large-scale examination of reproducibility despite the incentive barriers.”
The paper's 270 contributing authors, part of an effort known as the Reproducibility Project, collaborated with the authors of the original findings in carrying out replication experiments.
“A direct replication may not obtain the original result for a variety of reasons,” the report cautioned. Either study could be wrong. And “known or unknown differences between the replication and original study may moderate the size of an observed effect.”
The authors defined successful replication as depending on several criteria. One was whether the re-run of an experiment matched the original in yielding what are considered statistically “significant” (or statistically insignificant) results.
Looking at this measure, they noted, 97 of the original studies, but only 36 of the replications, had statistically significant results. Overall, “replication effects were half the magnitude of original effects, representing a substantial decline,” they wrote.
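The significance criterion can be sketched numerically. The sketch below is illustrative only, not the Reproducibility Project's actual analysis: it uses a plain two-sample z-test with hypothetical effect sizes and sample sizes to show how halving an effect size (as the replications did on average) can push a result past the conventional p < .05 threshold.

```python
# Illustrative sketch: how a halved effect size can turn a "significant"
# result into a non-significant one at the same sample size.
# All numbers are hypothetical; this is not the project's analysis.
import math

def two_sample_p(effect_size, n_per_group):
    """Two-sided p-value for a two-sample z-test (normal approximation).

    effect_size: standardized mean difference (Cohen's d).
    n_per_group: number of participants in each of the two groups.
    """
    se = math.sqrt(2.0 / n_per_group)                  # std. error of the difference in means
    z = effect_size / se                               # test statistic
    phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # standard normal CDF
    return 2.0 * (1.0 - phi)                           # two-sided p-value

p_original = two_sample_p(0.6, 30)     # hypothetical original effect
p_replication = two_sample_p(0.3, 30)  # same n, half the effect size

print(f"original:    p = {p_original:.3f}")     # below the .05 threshold
print(f"replication: p = {p_replication:.3f}")  # above it
```

With the effect halved and the sample size unchanged, the replication's p-value lands well above .05 even though a real (smaller) effect may still be present, which is one reason a failed replication does not by itself prove the original finding wrong.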
But despite the importance of reproducibility, the authors stressed, it shouldn't always be expected “from the onset of a line of inquiry through its maturation.”
They continued: “This is a mistake. If initial ideas were always correct, then there would hardly be a reason to conduct research in the first place. A healthy discipline will have many false starts as it confronts the limits of present understanding.”
Source: http://www.world-science.net