Scientists could help steer their fields in the right direction if many of them got together to bet on which new results seem most believable, a study suggests.
Just one scientific result doesn’t mean much. To know whether it’s valid, the experiment needs to be repeated many times with the same result. But people are people; mistakes, flukes and even fraud happen; and there is no time or money to re-run every single study. As a result, irreproducible research regularly finds its way into even respected scientific journals.
This is especially problematic for drug trials and other clinical research. A recent estimate put the costs associated with irreproducible pre-clinical research at $28 billion a year in the United States.
In the new study, the researchers sought a way to identify the studies of greatest concern—those most in need of re-testing.
The researchers, Yiling Chen, a computer scientist at Harvard University, and her colleagues, turned to prediction markets—investment platforms that reward traders for correctly predicting future events. The team chose 44 studies published in prestigious journals that were in the process of being re-tested or whose re-test results were not yet known, and found that the markets correctly predicted replicability in 71 percent of the cases studied.
“This research shows for the first time that prediction markets can help us estimate the likelihood of whether or not the results of a given experiment are true,” said Chen. “This could save institutions and companies time and millions of dollars in costly replication trials and help identify which experiments are a priority to re-test.”
Sixty-one percent of the replications used in this study did not reproduce the original results.
“Top psychology journals seem to focus on publishing surprising results rather than true results,” said Anna Dreber, of the Stockholm School of Economics and a co-author of the paper. “Surprising results do not always hold up under re-testing. There are different stages at which a hypothesis can be evaluated and given a probability that it is true. The prediction market helps us get at these probabilities.”
The research was published in the Proceedings of the National Academy of Sciences.
Prediction markets are gaining popularity in a number of realms beyond economics, especially politics. In a prediction market, investors predict future events by buying shares in their outcomes. As more traders back a particular outcome, its share price rises, so the price reflects the crowd's estimate of the probability that the outcome will occur.
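The price-to-probability link described above can be sketched with a simple automated market maker. The article does not say which mechanism the study's markets used; the logarithmic market scoring rule (LMSR) below is one common choice, so treat this as an illustrative assumption, not the study's actual setup:

```python
import math

def lmsr_price(q_yes: float, q_no: float, b: float = 100.0) -> float:
    """Instantaneous price of a 'yes' share under a logarithmic market
    scoring rule. The price lies in (0, 1) and can be read as the
    crowd's current probability estimate for the 'yes' outcome."""
    e_yes = math.exp(q_yes / b)
    e_no = math.exp(q_no / b)
    return e_yes / (e_yes + e_no)

def lmsr_cost(q_yes: float, q_no: float, b: float = 100.0) -> float:
    """Market maker's cost function; the price a trader pays for a
    batch of shares is the change in this cost."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

# Market opens with no shares sold: price is 0.50 (maximum uncertainty).
p0 = lmsr_price(0, 0)

# A trader who believes a study will replicate buys 60 'yes' shares;
# their total payment is the change in the cost function, and the
# purchase pushes the implied probability upward.
cost_paid = lmsr_cost(60, 0) - lmsr_cost(0, 0)
p1 = lmsr_price(60, 0)  # higher than p0
```

With the liquidity parameter `b = 100`, buying 60 shares moves the implied probability from 0.50 to roughly 0.65; larger `b` makes the price harder to move.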
Pollsters and pundits increasingly rely on prediction markets to forecast elections and other events, because such markets aggregate the judgments of many well-informed participants—the so-called wisdom of the crowd.
The researchers set up a market for each study and gave their pool of traders—all psychologists—$100 each to invest. Participants could buy shares priced anywhere between one and 99 cents on the outcome of the event—in this case, whether or not the research could be reproduced.
If the price for “reproducible” shares is low when the market closes, most people in the field don't believe the experiment can be replicated.
“One of the advantages of the market is that participants can pick the most attractive investment opportunities,” said Thomas Pfeiffer, a co-author and professor of computational biology at the New Zealand Institute for Advanced Study. “If the price is wrong and I’m confident I have better information than anyone else, I have a strong incentive to correct the price so I can make more money. It’s all about who has the best information.”
“Our research showed that there is some ‘wisdom of the crowd’ among psychology researchers,” said Brian Nosek, co-author and professor of psychology at the University of Virginia. “Prediction accuracy of 70 percent offers an opportunity for the research community to identify areas to focus reproducibility efforts to improve confidence and credibility of all findings.”
The next step in the research, the investigators said, is to test whether this might work in other fields, such as economics and cell biology.