
sl8

(16,223 posts)
Mon Aug 19, 2024, 07:18 AM

Cash for catching scientific errors

https://www.nature.com/articles/d41586-024-02681-2

TECHNOLOGY FEATURE
19 August 2024

Cash for catching scientific errors

The ERROR project offers researchers a bounty for spotting mistakes in published papers — a strategy borrowed from the software industry.

By Julian Nowogrodzki

Malte Elson is blunt when it comes to science’s ability to self-correct. “The way we currently treat errors doesn’t work,” he says.

To prove his point, Elson, a psychologist at the University of Bern, highlights a well-known 2010 paper [1] by economists Carmen Reinhart and Kenneth Rogoff at Harvard University in Cambridge, Massachusetts. “This paper became highly influential in financial policies in Europe,” says Elson, where it “promoted austerity measures to reduce national debt”. Three years later, Thomas Herndon, an economics PhD student at the University of Massachusetts Amherst at the time, tried to replicate the paper’s results for a class assignment and discovered an error in a crucial spreadsheet used in the paper. The authors had selected only 15 of the 20 countries they meant to include for a key calculation [2]. When this and two other errors were considered, the study’s conclusions were less strong than they initially appeared, Elson says.
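[A minimal sketch of the kind of mistake described above: a calculation whose range silently drops rows, so a summary statistic is computed over 15 countries instead of the intended 20. The numbers and country labels here are made up for illustration only; this is not the Reinhart–Rogoff data or their actual formula.]

# Hypothetical growth figures for 20 countries (illustrative values only).
growth_by_country = {
    "A": 2.1, "B": 1.8, "C": -0.3, "D": 3.2, "E": 2.7,
    "F": 0.9, "G": 1.1, "H": 2.4, "I": -1.0, "J": 1.6,
    "K": 2.2, "L": 0.4, "M": 1.9, "N": 2.8, "O": 1.3,
    "P": 3.5, "Q": 2.0, "R": 1.2, "S": 0.7, "T": 2.6,
}

values = list(growth_by_country.values())

# Intended calculation: the mean over all 20 countries.
full_mean = sum(values) / len(values)

# Spreadsheet-style slip: the range stops five rows short,
# so only the first 15 countries enter the average.
truncated = values[:15]
truncated_mean = sum(truncated) / len(truncated)

print(f"Mean over all 20 countries: {full_mean:.2f}")
print(f"Mean over only 15 countries: {truncated_mean:.2f}")

[Running this shows the two averages differ, which is the point: an error of this kind leaves no warning in the output and only surfaces when someone re-derives the number from the raw data, as Herndon did.]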

This haphazard system of error detection makes no sense, Elson says. “We cannot seriously rely on coincidental discovery of errors.” Currently, looking for errors in published papers is neither systematic nor rewarded. Elson and his colleagues launched the Estimating the Reliability and Robustness of Research (ERROR) project in February to change that.

The ERROR project pays reviewers to check highly cited psychology and psychology-related papers for errors in code, statistical analyses and reference citations. The programme posted its first review in May — the first of 100 planned over 4 years. This month, the ERROR team aim to have the first 20 papers assigned to reviewers.

[...]
