Thu. Jan 20th, 2022

The results here are much less clear-cut. The extensive supplementary materials released by the replication team helpfully distinguish between “reproducibility” (if you redo the analysis with the same data and the same methods, do you get the same result?) and “replicability” (can a new experiment with new data reliably yield a similar result?).
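The distinction can be made concrete with a toy simulation. This is only an illustrative sketch, not anything from the replication project itself: the function name, the seeds, and the simulated “effect” are all invented for the example. Rerunning the same analysis on the same data (same seed) gives an identical number; a fresh experiment (new seed, new data) gives a similar but not identical one.

```python
import random
import statistics

def run_experiment(seed, n=1000):
    """Simulate measuring the same underlying 'effect' (true mean 0.5)."""
    rng = random.Random(seed)
    return statistics.mean(rng.gauss(0.5, 1.0) for _ in range(n))

# Reproducibility: same data and method (here, the same seed)
# yields exactly the same result.
first = run_experiment(seed=42)
rerun = run_experiment(seed=42)
assert first == rerun

# Replicability: a new experiment (a new seed, hence new data)
# yields a similar result, within sampling noise, but not an identical one.
replication = run_experiment(seed=7)
assert replication != first
assert abs(first - replication) < 0.2
```

Real biology experiments, of course, have no seed to fix, which is part of why even “reproducibility” in the strict sense is hard to achieve.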

The COS team has tried to be clear about how messy all this is. If an experiment fails to replicate, that does not mean the original finding is false; the problem could lie with the replication attempt rather than with the original work. Conversely, an experiment that someone can reproduce or replicate perfectly is not necessarily accurate, and it is not necessarily useful or novel.

But the truth is, 100 percent pure replication is not really possible. Even with the same strain of the same cell line, or the same genetically tweaked rat, different people run experiments differently. Maybe the experiments the replication team could not complete for lack of materials would have fared better. Perhaps the “high-impact” articles in the most prestigious journals are bold, risk-taking work that is inherently less likely to replicate.

Cancer biology has high stakes; it is, after all, supposed to lead to life-saving drugs. The work that failed to replicate for Errington’s team probably did not produce any dangerous drugs or harm any patients, because Phase 2 and Phase 3 trials weed out the bad seeds. According to the Biotechnology Industry Organization, only 30 percent of drug candidates make it through Phase 2 trials, and only 58 percent make it through Phase 3. (Good for ensuring safety and efficacy; bad for burning up research money and driving up drug costs.) But drug researchers quietly acknowledge that many approved drugs do not work very well at all, especially cancer drugs.
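Those two attrition figures compound. As a back-of-the-envelope illustration (mine, not the article’s, and assuming the phase outcomes are independent), the fraction of candidates entering Phase 2 that survive both phases is just the product of the two pass rates:

```python
# BIO figures quoted above: ~30% of candidates pass Phase 2,
# and ~58% of those pass Phase 3.
phase2_pass = 0.30
phase3_pass = 0.58

# Assuming independence (an illustration only), the combined survival
# rate is the product of the two pass rates.
survive_both = phase2_pass * phase3_pass
print(f"{survive_both:.1%} of candidates entering Phase 2 survive both phases")
# about 17.4%
```

So even after a result replicates well enough to justify a drug program, fewer than one in five candidates clears the later trial stages.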

Science obviously works, broadly speaking. So why is it so hard to replicate an experiment? “One answer is: science is hard,” Errington said. “That’s why we fund research and invest billions of dollars, to make sure cancer research can have an impact on people’s lives. Which it does.”

The point of less-than-extraordinary results, such as the cancer project’s, is to distinguish between what is good for science internally and what is good for science when it reaches the public. “These are two orthogonal concepts. One is transparency and the other is validity,” said Shirley Wang, an epidemiologist at Brigham and Women’s Hospital. She is co-director of REPEAT (Reproducible Evidence: Practices to Enhance and Achieve Transparency), an initiative that has replicated 150 studies that used electronic health records as their data. (Wang’s replication paper has not yet been published.) “I think the problem is that we want both together,” she says. “You can’t tell whether it’s good-quality science unless you can be clear about the methods and the reproducibility. But being clear doesn’t, by itself, make it good science.”

The point, then, is not to criticize specific results. It is to make science more transparent, so that results become more reproducible, more comprehensible, and perhaps more likely to translate into the clinic. At the moment, there is little incentive for academic researchers to publish work that other researchers can replicate; the incentives point elsewhere. “The metric for success in academic research is a paper published in a top-tier journal and the number of citations of that paper,” Begley said. “For industry, the metric of success is a drug on the market that works and helps patients. So at Amgen we couldn’t invest in a program that we knew from the beginning really didn’t have legs.”
