Cancer researchers may overestimate reliability of mouse studies
Cancer researchers may have too much confidence in their ability to repeat experiments in mice and achieve similar results the second time, according to a new study that adds to evidence of a so-called reproducibility crisis in medical research.
For the study, researchers asked 196 scientists to predict whether six mouse experiments published in leading medical journals could be replicated with the same effect size and the same level of statistical significance, meaning the results are unlikely to be due to chance.
None of the six experiments achieved the same statistical significance or effect size when they were repeated by the Reproducibility Project: Cancer Biology, a collaboration between Science Exchange and the Center for Open Science that independently tests the reliability of experiments published in major medical journals.
However, on average, the scientists participating in the survey predicted a 75 percent chance that the replications would reach statistical significance and a 50 percent chance that they would find the same effect size.
“This is the first study of its kind, and it deserves more study to understand how scientists interpret major reports,” said study lead author Jonathan Kimmelman of McGill University in Montreal.
“I think there’s probably good reason to think that some of the problems we have in science are not because people are silly, but because there is room to improve the way they interpret results,” Kimmelman said by email.
The paper follows numerous reports exploring a reproducibility crisis in biomedical research. Over the last 10 or 15 years, concern has grown that some techniques and practices used in biomedical research lead to inaccurate assessments of a drug’s clinical promise, Kimmelman’s team wrote in PLoS Biology, online June 29.
The findings also raise the possibility that training could help scientists overcome some of the cognitive biases that affect their interpretation of scientific reports, the researchers suggest.
The study team asked scientists, and trainees in elite research training programs, to evaluate the six mouse experiments, and found that more experienced and influential scientists tended to be more accurate.
“What is surprising is not just that researchers are not very accurate; in fact, they are less accurate than chance at predicting whether a study will replicate,” said Dr. Benjamin Neel, director of the Perlmutter Cancer Center at New York University.
Participants in the study reported their own level of expertise in the field, but they were not, on average, the most influential scientists, as measured by the number of publications they had and how often others had cited their work, said Neel, who was not involved in the study, by email.
“Probably the biggest reason the studies do not replicate is that the sample size is too small. For example, if only 5 to 10 mice are used, a study may not give the same result as one with 50 animals,” said Neel.
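Neel's point about sample size can be illustrated with a quick simulation (a hypothetical sketch, not taken from the study): even when a treatment has a real, sizable effect, an experiment with only 5 mice per group reaches statistical significance far less often than one with 25 per group, so a small study that happened to come out "significant" will frequently fail to replicate.

```python
import numpy as np
from scipy import stats

# Illustrative assumptions (not from the paper): a true treatment
# effect of one standard deviation (Cohen's d = 1.0), tested with a
# two-sample t-test at the conventional p < 0.05 threshold.
rng = np.random.default_rng(0)
EFFECT = 1.0
TRIALS = 5_000

def power(n_per_group):
    """Fraction of simulated experiments that reach p < 0.05."""
    hits = 0
    for _ in range(TRIALS):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(EFFECT, 1.0, n_per_group)
        _, p = stats.ttest_ind(control, treated)
        hits += p < 0.05
    return hits / TRIALS

small = power(5)    # most 5-mouse studies miss the real effect
large = power(25)   # the larger study almost always detects it
print(f"power with 5 mice/group:  {small:.2f}")
print(f"power with 25 mice/group: {large:.2f}")
```

With these assumed numbers, the small design detects the effect only roughly a third of the time, which is why two honest runs of the same underpowered experiment can easily disagree.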
“I think all preclinical results need to be validated by an independent laboratory before they are used as a basis for clinical trials,” he added.
For patients, even mouse experiments that fail or cannot be reproduced can help scientists narrow down what might work in humans, a necessary step toward discovering new treatments, said Dr. Anthony Olszanski, director of the phase 1 developmental therapeutics program at the Fox Chase Cancer Center in Philadelphia.
“Today it is very difficult to predict whether promising results in preclinical studies will be observed in humans,” said Olszanski, who did not participate in the study, via email. “However, the most effective anticancer agents in use today became drugs based in part on the results of early preclinical investigations.”