Artificial intelligence: more transparency required

A new article calls on researchers to be transparent about the code and models used in the development of artificial intelligence

A group of scientists is pushing researchers and scientific journals to raise the standards of transparency and reproducibility for the models used in the development of artificial intelligence. Sharing code, models and computing environments would help uphold the standards of scientific research.

Artificial intelligence: we have a problem

An article published in January 2020 claimed that an artificial intelligence system could outperform human radiologists in breast cancer screening. The study caused a sensation in the scientific community and was covered by many newspapers. Closer examination, however, raised some concerns. The study lacked a sufficient description of the methods used, including the code and the computations involved. The lack of transparency about the data prevented other researchers from understanding exactly how the model works and how it might be applied in their own institutions. In principle, many said, the study was well done. However, the impossibility of replicating it and of learning how it was developed leaves it with almost no scientific value. According to Dr. Haibe-Kains, a professor of biophysics at the Vector Institute for Artificial Intelligence, this is just one example of a problematic model in computational research.

The call for transparency

In an article published in Nature, scientists from the University of Toronto, Stanford University, Johns Hopkins, the Harvard School of Public Health and MIT call on scientific journals and computational researchers to adopt higher standards of transparency. The rapid growth of artificial intelligence models is causing unease among many authors. While on the one hand it demonstrates expanding development of and interest in the field, on the other it must not escape the rules of science. Journals, susceptible to the allure of artificial intelligence, may lower their standards and accept articles that do not include all the material needed to make a study reproducible, often in contradiction with their own guidelines. In the article, the authors point to numerous frameworks and platforms that allow safe and effective sharing of data. To support the three pillars of open science, the researchers conclude, we need data sharing, code sharing and model sharing.
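By way of illustration only (this sketch does not come from the Nature article), here is a minimal Python example of what sharing a computing environment can look like in practice: fixing the random seed, hashing the code that was run, and writing a small "manifest" file to be published alongside the results. The file name manifest.json and the helper functions are hypothetical choices for this sketch, not a standard required by the authors.

```python
import hashlib
import json
import platform
import random
import sys
from pathlib import Path

SEED = 42  # a fixed seed, so anyone re-running the code gets the same randomness


def set_seed(seed: int) -> None:
    """Fix the sources of randomness used by this script."""
    random.seed(seed)
    # A real project would also seed numpy / torch / tensorflow here.


def file_sha256(path: Path) -> str:
    """Hash a file so reviewers can verify they are running the same code or model."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def write_manifest(out_path: Path, extra: dict) -> None:
    """Record the computing environment alongside the published results."""
    manifest = {
        "seed": SEED,
        "python": sys.version,
        "platform": platform.platform(),
        **extra,
    }
    out_path.write_text(json.dumps(manifest, indent=2))


if __name__ == "__main__":
    set_seed(SEED)
    code_hash = file_sha256(Path(__file__))
    write_manifest(Path("manifest.json"), {"code_sha256": code_hash})
    print("Wrote manifest.json; publish it together with the code and model weights.")
```

The point of the sketch is not the specific file format but the habit: every published result carries enough information (seed, environment, exact code version) for an independent group to reproduce it.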

The risks of a lack of transparency

The lack of transparency is not just a problem in artificial intelligence research. Researchers in every field are under pressure to “publish or perish”. Having to publish many articles quickly risks reducing their quality by thinning out the details. Experiments that cannot be reproduced, however, are a problem: other researchers cannot understand how the models work well enough to replicate them thoughtfully. In the long run this makes research harder, preventing or slowing down any clinical applications. In some cases, it could even lead to unjustified clinical trials, because what works for one group of patients or in one institution may not be appropriate for another.

Follow our science section for more news!