- Kapoor, Sayash;
- Cantrell, Emily M;
- Peng, Kenny;
- Pham, Thanh Hien;
- Bail, Christopher A;
- Gundersen, Odd Erik;
- Hofman, Jake M;
- Hullman, Jessica;
- Lones, Michael A;
- Malik, Momin M;
- Nanayakkara, Priyanka;
- Poldrack, Russell A;
- Raji, Inioluwa Deborah;
- Roberts, Michael;
- Salganik, Matthew J;
- Serra-Garcia, Marta;
- Stewart, Brandon M;
- Vandewiele, Gilles;
- Narayanan, Arvind
Machine learning (ML) methods are proliferating in scientific research. However, the adoption of these methods has been accompanied by failures of validity, reproducibility, and generalizability. These failures can hinder scientific progress, lead to false consensus around invalid claims, and undermine the credibility of ML-based science. ML methods are often applied and fail in similar ways across disciplines. Motivated by this observation, our goal is to provide clear recommendations for conducting and reporting ML-based science. Drawing from an extensive review of past literature, we present the REFORMS checklist (Recommendations for Machine-learning-based Science). It consists of 32 questions and a paired set of guidelines. REFORMS was developed on the basis of a consensus of 19 researchers across computer science, data science, mathematics, social sciences, and biomedical sciences. REFORMS can serve as a resource for researchers when designing and implementing a study, for referees when reviewing papers, and for journals when enforcing standards for transparency and reproducibility.