- Schulz, Robert;
- Barnett, Adrian;
- Bernard, René;
- Brown, Nicholas;
- Byrne, Jennifer;
- Eckmann, Peter;
- Gazda, Małgorzata;
- Kilicoglu, Halil;
- Prager, Eric;
- Salholz-Hillel, Maia;
- Ter Riet, Gerben;
- Vines, Timothy;
- Vorland, Colby;
- Zhuang, Han;
- Bandrowski, Anita;
- Weissgerber, Tracey
The rising rate of preprints and publications, combined with persistent inadequate reporting practices and problems with study design and execution, has strained the traditional peer review system. Automated screening tools could potentially enhance peer review by helping authors, journal editors, and reviewers to identify beneficial practices and common problems in preprints or submitted manuscripts. Tools can screen many papers quickly, and may be particularly helpful in assessing compliance with journal policies and with straightforward items in reporting guidelines. However, existing tools cannot understand or interpret a paper in the context of the scientific literature. Tools cannot yet determine whether the methods used are suitable to answer the research question, or whether the data support the authors' conclusions. Editors and peer reviewers are essential for assessing journal fit and the overall quality of a paper, including the experimental design, the soundness of the study's conclusions, and potential impact and innovation. Automated screening tools cannot replace peer review, but may aid authors, reviewers, and editors in improving scientific papers. Strategies for the responsible use of automated tools in peer review may include setting performance criteria for tools, transparently reporting tool performance and use, and training users to interpret the tools' reports.