- Deiana, Allison McCarn;
- Tran, Nhan;
- Agar, Joshua;
- Blott, Michaela;
- Di Guglielmo, Giuseppe;
- Duarte, Javier;
- Harris, Philip;
- Hauck, Scott;
- Liu, Mia;
- Neubauer, Mark S;
- Ngadiuba, Jennifer;
- Ogrenci-Memik, Seda;
- Pierini, Maurizio;
- Aarrestad, Thea;
- Bähr, Steffen;
- Becker, Jürgen;
- Berthold, Anne-Sophie;
- Bonventre, Richard J;
- Bravo, Tomás E Müller;
- Diefenthaler, Markus;
- Dong, Zhen;
- Fritzsche, Nick;
- Gholami, Amir;
- Govorkova, Ekaterina;
- Guo, Dongning;
- Hazelwood, Kyle J;
- Herwig, Christian;
- Khan, Babar;
- Kim, Sehoon;
- Klijnsma, Thomas;
- Liu, Yaling;
- Lo, Kin Ho;
- Nguyen, Tri;
- Pezzullo, Gianantonio;
- Rasoulinezhad, Seyedramin;
- Rivera, Ryan A;
- Scholberg, Kate;
- Selig, Justin;
- Sen, Sougata;
- Strukov, Dmitri;
- Tang, William;
- Thais, Savannah;
- Unger, Kai Lukas;
- Vilalta, Ricardo;
- von Krosigk, Belina;
- Wang, Shen;
- Warburton, Thomas K
In this community review report, we discuss applications and techniques for fast machine learning (ML) in science: the concept of integrating powerful ML methods into the real-time experimental data processing loop to accelerate scientific discovery. The material for the report builds on two workshops held by the Fast ML for Science community and covers three main areas: applications of fast ML across a number of scientific domains; techniques for training and implementing performant, resource-efficient ML algorithms; and computing architectures, platforms, and technologies for deploying these algorithms. We also present overlapping challenges across the multiple scientific domains where common solutions can be found. This community report is intended to provide a wealth of examples and inspiration for scientific discovery through integrated and accelerated ML solutions, followed by a high-level overview and organization of technical advances, including an abundance of pointers to source material that can enable these breakthroughs.