- Ayachit, Utkarsh;
- Bauer, Andy;
- Duque, Earl PN;
- Eisenhauer, Greg;
- Ferrier, Nicola;
- Gu, Junmin;
- Jansen, Kenneth E;
- Loring, Burlen;
- Lukić, Zarija;
- Menon, Suresh;
- Morozov, Dmitriy;
- O'Leary, Patrick
- Ranjan, Reetesh;
- Rasquin, Michel;
- Stone, Christopher P;
- Vishwanath, Venkat;
- Weber, Gunther H;
- Whitlock, Brad;
- Wolf, Matthew;
- Wu, K John;
- Bethel, E Wes
A key trend facing extreme-scale computational science is the widening gap between computational and I/O rates; the ensuing challenge is how best to gain insight from simulation data when it is increasingly impractical to save that data to persistent storage for subsequent visual exploration and analysis. One approach to this challenge centers on in situ processing, in which visualization and analysis are performed while data is still resident in memory. This paper examines several key design and performance issues related to in situ processing at extreme scale on modern platforms: scalability, overhead, performance measurement and analysis, comparison and contrast with a traditional post hoc approach, and interfacing with simulation codes. We illustrate these principles in practice with studies, conducted on large-scale HPC platforms, that include a miniapplication and multiple science application codes, one of which demonstrates in situ methods in use at greater than 1M-way concurrency.