- Tang, Houjun;
- Byna, Suren;
- Tessier, François;
- Wang, Teng;
- Dong, Bin;
- Mu, Jingqing;
- Koziol, Quincey;
- Soumagne, Jerome;
- Vishwanath, Venkatram;
- Liu, Jialin;
- Warren, Richard
- Editor(s): El-Araby, Esam;
- Panda, Dhabaleswar K;
- Gesing, Sandra;
- Apon, Amy W;
- Kindratenko, Volodymyr V;
- Cafaro, Massimo;
- Cuzzocrea, Alfredo
Emerging high performance computing (HPC) systems are expected to be deployed with an unprecedented level of complexity due to a deep system memory and storage hierarchy. Efficient and scalable methods of data management and movement through this hierarchy are critical for scientific applications running on exascale systems. Moving toward new paradigms for scalable I/O in the extreme-scale era, we introduce novel object-centric data abstractions and storage mechanisms, named Proactive Data Containers (PDC), that take advantage of the deep storage hierarchy. In this paper, we formulate object-centric PDCs and their mappings to different levels of the storage hierarchy. PDC adopts a client-server architecture in which a set of servers manages data movement across storage layers. To demonstrate the effectiveness of the proposed PDC system, we measured the performance of benchmarks and I/O kernels from scientific simulation and analysis applications using the PDC programming interface, and compared the results with existing highly tuned I/O libraries. Using asynchronous I/O along with data and metadata optimizations, PDC demonstrates up to a 23× speedup over HDF5 and PLFS in writing and reading data from a plasma physics simulation. PDC achieves performance comparable to HDF5 and PLFS when reading and writing data of a single timestep at small scale, and outperforms them at scales larger than 10K cores. In contrast to existing storage systems, PDC offers user-space data management with the flexibility to choose the number of PDC servers depending on the workload.