- Farrell, Steve;
- Vose, Aaron;
- Evans, Oliver;
- Henderson, Matthew;
- Cholia, Shreyas;
- Pérez, Fernando;
- Bhimji, Wahid;
- Canon, Shane;
- Thomas, Rollin;
- Prabhat
- Editor(s): Yokota, Rio;
- Weiland, Michèle;
- Shalf, John;
- Alam, Sadaf R.
Deep learning researchers are increasingly using Jupyter notebooks to implement interactive, reproducible workflows with embedded visualization, steering, and documentation. Such solutions are typically deployed on small-scale (e.g., single-server) computing systems. However, as datasets and their associated deep neural network models grow in size and complexity, high-performance distributed systems become important for training and evaluating models in a feasible amount of time. In this paper we describe our vision for Jupyter notebook solutions that deploy deep learning workloads onto high-performance computing systems. We demonstrate the effectiveness of notebooks for distributed training and hyper-parameter optimization of deep neural networks using efficient, scalable backends.
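To illustrate the orchestration pattern the abstract alludes to, the sketch below shows how a notebook cell might farm hyper-parameter trials out to parallel workers. This is a minimal, hedged illustration only: it uses the standard-library `concurrent.futures` as a stand-in for the paper's actual HPC backends, and `run_trial` is a hypothetical placeholder that returns a synthetic loss rather than training a real network.

```python
# Hypothetical sketch of notebook-driven hyper-parameter search.
# concurrent.futures stands in for a real HPC backend; run_trial is a
# placeholder that returns a synthetic "loss" instead of training a model.
import itertools
from concurrent.futures import ProcessPoolExecutor

def run_trial(params):
    """Placeholder training run: returns a fake loss so the
    orchestration pattern is runnable without a real model."""
    lr, hidden = params
    return {"lr": lr, "hidden": hidden, "loss": 1.0 / (hidden * lr)}

if __name__ == "__main__":
    # Cartesian grid over learning rate and hidden-layer width.
    grid = list(itertools.product([0.01, 0.1], [64, 128]))
    # Evaluate trials in parallel; on an HPC system each trial would
    # instead run on its own node or set of nodes.
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(run_trial, grid))
    # Select the best configuration by lowest loss.
    best = min(results, key=lambda r: r["loss"])
    print(best["lr"], best["hidden"])
```

In an interactive notebook, the researcher would inspect `results` after each sweep, visualize losses inline, and steer the next grid accordingly; only the driver pattern is shown here.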