Tensor Contractions with Extended BLAS Kernels on CPU and GPU
Authors:
- Shi, Yang
- Niranjan, UN
- Anandkumar, Animashree
- Cecka, Cris
- et al.
Published Web Location: https://doi.org/10.1109/HiPC.2016.46
Tensor contractions constitute a key computational ingredient of numerical multi-linear algebra. However, as the order and dimension of tensors grow, the time and space complexities of tensor-based computations grow quickly. Existing approaches for tensor contractions typically involve explicit copy and transpose operations. In this paper, we propose and evaluate a new BLAS-like primitive STRIDEDBATCHEDGEMM that is capable of performing a wide range of tensor contractions on CPU and GPU efficiently. Through systematic benchmarking, we demonstrate the advantages of our approach over conventional approaches. Concretely, we implement the Tucker decomposition and show that using our kernels yields a 100x speedup compared to implementations using existing state-of-the-art libraries.
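The idea behind a strided batched GEMM can be illustrated in a few lines. The sketch below (not the paper's implementation; the shapes and the specific mode contraction are chosen for illustration) shows how a contraction such as C[i, j, p] = Σ_k A[i, k, p] · B[k, j] decomposes into a batch of ordinary matrix multiplies, one per slice p, where each operand is reached by a fixed stride into the tensor rather than by an explicit copy or transpose:

```python
import numpy as np

rng = np.random.default_rng(0)
I, K, J, P = 4, 5, 6, 3
A = rng.standard_normal((I, K, P))   # order-3 tensor
B = rng.standard_normal((K, J))      # contraction matrix

# Batched view: each p-slice A[:, :, p] is a GEMM operand located at a
# fixed stride offset; a STRIDEDBATCHEDGEMM-style kernel performs all P
# multiplies in one call with no copy or transpose of A.
C_batched = np.empty((I, J, P))
for p in range(P):
    C_batched[:, :, p] = A[:, :, p] @ B

# Reference result for the same contraction via einsum.
C_ref = np.einsum('ikp,kj->ijp', A, B)

assert np.allclose(C_batched, C_ref)
```

The loop stands in for what the proposed primitive would execute as a single kernel launch; the point is that the batch of GEMMs touches the tensor in place, via strides, rather than materializing transposed copies first.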