
gSoFa: Scalable Sparse Symbolic LU Factorization on GPUs

Abstract

Decomposing a matrix $\mathbf{A}$ into a lower triangular matrix $\mathbf{L}$ and an upper triangular matrix $\mathbf{U}$, also known as LU decomposition, is an essential operation in numerical linear algebra. For a sparse matrix, LU decomposition often introduces more nonzero entries in the $\mathbf{L}$ and $\mathbf{U}$ factors than in the original matrix, so a symbolic factorization step is needed to identify the nonzero structures of the $\mathbf{L}$ and $\mathbf{U}$ matrices. Attracted by the enormous potential of Graphics Processing Units (GPUs), an array of efforts has surged to deploy the various LU factorization steps on GPUs, with the exception, to the best of our knowledge, of symbolic factorization. This article introduces gSoFa, the first GPU-based symbolic factorization design, with three optimizations that enable scalable LU symbolic factorization for nonsymmetric-pattern sparse matrices on GPUs. First, we introduce a novel fine-grained parallel symbolic factorization algorithm that is well suited to the Single Instruction Multiple Thread (SIMT) architecture of GPUs. Second, we tailor supernode detection into a SIMT-friendly process and strive to balance the workload, minimize communication, and saturate the GPU computing resources during supernode detection. Third, we introduce a three-pronged optimization to reduce the excessive space consumption caused by multi-source concurrent symbolic factorization. Taken together, gSoFa achieves up to a 31× speedup when scaling from 1 to 44 Summit nodes (6 to 264 GPUs) and outperforms the state-of-the-art CPU project by 5× on average. Notably, gSoFa also achieves up to 47 percent of the peak memory throughput of a V100 GPU in the Summit supercomputer.
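To make the notion of fill-in concrete, the following minimal Python sketch simulates Gaussian elimination on a boolean sparsity pattern only: eliminating pivot k marks entry (i, j) as nonzero whenever both (i, k) and (k, j) are nonzero. This is a dense, sequential illustration assuming no pivoting, and the helper name `symbolic_lu_fill` is hypothetical; it is not the gSoFa algorithm, which instead uses fine-grained parallel graph traversals on GPUs.

```python
import numpy as np

def symbolic_lu_fill(A_pattern):
    """Illustrative sketch (not gSoFa): compute the combined nonzero
    pattern of L and U by running Gaussian elimination on the boolean
    pattern of A, assuming no pivoting. Eliminating pivot k adds a
    fill entry (i, j) whenever (i, k) and (k, j) are both nonzero."""
    F = np.array(A_pattern, dtype=bool)
    n = F.shape[0]
    for k in range(n):
        rows = np.flatnonzero(F[k + 1:, k]) + k + 1  # nonzeros below the pivot
        cols = np.flatnonzero(F[k, k + 1:]) + k + 1  # nonzeros right of the pivot
        for i in rows:
            F[i, cols] = True  # fill-in from the outer-product update
    return F

# Example: an "arrow" matrix with a dense first row and column.
# Eliminating pivot 0 fills the entire trailing submatrix.
A = np.eye(5, dtype=bool)
A[0, :] = True
A[:, 0] = True
F = symbolic_lu_fill(A)
print(int(F.sum() - A.sum()), "fill-in entries")  # prints: 12 fill-in entries
```

The arrow example shows why predicting fill-in matters: the original pattern has 13 nonzeros, but the factors are fully dense, so storage for L and U must be sized from the symbolic factorization result rather than from A itself.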
