Tackling GPU Memory Size Limitations

Abstract

GPUs are now in widespread use for non-graphics applications such as machine learning, scientific computation, and computer vision, but many challenges remain in realizing their full potential. Some algorithms and data structure operations, originally developed with sequential CPU architectures in mind, appear inherently serial and require new methods to take advantage of the many-core GPU architecture. This dissertation describes methods for harnessing this massive parallelism to solve problems on large datasets while also grappling with the limited size of GPU memory.

First, we present an approach to maximum clique enumeration (finding all maximum cliques in a graph) on the GPU via an iterative breadth-first traversal of the search tree. To achieve high performance on the GPU, our implementation aims to maximize available parallelism and minimize divergence between threads. The biggest challenge for a breadth-first implementation is the memory required to store all of the intermediate clique candidates. To mitigate this issue, we employ a variety of strategies to prune away non-maximum candidates and present a thorough examination of the performance and memory benefits of each of these options. We also explore a windowing strategy as a middle ground between breadth-first and depth-first approaches, and investigate the resulting trade-off between parallel efficiency and memory usage. Our results demonstrate that when we are able to manage the memory requirements, our approach achieves high throughput for large graphs, indicating that it is a good fit for the GPU. We demonstrate an average speedup of 1.9x over previous parallel work, and obtain our best performance on graphs with low average degree.
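To make the breadth-first strategy concrete, the following sketch is a minimal, sequential illustration of level-by-level candidate expansion; it is not the dissertation's GPU code, and the function name, graph representation (an adjacency-set dictionary), and helper details are assumptions made for illustration. The frontier at level k holds every k-clique together with the vertices that could still extend it, which is exactly the intermediate state whose memory footprint the pruning and windowing strategies above are designed to control.

from typing import Dict, List, Set, Tuple

def maximum_cliques_bfs(adj: Dict[int, Set[int]]) -> List[Tuple[int, ...]]:
    """Enumerate all maximum cliques by breadth-first expansion of candidates.

    Each frontier entry is (clique, extenders): a clique found so far plus the
    vertices adjacent to all of its members (and larger than its largest vertex,
    to avoid emitting permutations). The entire frontier is expanded one level
    at a time, the parallelism-friendly but memory-hungry pattern described above.
    """
    vertices = sorted(adj)
    # Level 1: every vertex is a candidate clique of size 1.
    frontier = [((v,), {u for u in adj[v] if u > v}) for v in vertices]
    best: List[Tuple[int, ...]] = [(v,) for v in vertices]

    while frontier:
        next_frontier = []
        for clique, extenders in frontier:
            for v in sorted(extenders):
                # Keep only vertices adjacent to v (hence to the whole new clique).
                new_extenders = {u for u in extenders & adj[v] if u > v}
                next_frontier.append((clique + (v,), new_extenders))
        if not next_frontier:
            break
        frontier = next_frontier
        best = [c for c, _ in frontier]   # all cliques at the deepest level so far

    return best

if __name__ == "__main__":
    # Two triangles sharing an edge: the maximum cliques are (0,1,2) and (1,2,3).
    graph = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2}}
    print(maximum_cliques_bfs(graph))

On the GPU, each frontier entry can be processed by an independent thread or warp, which is where the available parallelism comes from; the cost is that the whole frontier must be materialized in memory, motivating the pruning and windowing described above.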

Finally, we present our GPU implementation of the quotient filter, a compact data structure designed to answer approximate membership queries. The quotient filter is similar to the better-known Bloom filter; however, in addition to set insertion and membership queries, the quotient filter also supports deletions and merging of filters without requiring rehashing of the data set. Furthermore, the quotient filter can be extended to include counters without increasing the memory footprint. We implement two types of quotient filters on the GPU: the standard quotient filter and the rank-and-select-based quotient filter. We describe the parallelization of all filter operations, including a comparison of the four different methods we devised for parallelizing quotient filter construction. In solving this problem, we found that we needed an operation similar to a parallel scan, but for non-associative operators. One outcome of this work is a variety of methods for computing parallel scan-type operations with a non-associative operator.
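For readers unfamiliar with the structure, the sketch below illustrates only the hashing scheme behind a quotient filter: each key's fingerprint is split into a quotient, which selects a slot, and a remainder, which is stored there. It is not the dissertation's GPU code; the class and parameter names are invented for illustration, and collisions are resolved with a per-slot bucket list rather than the real filter's compact occupied/continuation/shifted metadata bits, so it shows the quotient/remainder split and the deletion support, not the in-place encoding.

import hashlib

class ToyQuotientFilter:
    """Illustrative quotient/remainder hashing with per-slot buckets.

    A real quotient filter stores remainders in place with three metadata bits
    per slot; here each slot is a Python list, which keeps the example short
    while preserving insert / lookup / delete behavior.
    """

    def __init__(self, q_bits: int = 10, r_bits: int = 8):
        self.q_bits = q_bits                     # quotient bits -> 2**q_bits slots
        self.r_bits = r_bits                     # remainder bits stored per item
        self.slots = [[] for _ in range(1 << q_bits)]

    def _split(self, key: str):
        digest = hashlib.blake2b(key.encode(), digest_size=8).digest()
        fp = int.from_bytes(digest, "little") & ((1 << (self.q_bits + self.r_bits)) - 1)
        return fp >> self.r_bits, fp & ((1 << self.r_bits) - 1)   # (quotient, remainder)

    def insert(self, key: str) -> None:
        q, r = self._split(key)
        self.slots[q].append(r)

    def may_contain(self, key: str) -> bool:
        q, r = self._split(key)
        return r in self.slots[q]                # false positives possible, no false negatives

    def delete(self, key: str) -> None:
        q, r = self._split(key)
        if r in self.slots[q]:
            self.slots[q].remove(r)

if __name__ == "__main__":
    qf = ToyQuotientFilter()
    qf.insert("gpu")
    print(qf.may_contain("gpu"))    # True
    qf.delete("gpu")
    print(qf.may_contain("gpu"))    # almost certainly False

Because each remainder is stored independently of the original key, deleting an item or merging two filters built with the same parameters never requires rehashing the underlying data set, which is the property contrasted with the Bloom filter above.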

For membership queries, we achieve a throughput of up to 1.13 billion items/second for the rank-and-select-based quotient filter: a 3x speedup over the BloomGPU filter. Our fastest filter-build method achieves a speedup of 2.1--3.1x over BloomGPU, with a peak throughput of 621 million items/second and a rate of 516 million items/second for a 70% full filter. However, our filters do not perform incremental updates as fast as the BloomGPU filter: for a batch of 2 million items, we perform incremental inserts at a rate of 81 million items/second, a 2.5x slowdown compared to BloomGPU's throughput of 201 million items/second. The quotient filter's memory footprint is comparable to that of a Bloom filter.
