Optimization-Based Mappers and Lower Bounds for Tensor Problems
- Dinh, Grace
- Advisor(s): Demmel, James W
Abstract
Obtaining good performance for tensor problems requires computing an architecture-specific mapping from the algorithm to the target hardware. Choosing a mapping means optimizing an objective function over a large, nonconvex space; the objective represents a performance metric that may be modeled, simulated, or (when possible) measured directly. Each choice of objective involves different tradeoffs in speed, accuracy, and the strength of results that can be formally proven, and consequently requires its own optimization methods. In this dissertation, we describe optimization approaches for several such performance objectives.
• In a simple abstract memory-hierarchy model, we derive an unconditional communication lower bound for "projective" tensor operations (which include most dense linear algebra) and for convolutions. We show that this bound can always be attained (up to a constant factor) by solving a mathematical optimization problem, and that these methods provide significant benefits in practice, halving the communication volume in one case. (A toy version of this tile-size optimization is sketched after this list.)
• We extend these lower-bound techniques, and the corresponding algorithms, to handle combined costs of communication and computation in randomized matrix multiplication.
• We then describe how to support additional mapping decisions and account for more architectural detail by extending our optimization-based approach to analytical performance models. While these models are too complex to prove strong lower bounds against, empirical results show that optimization-based methods can find more performant mappings with significantly less runtime and lower sample complexity than pure search-based methods. (A toy analytical-model search is sketched below, after the tiling example.)
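To make the tile-size optimization from the first bullet concrete, here is a minimal sketch for matrix multiplication C[i,j] += A[i,k] * B[k,j] with a single-level cache of M words. The cache size, the even three-way split of capacity among the operand tiles, and the log-space formulation are illustrative assumptions, not the dissertation's general construction; in log-space the footprint constraints become linear, so an off-the-shelf LP solver suffices.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative setup: choose tile sizes (b_i, b_j, b_k) for
# C[i,j] += A[i,k] * B[k,j] on a single-level cache of M words.
# In log-space, "maximize tile volume subject to footprint <= M"
# becomes a linear program.
M = 32 * 1024  # assumed cache capacity, in words

# Simplifying assumption: split the cache evenly among the three tiles,
# so each tile footprint (b_i*b_j, b_i*b_k, b_k*b_j) is bounded by M/3.
cap = np.log(M / 3)

# Variables x = (log b_i, log b_j, log b_k); maximize their sum.
c = [-1.0, -1.0, -1.0]   # linprog minimizes, so negate the objective
A_ub = [[1, 1, 0],       # log b_i + log b_j <= log(M/3)  (C tile)
        [1, 0, 1],       # log b_i + log b_k <= log(M/3)  (A tile)
        [0, 1, 1]]       # log b_k + log b_j <= log(M/3)  (B tile)
b_ub = [cap, cap, cap]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
b_i, b_j, b_k = np.exp(res.x)
print(f"tile sizes ~ {b_i:.0f} x {b_j:.0f} x {b_k:.0f}")
# Recovers the classic result: square tiles of side ~ sqrt(M/3).
```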
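The third bullet's approach can be illustrated with a toy analytical model: a roofline-style latency estimate treated as an objective over (log) tile sizes and searched with a generic derivative-free optimizer. The model, the machine constants, and the soft cache penalty below are all stand-in assumptions for illustration; the dissertation's performance models and mapping spaces are considerably richer.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for an analytical performance model (not the dissertation's):
# a roofline-style latency estimate for a tiled matmul, written over
# log-tile-sizes so a generic continuous optimizer can search it directly.
N, PEAK_FLOPS, BW, M = 4096, 1e12, 1e11, 32 * 1024  # all assumed constants

def latency(log_tiles):
    bi, bj, bk = np.exp(log_tiles)
    flops = 2 * N**3
    # Simplified traffic model: each operand reloaded once per tile-step.
    words = N**3 * (1 / bi + 1 / bj + 1 / bk)
    # Soft penalty for exceeding the cache instead of a hard constraint.
    penalty = max(0.0, bi * bj + bi * bk + bk * bj - M) * 1e-3
    return max(flops / PEAK_FLOPS, words / BW) + penalty

res = minimize(latency, x0=np.log([8.0, 8.0, 8.0]), method="Nelder-Mead")
print("tile sizes:", np.exp(res.x).round(1), "est. time:", latency(res.x))
```

Because the model is cheap to evaluate, an optimizer can probe it far more times per second than a search that must compile and run each candidate mapping, which is the intuition behind the runtime and sample-complexity gains described above.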