Open Access Publications from the University of California

UC Riverside

UC Riverside Electronic Theses and Dissertations

Understanding Tradeoffs Among Algorithm Complexity and Performance


The Minimum Description Length (MDL) principle asserts that the best model for some data is the one that minimizes the combined cost of describing the model and the misfit between the model and the data. The goal is to maximize regularity extraction for optimal data compression, prediction, and communication.
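The two-part cost can be sketched as follows. This is a minimal illustrative example, not the dissertation's formulation: the fixed bits-per-parameter model cost and the Gaussian residual code length are common textbook choices assumed here.

```python
import math

def two_part_mdl(n_params, residual_ss, n_points, bits_per_param=32):
    """Two-part MDL score in bits: model cost plus data-misfit cost.

    Model cost: a fixed number of bits per parameter (a crude but common
    choice). Misfit cost: (n/2) * log2(SSE / n), the code length of the
    residuals under a Gaussian noise model, up to additive constants.
    """
    model_cost = n_params * bits_per_param
    misfit_cost = 0.5 * n_points * math.log2(residual_ss / n_points)
    return model_cost + misfit_cost

# A simple 2-parameter model with slightly worse fit can still win
# over a 10-parameter model whose extra parameters buy little accuracy:
simple = two_part_mdl(n_params=2, residual_ss=100.0, n_points=100)
complex_ = two_part_mdl(n_params=10, residual_ss=90.0, n_points=100)
```

Here the simpler model attains the lower total description length, which is exactly the selection behavior MDL formalizes.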

We apply the MDL principle to understand the relationship between the complexity or size of a problem representation and the model's performance and resource consumption. Evaluating a large number of candidate models enables us to compare tradeoffs and select the model that delivers the best performance at the least complexity.

The complexity of a problem can be characterized by factors such as the size of the representation, the number of lines of code, the processing time, and the order of computation. In some approaches, the order of magnitude is defined as the number of basic computations performed by the model; in a deep learning model, it can be defined as the number of nodes the data are propagated through. In the proposed approach, we introduce a criterion function that accounts for system resources via the number of FLOPs (floating-point operations), giving an accurate representation of model size. This is combined with performance (error/accuracy) to estimate diverse tradeoffs.
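The criterion described above might be sketched as a weighted sum of error and FLOPs. The weighting term `lam` and the additive form are assumptions for illustration; the dissertation's exact criterion may differ.

```python
def tradeoff_criterion(error, flops, lam=1e-9):
    """Combined cost: task error plus a FLOP-weighted complexity penalty.

    `lam` (an assumed hyperparameter) scales FLOPs into the same range as
    the error term; lower criterion values indicate a better
    accuracy/complexity tradeoff.
    """
    return error + lam * flops

# Comparing two hypothetical models: a small network with slightly
# higher error versus a large network that is far more expensive.
small = tradeoff_criterion(error=0.08, flops=5e8)  # 0.08 + 0.5  = 0.58
large = tradeoff_criterion(error=0.05, flops=2e9)  # 0.05 + 2.0  = 2.05
```

Under this weighting the small model is preferred: its modest accuracy loss is outweighed by the fourfold reduction in compute.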
