A Power Efficient Speculative Fetch Architecture

Abstract

Low-power research has flourished recently in an effort to address the packaging and cooling concerns of current microprocessor designs, as well as battery life in mobile computers. In this paper, we propose and evaluate a high-performance, energy-efficient front-end architecture. We outline a low-power instruction cache configuration and examine the benefit of decoupling the tag component of our cache from the data component. This decoupling enables a sophisticated cache replacement algorithm and an accurate yet energy-efficient mechanism for speculative fetching. Cache blocks are initially verified by the tag component of the cache; blocks that miss in the cache can be speculatively brought in from lower levels of the memory hierarchy. We introduce and analyze a number of auxiliary structures that enhance this design and maintain consistency between the time of verification and the actual cache block lookup. For the benchmarks we examine, our power-efficient cache configuration provides a 43% reduction in energy dissipation over a comparable prefetching scheme while attaining slightly better performance (measured in instructions per nanosecond).

Pre-2018 CSE ID: CS2000-0657
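
As a rough illustration of the mechanism the abstract describes, the sketch below models a decoupled tag array that verifies a predicted fetch address ahead of the data-array access and issues a speculative fill request to the next memory level when the tag check misses. This is a minimal C simulation, not the paper's implementation: the cache geometry, the structure and function names, and the speculative_fill placeholder are assumptions made for the example.

```c
/*
 * Minimal sketch (not the authors' implementation) of early tag
 * verification with speculative fetch on a miss.  The tag array is
 * decoupled from the data array, so a predicted fetch address can be
 * checked before the data access; on a tag miss, a speculative fill
 * request is sent to the lower memory level.  Sizes and names are
 * illustrative assumptions.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_SETS   64u          /* assumed direct-mapped i-cache geometry */
#define BLOCK_BITS 5u           /* assumed 32-byte cache blocks           */

typedef struct {
    bool     valid;
    uint32_t tag;
} tag_entry_t;

static tag_entry_t tag_array[NUM_SETS];   /* decoupled tag component */

static uint32_t set_index(uint32_t addr) { return (addr >> BLOCK_BITS) % NUM_SETS; }
static uint32_t tag_of(uint32_t addr)    { return addr >> BLOCK_BITS; }

/* Stand-in for a fill request to the lower level of the memory hierarchy. */
static void speculative_fill(uint32_t addr)
{
    printf("speculative fetch of block 0x%08x issued\n",
           (unsigned)(addr & ~((1u << BLOCK_BITS) - 1)));
}

/*
 * Early verification of a predicted fetch address.  A hit means the later
 * data-array access can proceed; a miss triggers a speculative fill so the
 * block can be on its way before the fetch actually reaches the cache.
 */
static bool verify_and_prefetch(uint32_t fetch_addr)
{
    uint32_t set = set_index(fetch_addr);
    bool hit = tag_array[set].valid && tag_array[set].tag == tag_of(fetch_addr);

    if (!hit) {
        speculative_fill(fetch_addr);
        /* Reserve the frame for the incoming block; in hardware the tag
         * would be committed when the fill returns. */
        tag_array[set].valid = true;
        tag_array[set].tag   = tag_of(fetch_addr);
    }
    return hit;
}

int main(void)
{
    verify_and_prefetch(0x00401000);   /* cold miss: speculative fill */
    verify_and_prefetch(0x00401004);   /* same block: verified hit    */
    return 0;
}
```

Because the tag check runs ahead of the actual instruction fetch, a fill issued on a miss can overlap with the time until the block is needed, which is where the design described in the abstract gains performance without paying the energy cost of fetching full data blocks speculatively.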
