eScholarship
Open Access Publications from the University of California

Micro-Architecture and Systems Support for Emerging Non-Volatile Memories

  • Author(s): Bhaskaran, Meenakshi Sundaram
  • Advisor(s): Swanson, Steven
Abstract

Emerging non-volatile memory technologies such as phase-change memory, resistive random access memory, spin-torque transfer memory, and 3D XPoint memory promise to significantly increase I/O subsystem performance, but current disk-centric systems fail to exploit the bandwidth and latency characteristics of these memories. This dissertation presents three systems that provide hardware, system-software, and micro-architectural support for faster-than-flash non-volatile memories.

First, we explore system design for using emerging non-volatile memories (NVMs) as a persistent cache that bridges the price and density gap between NVMs and denser storage. Bankshot is a prototype PCIe-based intelligent cache with access latencies an order of magnitude lower than those of conventional SSDs. Unlike previous SSD cache designs, Bankshot relies on the operating system (OS) for heavyweight operations such as servicing misses and performing write-backs, while allowing cache hits to bypass the OS and its associated software overhead entirely.
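The split described above can be sketched as follows. This is an illustrative model, not the dissertation's code: the class name, the user-space hit path, and the `os_service_miss` routine are hypothetical stand-ins for Bankshot's actual hardware/OS division of labor.

```python
# Sketch of a Bankshot-style split: cache hits are served entirely in
# user space, while misses fall back to a heavyweight OS service path.

class BankshotLikeCache:
    def __init__(self):
        self.lines = {}        # block id -> cached data (user-space mapping)
        self.hits = 0
        self.misses = 0

    def os_service_miss(self, block):
        # Stand-in for the OS path: fetch from the backing store and
        # install the block in the cache (eviction/write-back elided).
        data = f"disk:{block}"
        self.lines[block] = data
        return data

    def read(self, block):
        if block in self.lines:            # hit: no OS involvement
            self.hits += 1
            return self.lines[block]
        self.misses += 1                   # miss: trap to the OS path
        return self.os_service_miss(block)

cache = BankshotLikeCache()
cache.read(7)      # miss -> serviced by the OS path
cache.read(7)      # hit  -> served entirely in user space
```

The point of the split is that the common case (a hit) pays none of the OS's software overhead; only the rare miss and write-back traffic take the slow path.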

Second, we extend the ability to define application-specific interfaces to emerging NVM SSDs so that a broad range of applications can benefit from low-latency, high-bandwidth access to the SSD's data. Our prototype system, called Willow, supports concurrent execution of application and trusted code within the SSD without compromising file-system protections. We present three SSD apps, caching, append, and zero-out, that showcase Willow's capabilities. Caching extends Willow's semantics to use SSD storage as a persistent cache, while append and zero-out extend the semantics for file-system operations.
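The idea of installing named "SSD apps" that run inside the device can be sketched like this. The class, handler signatures, and `rpc` dispatch are hypothetical illustrations of the pattern, not Willow's actual API; in the real system the handlers are trusted code executing on the SSD's processors.

```python
# Sketch of Willow-style SSD apps: named handlers installed into the
# device and invoked through an RPC-like dispatch, operating directly
# on the device's storage without host-side data movement.

class WillowLikeSSD:
    def __init__(self, size):
        self.blocks = bytearray(size)       # the device's storage
        self.apps = {}

    def install_app(self, name, handler):
        self.apps[name] = handler           # trusted code loaded into the SSD

    def rpc(self, name, *args):
        return self.apps[name](self.blocks, *args)

def append_app(blocks, offset, payload):
    # Append payload at the given offset; return the new end offset.
    blocks[offset:offset + len(payload)] = payload
    return offset + len(payload)

def zero_out_app(blocks, offset, length):
    # Zero a range in place, without shipping the data to the host.
    blocks[offset:offset + length] = bytes(length)

ssd = WillowLikeSSD(64)
ssd.install_app("append", append_app)
ssd.install_app("zero-out", zero_out_app)
end = ssd.rpc("append", 0, b"hello")
ssd.rpc("zero-out", 0, 2)
```

Because the operation runs where the data lives, an app like zero-out never moves the affected bytes across the PCIe link at all.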

Finally, we address the challenge of accessing byte-addressable emerging NVMs with higher-than-DRAM latency when they are attached to the processor memory bus, specifically for loads. We propose Non-Blocking Load (NBLD), an instruction-set extension that mitigates pipeline stalls from long-latency memory accesses. NBLD is a non-blocking instruction that brings data into the upper levels of the cache hierarchy. Unlike a prefetch instruction, however, NBLD triggers the execution of application-specific code once the data is resident in the cache, effectively hiding the memory's latency.
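The programming pattern NBLD enables, issuing a load with a continuation instead of stalling on it, can be modeled in a few lines. This is a software sketch of the idea only: the `nbld` and `cache_fill` functions are hypothetical models, not the actual ISA extension, and the hardware would run the continuation on cache-line arrival rather than via an explicit drain call.

```python
# Model of the NBLD pattern: issue a load and return immediately,
# registering application-specific code to run once the data arrives.

pending = []                      # in-flight loads: (address, continuation)

def nbld(addr, continuation):
    # Issue the load and return at once; the "pipeline" does not stall.
    pending.append((addr, continuation))

def cache_fill(memory):
    # Model the data arriving in cache: run each registered continuation
    # on the loaded value (the completion step NBLD adds over prefetch).
    while pending:
        addr, cont = pending.pop(0)
        cont(memory[addr])

nvm = {0x10: 21}                  # stand-in for byte-addressable NVM
out = []
nbld(0x10, lambda v: out.append(v * 2))
# ... independent work overlaps with the in-flight load here ...
cache_fill(nvm)
```

The contrast with a prefetch is the continuation: a prefetch only warms the cache and still leaves the consuming load to stall if it arrives early, whereas NBLD attaches the consuming code to the load's completion.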
