Next-generation AI: From Algorithm to Device Perspectives
eScholarship
Open Access Publications from the University of California

UCLA Electronic Theses and Dissertations

Abstract

In recent years, neural networks have contributed significantly to the advancement of machine learning, achieving state-of-the-art results across a broad range of challenging tasks. The world is now seeing a global artificial intelligence (AI) revolution involving academia and industry alike: tech giants like Google and Microsoft are applying machine learning in their commercial products, while professors from every discipline - computer science, engineering, mathematics, biology, transportation - are scrambling to apply these methods to advance their research. Stock analysts are using AI to analyze and predict stock prices, medical experts to diagnose diseases and develop new drugs, and game developers to create sophisticated, human-like behavior in characters. At the national level, both NSF and DARPA have identified AI as one of the major national research directions. Our research targets the advancement of next-generation AI from three vertical aspects along the computing hierarchy. At the algorithm level, we propose the use of application-specific, bio-inspired neural networks for information processing: we develop models of specialized audio and visual neurons that are compatible with existing algorithms, and optimize neural architectures containing these neurons to understand their role in creating an efficient network. At the hardware level, we address the memory bottleneck in AI accelerators, proposing two schemes to overcome limitations caused by variations in critical paths and fabrication processes. At the single-device level, we recognize the significant performance gains available from devices that realize AI computation via physical mechanisms, and propose two spintronic structures capable of computing convolutions that achieve orders-of-magnitude higher efficiency than state-of-the-art technology.
These innovations provide the foundation for higher-performance and more efficient AI at different points in time throughout the coming decade: in the short term, algorithms that can be implemented immediately; in the mid term, hardware designs that can be realized within a few years; and in the long term, new device technologies to be adopted as the fabric of AI computation.
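The core operation the proposed spintronic structures compute is the standard discrete convolution. As a point of reference only (this sketch is not from the dissertation; the function name and example values are illustrative), a valid-mode 1-D convolution can be written as:

```python
# Minimal sketch of discrete 1-D convolution (valid mode): slide the
# flipped kernel over the signal and sum the element-wise products.
# This is the mathematical operation, not the spintronic implementation.

def conv1d(signal, kernel):
    """Return the valid-mode convolution of signal with kernel."""
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[k - 1 - j] for j in range(k))
        for i in range(len(signal) - k + 1)
    ]

# Example: a 3-tap box kernel sums each window of three neighbors.
print(conv1d([1, 2, 3, 4, 5], [1, 1, 1]))  # → [6, 9, 12]
```

A device that performs this multiply-accumulate pattern directly in its physics avoids the repeated memory fetches that dominate the cost of the same loop on a conventional accelerator, which is the source of the efficiency gain claimed above.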
