eScholarship
Open Access Publications from the University of California

UC Santa Barbara

UC Santa Barbara Electronic Theses and Dissertations

Analog Circuit Design and Optimization using Reinforcement Learning

Abstract

Since the turn of the century, there has been unprecedented growth in the use of integrated circuits (ICs), from mainframe computers to edge devices. With smart devices and sensor circuits becoming ubiquitous, demand is growing for faster IC build and optimization times. Analog circuit design and optimization is a critical phase of IC design that still relies heavily on extensive manual work by experienced human experts. For large-scale circuits this process is inevitably time-consuming, which severely limits production turnaround. Combined with ever-growing demand, this presents a considerable production bottleneck and motivates automated technologies for the optimization and design of analog circuits.

With the growth of artificial intelligence and machine learning technologies and algorithms, a new class of options has emerged for tackling problems in circuit design and automation. In recent years, the development of reinforcement learning (RL) algorithms has drawn attention, with related techniques being introduced into the analog design field for circuit optimization. However, for robust and efficient analog circuit design, a smart and rapid search for high-quality design points is more desirable than finding a globally optimal agent, as in traditional RL applications; this point was not fully considered in some previous works. In this work, we propose three techniques within the RL framework aimed at fast, data-efficient search for high-quality design points. In particular, we (i) incorporate design knowledge from experienced designers into the critic network to achieve better reward evaluation with less data; (ii) guide RL training with non-uniform sampling techniques that prioritize exploitation of high-quality designs and exploration of poorly-trained regions of the design space; (iii) leverage the trained critic network and a limited number of additional circuit simulations for smart, efficient sampling of high-quality design points. The experimental results demonstrate the effectiveness and efficiency of the proposed techniques.
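Technique (iii) can be illustrated with a minimal sketch: a cheap learned critic ranks many candidate design points, and the scarce simulation budget is spent only on the top-ranked ones. Everything here is an assumption for illustration: `critic` stands in for the trained critic network, `simulate` stands in for a circuit simulator (e.g. SPICE), and the toy quadratic reward surface is purely hypothetical, not the dissertation's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

def critic(designs):
    # Stand-in for the trained critic network: a toy quadratic reward
    # surface peaked at 0.5 in every normalized parameter dimension.
    return -np.sum((designs - 0.5) ** 2, axis=1)

def simulate(design):
    # Stand-in for a costly circuit simulation; returns the true figure
    # of merit, which the critic only approximates (noise added here to
    # mimic the critic/simulator mismatch).
    return -np.sum((design - 0.5) ** 2) + 0.01 * rng.standard_normal()

def critic_guided_search(n_candidates=1000, n_sims=10, dim=4):
    # (1) Sample many candidate design points cheaply.
    candidates = rng.uniform(0.0, 1.0, size=(n_candidates, dim))
    # (2) Rank all candidates with the critic at no simulation cost.
    scores = critic(candidates)
    top = candidates[np.argsort(scores)[-n_sims:]]
    # (3) Spend the limited simulation budget only on the top-ranked
    #     points, then keep the best simulated result.
    results = [(simulate(d), d) for d in top]
    best_fom, best_design = max(results, key=lambda r: r[0])
    return best_fom, best_design

fom, design = critic_guided_search()
```

The design choice this sketch captures is the separation of costs: critic evaluations are nearly free, so they filter thousands of candidates down to the handful that justify a full simulation run.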
