Scalable and Analog Neuromorphic Computing Systems
Recent developments in artificial intelligence (AI) have been enabled largely by increases in hardware computing power. However, these systems are predominantly digital and are optimized for fast, accurate, and versatile computing. Analog computing systems are attractive for their energy efficiency and throughput in AI applications. In this dissertation, we explore and optimize a conventional CMOS transistor, the charge-trap transistor (CTT), as an analog in-memory computing unit for neural networks. In addition, to accommodate the inherent variability of analog devices and circuits, we develop novel methods to characterize and improve the resiliency of neural networks deployed on analog computing systems. Furthermore, because scaling a network plays a crucial role in enhancing its capability, this dissertation evaluates advanced system scaling technologies for scaling out analog computing hardware in a non-von Neumann architecture. Finally, our findings are brought together and realized in a hardware demonstration of an analog neuromorphic system. We conclude with the characterization results of the system and discuss several future directions for scalable and analog neuromorphic systems.