eScholarship
Open Access Publications from the University of California

Mixed Signal Neurocomputing Based on Floating-gate Memories

  • Author(s): Guo, Xinjie
  • Advisor(s): Strukov, Dmitri
Abstract

Nervous-system-inspired neurocomputing has shown great advantages in speed and power efficiency for object detection, speech recognition, and many other machine-learning-driven applications. Among the handful of approaches to implementing neurocomputing, analog nanoelectronic circuits are especially appealing because they may far outperform digital circuits of the same functionality in circuit density, speed, and energy efficiency. Device density is one of the most essential metrics for designing large-scale neural networks, as it allows high connectivity between neurons. Thanks to the high density achieved in traditional memory applications, building artificial neural networks with hybrid complementary metal-oxide-semiconductor (CMOS)/memory devices would enable both high parallelism and these performance advantages.

Synapses, the most numerous elements of neural networks, are efficiently implemented by memory devices. This application, however, imposes a number of requirements, such as continuous tuning of the memory resistance state, creating the need for novel engineering approaches. Here we report such engineering approaches for advanced commercial 180-nm ESF1 and 55-nm ESF3 NOR flash memory, facilitating the fabrication and successful testing of high-performance analog vector-by-matrix multiplication, the key operation performed during signal propagation through any neuromorphic network. Furthermore, we discuss recent progress toward neuromorphic computing implementations based on nonvolatile floating-gate devices, in particular experimental results for a prototype 28×28-binary-input, 10-output, 3-layer neuromorphic network based on arrays of highly optimized embedded nonvolatile floating-gate cells. The fabricated network's active components, including 101,780 floating-gate cells, occupy a total area below 1 mm². The network has shown 94.7% classification fidelity on the common MNIST benchmark, close to the 96.2% obtained in simulation. The classification of one pattern takes under 1 μs and under 20 nJ, both numbers much better than for the best reported digital implementations of the same task. Estimates show that straightforward optimization of the hardware, and its transfer to the already available 55-nm technology, may increase this advantage to more than 100× in speed and 10,000× in energy efficiency.
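The analog vector-by-matrix multiplication described above can be sketched numerically, assuming an idealized crossbar in which each cell's programmed conductance encodes a weight and row currents sum by Kirchhoff's current law (an illustrative behavioral model with made-up conductance values, not the fabricated circuit):

```python
import numpy as np

# Idealized analog vector-by-matrix multiply in a memory crossbar.
# Each cell's programmable conductance G[i, j] (siemens) encodes a weight;
# applying input voltages V[j] to the columns produces row output currents
# I[i] = sum_j G[i, j] * V[j] (Kirchhoff's current law at each row wire).

rng = np.random.default_rng(0)
n_in, n_out = 784, 10                        # e.g. 28x28 binary inputs, 10 classes
G = rng.uniform(1e-9, 1e-6, (n_out, n_in))   # assumed conductance range: 1 nS to 1 uS
V = rng.integers(0, 2, n_in) * 0.1           # binary inputs encoded as 0 V / 0.1 V

I = G @ V                                    # one layer of analog signal propagation
print(I.shape)                               # -> (10,)
```

The multiplication happens "for free" in the physics of the array, which is the source of the speed and energy advantages quoted above.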

As pure analog circuits cannot address the noise-accumulation problem, a practical solution would also require analog-to-digital and digital-to-analog stages for signal restoration. Highly energy-efficient and compact data converters are therefore expected to play an important role in future computing platforms. We perform an experimental demonstration of 6-bit digital-to-analog conversion (DAC) and 4-bit analog-to-digital conversion (ADC) implemented with a hybrid circuit consisting of Pt/TiO2-x/Pt resistive switching devices (also known as ReRAMs or memristors) and a CMOS operational amplifier (opamp). In particular, the ADC is implemented with a Hopfield neural network circuit.
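A binary-weighted resistive DAC of this kind can be modeled behaviorally as bit currents summed at an inverting opamp's virtual ground. The sketch below is an idealized model with assumed component values (reference voltage, unit conductance, feedback resistance), not the measured Pt/TiO2-x/Pt circuit:

```python
# Behavioral model of a 6-bit binary-weighted resistive DAC. Each bit i drives
# a conductance G_i = G0 * 2**i into the virtual ground of an inverting opamp:
#   Vout = -Rf * Vref * sum_i b_i * G0 * 2**i
# In the hybrid circuit, resistive switching devices would be tuned to these
# target conductances; the values below are illustrative assumptions.

def dac6(bits, vref=0.2, g0=1e-6, rf=2500.0):
    """bits: 6-element sequence of 0/1, LSB first; returns output voltage."""
    assert len(bits) == 6 and all(b in (0, 1) for b in bits)
    current = vref * sum(b * g0 * (2 ** i) for i, b in enumerate(bits))
    return -rf * current

print(dac6([1, 0, 0, 0, 0, 0]))  # LSB only: about -0.0005 V
print(dac6([1, 1, 1, 1, 1, 1]))  # full scale: about -0.0315 V
```

A Hopfield-network ADC works in the reverse direction: the network's energy function is chosen so that its minima correspond to the digital code closest to the sampled analog input, and the circuit settles to that code.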
