Open Access Publications from the University of California


UCLA Electronic Theses and Dissertations

Charge-trap transistors for neuromorphic computing

  • Author(s): Gu, Xuefeng
  • Advisor(s): Iyer, Subramanian S

As the demand for energy-efficient cognitive computing keeps increasing, the conventional von Neumann architecture becomes prohibitive in power and energy. Brain-inspired, or neuromorphic, computing has been extensively investigated over the past three decades because its distributed memory/processors and massive connectivity promise low-power operation. One critical device in such a system is the synapse: a local memory that stores the connection strength between neurons. Many devices, such as resistive memory, phase-change memory, the ferroelectric field-effect transistor, and flash memory, have been suggested as candidates for analog synapses. In this work, a CMOS-only and manufacturing-ready candidate, the charge-trap transistor (CTT), is investigated.

The analog programming characteristics of CTTs most pertinent to neuromorphic applications are first investigated: the analog retention, the fine-tuning of individual CTTs, the spike-timing-dependent plasticity, the weight-dependent plasticity, and the device-to-device variation. The insights from this part form the basis for the subsequent chapters, which apply CTTs to neuromorphic applications.
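Spike-timing-dependent plasticity is often modeled as a pair-based exponential rule: the weight change depends on the sign and magnitude of the pre/post spike-time difference. The sketch below illustrates that generic textbook rule only; the amplitudes and time constants are illustrative assumptions, not the measured CTT response from this work.

```python
import math

# Assumed, illustrative STDP parameters (not measured CTT values):
A_PLUS, A_MINUS = 0.01, 0.012       # potentiation/depression amplitudes
TAU_PLUS, TAU_MINUS = 20e-3, 20e-3  # time constants, 20 ms each

def stdp_dw(dt):
    """Weight change for one pre/post spike pair.

    dt = t_post - t_pre in seconds. Pre-before-post (dt >= 0) potentiates;
    post-before-pre (dt < 0) depresses. Both effects decay exponentially
    with the timing difference.
    """
    if dt >= 0:
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    return -A_MINUS * math.exp(dt / TAU_MINUS)
```

For example, `stdp_dw(5e-3)` (pre leads post by 5 ms) gives a positive weight change, while `stdp_dw(-5e-3)` gives a negative one, and the magnitude shrinks as the spikes move further apart in time.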

Next, two unsupervised-learning algorithms, winner-takes-all (WTA) clustering and temporal correlation detection, are investigated using CTTs as the analog synapses. For each algorithm, the feasibility of a hardware implementation is first studied, and the system performance is evaluated using experimentally measured CTT characteristics. An experimental demonstration is then presented using custom-built CTT arrays in a 22 nm fully depleted silicon-on-insulator (FDSOI) technology.
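The WTA clustering above can be sketched in software as classic competitive learning: each input activates the unit with the largest dot product (the analog of a synaptic column current), and only the winner's weights move toward the input. This is a minimal floating-point sketch of the algorithm, not the thesis's hardware scheme, where CTT weights are updated by programming pulses.

```python
import random

def wta_cluster(samples, n_units, dim, lr=0.1, epochs=10, seed=0):
    """Winner-takes-all (competitive) clustering sketch.

    Each input picks the unit with the largest dot-product response;
    only that winner's weight vector is nudged toward the input.
    Illustrative only -- not the pulse-based CTT update in hardware.
    """
    rng = random.Random(seed)
    # Random initial weights in [0, 1)
    w = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for _ in range(epochs):
        for x in samples:
            # Winner = unit with the largest dot product with the input
            winner = max(range(n_units),
                         key=lambda j: sum(w[j][k] * x[k] for k in range(dim)))
            # Move only the winner's weights toward the input
            for k in range(dim):
                w[winner][k] += lr * (x[k] - w[winner][k])
    return w
```

With inputs drawn from two well-separated clusters, the two units' weight vectors converge to the two cluster centers, which is the behavior the hardware demonstration aims to reproduce with analog CTT weights.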

Finally, the use of CTTs as analog synapses in an inference engine is considered. The fine-tuning of CTT weights in an array setting is first examined, as it is expected to differ from that of discrete devices because of half-selection and thermal disturbance from adjacent cells. The standard deviation of the difference between the target and the actually programmed weights is as low as 6% of the dynamic range. The programmed CTT array is then used as a dot-product engine, the core of an inference engine. The implications of imperfect array programming for the accuracy of an inference engine are then discussed.
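The effect of imperfect array programming on a dot-product engine can be illustrated with a simple noise model: each programmed weight deviates from its target by Gaussian error with standard deviation equal to a fraction of the dynamic range (6% is the figure quoted above). The model below is an illustrative assumption, not the thesis's circuit or error statistics.

```python
import random

def dot_product_engine(weights, x, sigma_frac=0.06, w_range=1.0, seed=0):
    """Analog dot product with per-cell weight-programming error.

    Each target weight is perturbed by Gaussian noise whose standard
    deviation is sigma_frac * w_range (e.g. 6% of the dynamic range).
    Returns one output per weight row. Illustrative model only.
    """
    rng = random.Random(seed)
    # Simulate programming: each stored weight misses its target slightly
    programmed = [[wij + rng.gauss(0.0, sigma_frac * w_range) for wij in row]
                  for row in weights]
    # Analog dot product of each programmed row with the input vector
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in programmed]
```

Comparing the outputs for `sigma_frac=0.0` (ideal programming) against `sigma_frac=0.06` shows how weight error propagates into the dot-product outputs, which is the quantity that ultimately perturbs the inference accuracy discussed above.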
