eScholarship
Open Access Publications from the University of California

UC Santa Cruz Electronic Theses and Dissertations
Design and Training of Memristor-Based Neural Networks

  • Author(s): Jia, Xiaoyang
  • Advisor(s): Kang, Sung-Mo
Creative Commons Attribution (CC BY) 4.0 license
Abstract

The modern Artificial Neural Network (ANN) is a nonlinear statistical data-modeling tool that can be optimized by learning methods grounded in mathematical statistics; it is therefore a practical application of mathematical statistics. An ANN can acquire the ability to make simple decisions and judgments, much like a human brain, giving it capabilities beyond formal logical inference.

Since progress in ANN research depends heavily on expanding network depth, a massive number of vector-matrix multiplications is required [3]. Energy efficiency is therefore a key factor in evaluating ANN performance. Because research on vector-matrix multiplication has made great strides, large and deep ANNs are now used to handle complex tasks and process massive amounts of data. The memories used by conventional ANNs, such as Static Random-Access Memory (SRAM) and Flash memory, are charge-based, which is energy-inefficient for ANN computation because they cannot directly implement vector-matrix multiplication in a crossbar array structure [3]. Over the past decade, the Nonvolatile Memory (NVM) crossbar array has shown its superiority in improving energy efficiency. Unlike conventional memory, NVM is current-based: a crossbar array can compute a vector-matrix product in a single step by sampling the currents it conducts, so it can serve as an analog vector-matrix multiplier for an ANN [3]. However, the nonlinear I-V characteristics of NVM devices place hard constraints on critical design parameters, such as the read voltage and the weight range, which substantially reduces accuracy.
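The single-step multiply described above follows from Ohm's law and Kirchhoff's current law: each cross-point stores a conductance G[i][j] encoding a weight, and applying read voltages V[i] to the rows produces column currents I[j] = Σ_i G[i][j]·V[i]. A minimal idealized sketch (the device values and function name here are illustrative assumptions, not from the thesis):

```python
# Idealized NVM crossbar read: every column current is one
# multiply-accumulate of the input voltages against the stored
# conductances, performed "in a single step" in the analog domain.

def crossbar_vmm(G, V):
    """G: rows x cols conductance matrix (siemens); V: row read voltages.
    Returns the column currents I[j] = sum_i G[i][j] * V[i] (amperes)."""
    cols = len(G[0])
    return [sum(G[i][j] * V[i] for i in range(len(V))) for j in range(cols)]

# A 2x3 crossbar whose conductances encode a 2x3 weight matrix.
G = [[1.0, 0.5, 0.0],
     [0.2, 1.0, 0.3]]
V = [0.1, 0.2]          # read voltages applied to the two rows

I = crossbar_vmm(G, V)  # equivalent to the vector-matrix product V @ G
print(I)
```

In a real device the nonlinear I-V curve noted above means G itself varies with V, which is exactly why the read voltage and weight range become constrained.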

In this work, we built an ionic floating-gate (IFG) memory unit device model in Cadence based on two previously presented devices: the ENODe and the ionic floating-gate (IFG) memory [2]. The IFG model consists of a polymer redox transistor connected to a conductive-bridge memory (CBM) [2]. We apply this IFG memory unit device to build Memristor-based Neural Networks (MNNs) and explore their performance. In an MNN, the selective and linear programming of a redox transistor array is executed in parallel by overcoming the bridging threshold voltage of the CBMs [2], which improves accuracy. This thesis concerns the design and training of MNNs. Since we apply MNNs to handwritten digit recognition, we designed an architecture capable of recognizing Modified National Institute of Standards and Technology (MNIST) handwritten digits, trained the MNN with the MNIST training dataset, and then evaluated its accuracy on the MNIST test dataset. In addition, we created some handwritten images of our own and used them to test the trained network to consolidate our conclusions. We also resized the original MNIST dataset with an image-pixel conversion algorithm so that the newly created training dataset could be used to train a smaller-scale MNN; we tested this network with both the MNIST test dataset and the newly created handwritten images, and compared its performance with that of the original network. We hope that the IFG memory unit device will have a significant impact on MNN design.
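One common form of the pixel-converting resize mentioned above is block averaging: each non-overlapping k×k block of the source image becomes one output pixel. The thesis's exact algorithm and target resolution are not specified in this abstract, so the factor-of-2 scheme below (28×28 → 14×14 for MNIST) is an illustrative assumption:

```python
# Sketch of a block-average downsampling, a plausible "image data pixel
# converting" step for shrinking MNIST inputs to fit a smaller-scale MNN.

def downsample(img, factor=2):
    """Average each non-overlapping factor x factor block of a 2-D list
    of pixel intensities; image dimensions must be divisible by factor."""
    h, w = len(img), len(img[0])
    out = []
    for r in range(0, h, factor):
        row = []
        for c in range(0, w, factor):
            block = [img[r + dr][c + dc]
                     for dr in range(factor) for dc in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# Tiny 4x4 example standing in for a 28x28 MNIST image.
img = [[0, 0, 255, 255],
       [0, 0, 255, 255],
       [255, 255, 0, 0],
       [255, 255, 0, 0]]
small = downsample(img)
print(small)  # -> [[0.0, 255.0], [255.0, 0.0]]
```

Averaging preserves total stroke intensity per region, which keeps the downsampled digits recognizable to a smaller input layer.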
