UC Berkeley Electronic Theses and Dissertations

Oscillatory Neural Systems

Abstract

The brain, though small, low-power, and robust, performs complex computations that we cannot yet replicate or fully understand. Oscillatory signals are observed ubiquitously in the brain across multiple scales, from individual neural membranes to the large-scale averages measured in electroencephalograms. Explaining the computational function and generation of brain oscillations is an active area of research in neuroscience. Computation with oscillatory signals is also interesting from an engineering standpoint in the field of analog computing. Digital computing represents objects with discrete, Boolean variables. In contrast, analog computing investigates how the continuous dynamics of physical systems can be used to perform fast, energy-efficient computation. The potential advantages of analog computing have been hard to realize due to the challenges of working with analog systems and of competing with rapid advances in digital computing. Recently, neuromorphic computers and coupled oscillator networks have shown potential as efficient analog computers for certain applications. Motivated by neuroscience as well as engineering, we therefore explore oscillatory systems that perform specific computations efficiently. Our results demonstrate that models of computation using oscillator neural networks can serve both as tools for neuroscience and as the basis of efficient analog computers. Specifically, we investigate inference in feedforward deep neural networks, the analog implementation of associative memories, and optimization performed through the dynamics of coupled oscillator networks.

Presumably, brain function critically relies on a combination of continuous and discrete signals, e.g., membrane voltages in neurons and their averages, local field potentials, and action potentials or spikes. Neuromorphic computers that use this combination of signaling are emerging as alternatives to traditional computers for certain tasks. How information is encoded in spiking neural activity can impact key efficiency metrics, such as the number of spikes that must be transmitted to perform a calculation. Chapter 2 proposes an efficient coding method for implementing deep artificial neural networks in which spike times encode phase relative to an ongoing rhythm. The proposed phase code is advantageous because it uses significantly fewer spikes per neuron for each calculation than a rate code, the most common encoding method in neuromorphic computing. In addition, we present results from an implementation of phase-coded deep neural networks on neuromorphic hardware.
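
To make the phase code concrete, here is a minimal sketch under our own illustrative assumptions (the mapping below is not necessarily the exact scheme of Chapter 2): each activation in [0, 1] is encoded as a single spike time within one oscillation cycle, with larger activations firing earlier.

    import numpy as np

    def encode_phase(activations, period=1.0):
        """Map activations in [0, 1] to one spike time per neuron within a
        cycle of the given period; larger activation -> earlier spike.
        (Hypothetical convention for illustration only.)"""
        a = np.clip(activations, 0.0, 1.0)
        return (1.0 - a) * period

    def decode_phase(spike_times, period=1.0):
        """Invert the encoding: recover activations from spike times."""
        return 1.0 - spike_times / period

    # One spike per neuron per cycle: a layer of N neurons transmits N spikes
    # per inference pass, versus many spikes per neuron under a rate code.
    x = np.array([0.1, 0.5, 0.9])
    assert np.allclose(decode_phase(encode_phase(x)), x)

The efficiency argument follows directly from this convention: a rate code must accumulate many spikes per neuron to resolve an analog value, whereas a phase code resolves it with the timing of a single spike per cycle.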

Networks of coupled oscillators are being investigated for efficient implementation of machine learning and artificial intelligence algorithms, such as associative memory. Chapter 3 presents new models of associative memory, implemented in networks of coupled oscillators, that use discretized Q-state phase codes. We show that the memory capacity for 3-state phase codes is significantly higher than for traditional binary 2-state codes. Further, we present a new oscillator model capable of implementing Q-state and continuous associative memories with sparse activity patterns.
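
As a concrete reference point for Q-state phase codes, the sketch below implements a standard phasor-style associative memory: patterns of Q-th roots of unity are stored with a Hebbian outer product, and recall repeatedly snaps each unit's local field to the nearest of the Q allowed phases. This illustrates the coding scheme only; it does not reproduce the specific oscillator model developed in Chapter 3.

    import numpy as np

    def store(patterns, Q):
        """Hebbian weights for patterns whose entries are integers in
        {0, ..., Q-1}, mapped to Q-th roots of unity on the unit circle."""
        Z = np.exp(2j * np.pi * np.stack(patterns) / Q)  # shape (P, N)
        W = Z.T @ Z.conj() / Z.shape[1]
        np.fill_diagonal(W, 0.0)                         # no self-coupling
        return W

    def recall(W, probe, Q, steps=20):
        """Iteratively snap each unit's local field to the nearest of Q phases."""
        z = np.exp(2j * np.pi * probe / Q)
        for _ in range(steps):
            states = np.round(np.angle(W @ z) * Q / (2 * np.pi)) % Q
            z = np.exp(2j * np.pi * states / Q)
        return states.astype(int)

    # Store two random 3-state patterns, then recall one from a corrupted probe.
    rng = np.random.default_rng(0)
    N, Q = 64, 3
    patterns = [rng.integers(0, Q, N) for _ in range(2)]
    W = store(patterns, Q)
    probe = patterns[0].copy()
    probe[:6] = rng.integers(0, Q, 6)                    # corrupt a few entries
    print(np.mean(recall(W, probe, Q) == patterns[0]))   # fraction recovered

One intuition (ours, not a result from the thesis) for why larger Q can help is that each 3-state unit carries log2 3 ≈ 1.58 bits rather than the 1 bit of a binary unit.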

Ising machines, i.e., large networks of interacting 2-state elements, have been proposed as a way to find near-optimal solutions to combinatorial optimization problems. We argue that current Ising machines are limited by their focus on second-order, or pairwise, interactions. Chapter 4 explores new methods for solving combinatorial optimization problems with oscillator Ising machines that include higher-order interactions, referred to as higher-order oscillator Ising machines. We present results comparing second-order and higher-order oscillator Ising machines on benchmark optimization problems. We show that for benchmark satisfiability problems, higher-order Ising machines require fewer optimization variables and network connections. In addition, we show that higher-order Ising machines find solutions that satisfy a greater fraction of problem constraints than existing methods.
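
To illustrate the dynamics, the sketch below is a minimal gradient-flow model of a higher-order oscillator Ising machine under our own assumptions: spins are read out as the sign of cos(phi), the energy contains pairwise and third-order products of cos(phi), and a sub-harmonic injection-locking term, ramped up over time as a simple annealing heuristic, pulls each phase toward 0 or pi. The exact dynamics and SAT encodings of Chapter 4 are not reproduced here.

    import numpy as np

    def higher_order_oim(pairs, triples, n, K_max=1.0, dt=0.05, steps=4000, seed=0):
        """Gradient flow on an Ising energy with pairwise and third-order
        terms in s_i = cos(phi_i).  The injection-locking term K*sin(2*phi)
        binarizes each phase toward 0 or pi, i.e., spin +1 or -1.
        pairs: list of (i, j, J_ij); triples: list of (i, j, k, J_ijk)."""
        rng = np.random.default_rng(seed)
        phi = rng.uniform(0.0, 2.0 * np.pi, n)
        for t in range(steps):
            s, c = np.sin(phi), np.cos(phi)
            grad = np.zeros(n)
            for i, j, J in pairs:        # d/dphi of J*cos(phi_i)*cos(phi_j)
                grad[i] -= J * s[i] * c[j]
                grad[j] -= J * c[i] * s[j]
            for i, j, k, J in triples:   # d/dphi of J*cos_i*cos_j*cos_k
                grad[i] -= J * s[i] * c[j] * c[k]
                grad[j] -= J * c[i] * s[j] * c[k]
                grad[k] -= J * c[i] * c[j] * s[k]
            K = K_max * t / steps        # ramp up binarization (annealing)
            phi -= dt * (grad + K * np.sin(2.0 * phi))
        return np.sign(np.cos(phi))      # read out binary spins

    # Toy problem: one cubic term +s0*s1*s2; its minima have spin product -1.
    spins = higher_order_oim(pairs=[], triples=[(0, 1, 2, 1.0)], n=3)
    print(spins, np.prod(spins))  # expect product -1; restarts may help in general

Because a clause over k variables can map to a single k-th-order term, a higher-order machine can avoid the auxiliary variables and extra connections that a purely pairwise encoding of the same constraint would introduce, consistent with the variable and connection counts reported above.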
