Memcomputing Artificial Intelligence: Improving Learning Algorithms with Algorithms that Learn
- Author(s): Manukian, Haik
- Advisor(s): Di Ventra, Massimiliano
Abstract
Like sentinels guarding a secret treasure, computationally difficult problems define the edge of what can be accomplished across science and industry. The current computing orthodoxy, based on the physical realization of the Turing machine via the von Neumann architecture, has begun to stagnate, as seen in the slowing of Moore's 'law'. Hardware has saturated around its architectural (von Neumann) bottleneck, leading to growing interest in unconventional computing architectures such as quantum and neuromorphic computing. In this thesis, we examine a new alternative, the brain-inspired computing paradigm of memcomputing, which computes with and in memory. The dynamical systems induced by digital memcomputing machines (DMMs) are applicable to a large family of optimization problems. DMMs use memory, that is, they learn from past dynamics, to explore the phase space of the underlying problem in a drastically different way than typical algorithmic approaches, opening up many potential benefits, from optimization to sampling.
Some of the most challenging computational problems (and biggest rewards) come from the attempt to instill intelligent behavior in machines, the field known as artificial intelligence. The first chapter is a warm-up with a hardware model of DMMs applied to numerical inversion problems; we then move to the major theme of this work, the application of DMMs to computationally demanding problems in artificial intelligence. The model we focus on is the Boltzmann machine, an ancestor of the more popular feedforward neural networks. It no longer makes headlines not because it lacks capability, but because our training methods are inadequate to it, leaving much of its capacity unexplored. In this work we apply memcomputing to the training of Boltzmann machines through the development of mode-assisted training, a technique that uses memcomputing to sample the mode of the Boltzmann machine distribution and employs that mode to stabilize and improve the weight updates. Mode-assisted training improves the performance of deep belief networks (DBNs) in a downstream supervised task, the unsupervised learning of restricted Boltzmann machines (RBMs), and the joint training of deep Boltzmann machines, doing as well as or better than networks containing two orders of magnitude more parameters.
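To make the mechanism concrete, below is a minimal sketch of a mode-assisted weight update for a toy binary RBM, assuming a network small enough that the mode (the lowest-energy joint configuration) can be found by exhaustive enumeration; the thesis instead obtains the mode with a memcomputing solver. The network sizes, the fixed mode probability p_mode, and all helper names here are illustrative assumptions, not the thesis's implementation (biases are omitted for brevity).

```python
# Minimal sketch: mode-assisted training of a tiny binary RBM.
# Assumption: the mode is found by brute force, standing in for the
# memcomputing solver used in the thesis; p_mode is a fixed stand-in
# for a schedule. Energy model: E(v, h) = -v W h (no biases).
import itertools
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr, p_mode = 6, 4, 0.05, 0.1
W = 0.01 * rng.standard_normal((n_visible, n_hidden))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mode_state(W):
    """Exhaustively find the joint (v, h) minimizing E = -v W h."""
    best, best_e = None, np.inf
    for v in itertools.product([0, 1], repeat=n_visible):
        for h in itertools.product([0, 1], repeat=n_hidden):
            e = -np.array(v) @ W @ np.array(h)
            if e < best_e:
                best, best_e = (np.array(v, float), np.array(h, float)), e
    return best

def cd1_negative(v0, W):
    """One Gibbs step (CD-1) to get negative-phase statistics."""
    h0 = (sigmoid(v0 @ W) > rng.random(n_hidden)).astype(float)
    v1 = (sigmoid(h0 @ W.T) > rng.random(n_visible)).astype(float)
    return v1, sigmoid(v1 @ W)

data = rng.integers(0, 2, size=(20, n_visible)).astype(float)
for step in range(100):
    v0 = data[rng.integers(len(data))]
    h_pos = sigmoid(v0 @ W)                  # positive phase (data-driven)
    if rng.random() < p_mode:
        v_neg, h_neg = mode_state(W)         # mode-driven negative phase
    else:
        v_neg, h_neg = cd1_negative(v0, W)   # standard CD-1 negative phase
    W += lr * (np.outer(v0, h_pos) - np.outer(v_neg, h_neg))
```

The point of the sketch is the design choice it isolates: the negative phase is occasionally anchored at the model distribution's mode rather than at the end of a short Gibbs chain, counteracting the poorly mixing negative samples that can destabilize standard contrastive-divergence training.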