Dynamical Systems and Neural Networks: From Implicit Models to Memory Retrieval
- Jaffe, Sean Isaac
- Advisor(s): Singh, Ambuj K.; Bullo, Francesco
Abstract
The field of machine learning has made great strides over the past two decades, leading to groundbreaking advances across many domains. Yet significant challenges remain in AI research. Scalability is a persistent hurdle, as current models struggle to handle growing volumes of data and increasing model complexity efficiently. This gap is thrown into sharp relief by the human brain, whose remarkable efficiency underscores the limitations of today's models. Finally, both industry and academia have faced considerable difficulty in advancing robotics, particularly in enabling neural networks to interact effectively with the physical world.
In this work, we explore how the lens of dynamical systems can address all three of these challenges. We tackle memory constraints by framing a quantization algorithm as a fixed-point equation, enabling efficient differentiation. We study an energy-based, biological model of human memory and reinterpret the widely used self-attention mechanism through this model. Finally, we leverage contraction theory to train a neural network that follows a trajectory with stability and robustness. In the first two problems, we use dynamical systems to differentiate neural networks; in the third, we use neural networks to learn a dynamical system. Viewed through dynamical systems, these contributions advance both theory and practice in AI, offering new insights into memory optimization and robust robotic manipulation.
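To make the fixed-point framing concrete, the sketch below shows implicit differentiation through a generic fixed point z* = f(z*, θ) via the implicit function theorem, dz*/dθ = (I − ∂f/∂z)⁻¹ ∂f/∂θ. This is a minimal illustration, not the dissertation's implementation: the map `f`, the iteration counts, and all names are hypothetical placeholders standing in for the quantization algorithm described above.

```python
# Minimal sketch (illustrative only) of differentiating through a fixed point
# z* = f(z*, theta) using the implicit function theorem in vector-Jacobian form.
import jax
import jax.numpy as jnp

def fixed_point(f, theta, z0, num_iters=100):
    """Forward pass: iterate z <- f(z, theta) to (approximate) convergence."""
    z = z0
    for _ in range(num_iters):
        z = f(z, theta)
    return z

def fixed_point_vjp(f, theta, z_star, v, num_iters=100):
    """Backward pass: compute v^T dz*/dtheta without materializing a Jacobian.
    Solves w = v + w @ (df/dz) by fixed-point iteration, then returns w @ (df/dtheta)."""
    _, vjp_z = jax.vjp(lambda z: f(z, theta), z_star)
    w = v
    for _ in range(num_iters):
        w = v + vjp_z(w)[0]
    _, vjp_theta = jax.vjp(lambda t: f(z_star, t), theta)
    return vjp_theta(w)[0]

# Example with a contractive stand-in map, so the fixed point and the
# Neumann-series backward iteration are both guaranteed to converge.
def f(z, theta):
    return 0.5 * jnp.tanh(z) + theta

theta = jnp.array(0.3)
z_star = fixed_point(f, theta, jnp.array(0.0))
grad = fixed_point_vjp(f, theta, z_star, jnp.array(1.0))
print(z_star, grad)
```

The backward pass reuses only the converged z* and two vector-Jacobian products per iteration, which is what makes differentiating through an equilibrium memory-efficient compared with backpropagating through every forward iteration.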