Sequence Learning for Brain Computer Interfaces
- Elango, Venkatesh
- Advisor(s): Gilja, Vikash
Abstract
A fundamental challenge in designing brain-computer interfaces (BCIs) is decoding behavior accurately from time-varying neural oscillations. Studies using BCIs as communication prostheses have demonstrated that these systems can record neural signals over the long term and decode user intention from those signals. In most scenarios, the decoder in a BCI is trained specifically for each subject and must be retrained for every session of use with limited training data. Given these dataset size constraints, the class of decoding algorithms typically explored has restricted complexity, often limited to linear models that process neural signals within a fixed duration. However, such constraints can limit the practicality and usability of BCIs.
In this thesis, we investigate the utility of sequential models for decoding behavior from neural signals. To that end, we describe a robust, scalable approach for decoding sequences of neural signals using Long Short-Term Memory (LSTM) networks that works well even when training data is limited. The efficacy of our approach is demonstrated by decoding finger flexion from neural data collected from four subjects implanted with electrocorticographic (ECoG) electrode arrays. We also present an architecture for sequence transfer learning that learns a general representation of the sequential data across subjects, and we show that it achieves significant improvements over state-of-the-art models. We believe that these sequence learning and sequence transfer learning techniques could be applied to the development of many neural systems and may help enable higher-performance BCIs.
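For illustration only, the following is a minimal sketch of the kind of LSTM sequence decoder the abstract describes, written in a PyTorch style. The layer sizes, channel count, and class name are hypothetical placeholders and are not taken from the thesis.

import torch
import torch.nn as nn

class LSTMDecoder(nn.Module):
    """Illustrative sketch: map a sequence of ECoG feature vectors to a
    continuous behavioral variable (e.g., finger flexion)."""

    def __init__(self, n_channels: int, hidden_size: int = 64):
        super().__init__()
        # LSTM consumes one neural feature vector per time step.
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden_size,
                            batch_first=True)
        # Linear readout maps each hidden state to a flexion estimate.
        self.readout = nn.Linear(hidden_size, 1)

    def forward(self, x):
        # x: (batch, time, n_channels) neural features
        hidden_states, _ = self.lstm(x)
        return self.readout(hidden_states)  # (batch, time, 1)

# Hypothetical usage: 8 trials, 200 time steps, 48 ECoG channels.
decoder = LSTMDecoder(n_channels=48)
neural = torch.randn(8, 200, 48)
predicted_flexion = decoder(neural)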