- Metzger, Sean;
- Littlejohn, Kaylo;
- Silva, Alexander;
- Seaton, Margaret;
- Wang, Ran;
- Dougherty, Maximilian;
- Wu, Peter;
- Berger, Michael;
- Zhuravleva, Inga;
- Tu-Chan, Adelyn;
- Ganguly, Karunesh;
- Anumanchipalli, Gopala;
- Chang, Edward;
- Moses, David;
- Liu, Jessie
Speech neuroprostheses have the potential to restore communication to people living with paralysis, but naturalistic speed and expressivity are elusive^1. Here we use high-density surface recordings of the speech cortex in a clinical-trial participant with severe limb and vocal paralysis to achieve high-performance real-time decoding across three complementary speech-related output modalities: text, speech audio and facial-avatar animation. We trained and evaluated deep-learning models using neural data collected as the participant attempted to silently speak sentences. For text, we demonstrate accurate and rapid large-vocabulary decoding with a median rate of 78 words per minute and a median word error rate of 25%. For speech audio, we demonstrate intelligible and rapid speech synthesis and personalization to the participant's pre-injury voice. For facial-avatar animation, we demonstrate the control of virtual orofacial movements for speech and non-speech communicative gestures. The decoders reached high performance with less than two weeks of training. Our findings introduce a multimodal speech-neuroprosthetic approach that has substantial promise to restore full, embodied communication to people living with severe paralysis.
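For context on the reported 25% figure, word error rate in speech decoding is conventionally computed as the word-level edit distance (substitutions, deletions and insertions) between the decoded sentence and the reference sentence, divided by the number of reference words. The sketch below shows this standard formulation; it is not the study's evaluation code, and the example sentences are hypothetical.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Standard WER: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One wrong word out of four reference words gives WER = 0.25 (25%).
print(word_error_rate("please bring my glasses", "please bring my glass"))  # 0.25
```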