Talk by Alexandre Gramfort - Meta Reality Labs
Short abstract:
Since the advent of computing, humans have sought interfaces for computer input that are expressive, intuitive, and universal. While diverse modalities have been developed, including keyboards, mice, and touchscreens, each requires interaction with an intermediary device that imposes constraints, especially in mobile scenarios. Gesture-based interaction systems using cameras or inertial sensors support more natural interaction schemes but constrain users with cumbersome head-mounted camera systems or a confined field of view. Brain-computer interfaces (BCIs) have been imagined for decades as a solution to the interface problem, allowing input to computing devices at the speed of thought. However, high-bandwidth communication has only been demonstrated with invasive BCIs, using interaction models designed for single individuals, an approach that cannot scale to the general public. In this talk, I will describe the development of a noninvasive neuromotor interface that allows computer input using surface electromyography (sEMG). I will give examples showing that, by training machine learning models on data from thousands of participants, it is possible to develop generic sEMG neural network decoding models that work across many people without per-person calibration, yielding the first high-bandwidth neuromotor interface that directly leverages biosignals and generalizes out of the box across people.
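To make the "generic, calibration-free decoding" idea concrete, here is a minimal PyTorch sketch, not the talk's actual architecture or data pipeline: all class names, channel counts, window sizes, and the synthetic data are illustrative assumptions. The key structural point it mirrors is that a single network is trained on sEMG windows pooled from many users and then evaluated on users held out entirely from training, with no per-person calibration step.

```python
# Minimal sketch of cross-user sEMG decoding (illustrative assumptions only).
import torch
import torch.nn as nn

N_CHANNELS, WINDOW, N_GESTURES = 16, 200, 9  # assumed wristband geometry

class GenericEMGDecoder(nn.Module):
    """Small 1D-conv classifier mapping an sEMG window to a gesture label."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, N_GESTURES),
        )

    def forward(self, x):  # x: (batch, channels, time)
        return self.net(x)

def synthetic_user(rng, n=64):
    """Stand-in for one participant's recording session (random data)."""
    x = torch.randn(n, N_CHANNELS, WINDOW, generator=rng)
    y = torch.randint(0, N_GESTURES, (n,), generator=rng)
    return x, y

rng = torch.Generator().manual_seed(0)
train_users = [synthetic_user(rng) for _ in range(20)]   # pooled training users
heldout_users = [synthetic_user(rng) for _ in range(5)]  # never seen in training

model = GenericEMGDecoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):
    for x, y in train_users:  # one "user" per batch, for brevity
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

# Out-of-the-box generalization: accuracy on held-out users, zero calibration.
model.eval()
with torch.no_grad():
    correct = total = 0
    for x, y in heldout_users:
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
print(f"held-out-user accuracy: {correct / total:.2f}")
```

The deliberate design choice here is the user-level split: the evaluation users contribute no data to training, so the reported accuracy measures exactly the out-of-the-box, across-people generalization the abstract claims, rather than within-person fit.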