Noninvasive Brain-Computer Interface Decoded via Deep Learning

PNAS Nexus

Brain-computer interfaces (BCIs) have the potential to make life easier for people with motor or speech disorders, allowing them to manipulate prosthetic limbs and operate computers, among other uses. In addition, healthy and impaired people alike could enjoy BCI-based gaming. Noninvasive BCIs, which work by analyzing brain waves recorded through electroencephalography, are currently limited by inconsistent performance.

Bin He and colleagues used deep-learning decoders to improve a BCI's performance as users tracked an object moving in two-dimensional space with a cursor. Twenty-eight adult participants were instructed to imagine moving their right hand to move the cursor right, their left hand to move it left, both hands simultaneously to move it up, and neither hand to move it down, enabling continuous and sustained movement of a virtual object. The authors evaluated two different deep-learning architectures and a traditional decoder over seven BCI sessions. Both deep-learning decoders improved throughout the study and outperformed the traditional decoder by the final session.

With the aid of the deep-learning decoders, participants controlled a fast, continuously moving computer cursor through a noninvasive BCI based solely on sensor-space brain waves, tracking randomly moving objects with a high level of performance without moving a muscle. According to the authors, this accomplishment could be an initial step toward neuro-assistive robotics.
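The four-state imagery scheme lends itself to a simple illustration. The sketch below, which is not drawn from the paper, shows one plausible way that per-window class probabilities from a decoder could be blended into a continuous 2D cursor velocity. The decode_probabilities stub, the channel and sample counts, the gain parameter, and the probability-weighted blend are all assumptions for illustration; in the study itself, trained deep-learning models fill the decoder's role.

```python
# Minimal sketch (not the authors' code): mapping the four motor-imagery
# states to continuous 2D cursor velocity. The decoder here is a stand-in;
# any model that turns an EEG window into class probabilities could be
# plugged in.

import numpy as np

# Unit direction associated with each imagined state, per the task design:
# right hand -> right, left hand -> left, both hands -> up, rest -> down.
DIRECTIONS = {
    "right_hand": np.array([1.0, 0.0]),
    "left_hand":  np.array([-1.0, 0.0]),
    "both_hands": np.array([0.0, 1.0]),
    "rest":       np.array([0.0, -1.0]),
}
STATES = list(DIRECTIONS)


def decode_probabilities(eeg_window: np.ndarray) -> np.ndarray:
    """Placeholder decoder: returns a probability over the four states.

    In the study this role is played by a trained deep-learning model;
    here random softmax scores are emitted purely to keep the sketch
    runnable.
    """
    logits = np.random.randn(len(STATES))
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()


def cursor_velocity(eeg_window: np.ndarray, gain: float = 1.0) -> np.ndarray:
    """Blend the four directions by their decoded probabilities.

    A probability-weighted sum of the direction vectors is one simple way
    to produce the smooth, sustained movement the tracking task requires,
    rather than discrete jumps between the four directions.
    """
    probs = decode_probabilities(eeg_window)
    return gain * sum(p * DIRECTIONS[s] for p, s in zip(probs, STATES))


# Example: one EEG window of 40 channels x 250 samples (shapes illustrative).
window = np.random.randn(40, 250)
print(cursor_velocity(window))
```

Because the blended velocity varies smoothly with the decoder's confidence in each state, this kind of scheme can yield continuous control from a small set of discrete imagery classes, consistent with the continuous tracking task described above.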
