Improving Brain–Machine Interfaces with Machine Learning


Brain–machine interfaces (BMIs) have enabled a handful of test participants who are unable to move or speak to communicate simply by thinking. An implanted device picks up the neural signals associated with a particular thought and converts them into control signals that are fed into a computer or a robotic limb. For example, a quadriplegic person is asked to think about moving a cursor on a computer screen. Once the BMI has been trained to recognize the neural activity associated with this intent, it translates that activity into commands that move the cursor. Experimental BMIs may also control robotic limbs that carry out manual tasks as instructed by a disabled person's thoughts alone.

The hardware necessary for this incredible feat is a computer—either freestanding or within a robotic device—and an implant in the brain of the person who is using BMI technology to communicate their intent through their thoughts. At Caltech, researchers use implants that consist of arrays of 100 microelectrodes mounted on a 4×4 mm chip. The microelectrodes are typically 1.5 mm long and penetrate into the brain’s cortex, where they can record the activity of individual neurons.

Unfortunately, the performance of these microelectrode arrays is not consistent and degrades over time. To overcome this challenge, Caltech’s Azita Emami, the Andrew and Peggy Cherng Professor of Electrical Engineering and Medical Engineering and director of the Center for Sensing to Intelligence (S2I), and her colleagues have used machine learning to effectively interpret the neuronal signals picked up by older implants.

“Not only do we observe day-to-day variations, but over time the performance of brain–computer interfaces degrades for a variety of reasons,” Emami says. “There may be a small movement of the implant or its electrodes. The electrodes themselves may deteriorate or become encapsulated in brain tissue. Some people think that over time the neurons move away from the implant because they react to it as a foreign object in the brain. For whatever reason, the signals we receive become noisier.”

When a BMI is first set up, the microelectrode array produces a signal characterized by strong action potentials that appear as spikes in the recordings. Once this strong signal fades (that is, once the recordings become noisier and neural spikes can no longer be clearly identified), it becomes far trickier to link a pattern of activity from more distant neurons to a specific intent that can be transmitted to a computer or other device. Researchers have tried to identify alternative signals, such as so-called threshold crossings or local field potentials recorded from distant neurons. Another approach has been to use wavelet transforms, which break the recorded signal into small oscillations at different time scales. But the success of these methods has been limited.
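To make these alternatives concrete, the following is a minimal sketch, not the study's actual pipeline, of how threshold crossings and wavelet-based features might be computed from a raw voltage trace recorded on a single electrode. It assumes NumPy and the PyWavelets package, and parameters such as the -4.5 × RMS threshold, the 50 ms bins, and the "db4" wavelet are illustrative choices rather than details taken from the paper.

```python
import numpy as np
import pywt  # PyWavelets

def threshold_crossings(voltage, bin_edges, k=-4.5):
    """Count, per time bin, how often the signal dips below a noise-scaled threshold."""
    threshold = k * np.sqrt(np.mean(voltage ** 2))           # e.g., -4.5 x RMS of the trace
    below = voltage < threshold
    crossing_idx = np.flatnonzero(~below[:-1] & below[1:])   # downward crossings only
    counts, _ = np.histogram(crossing_idx, bins=bin_edges)
    return counts

def wavelet_features(voltage, wavelet="db4", level=4):
    """Summarize small oscillations at several time scales with a wavelet decomposition."""
    coeffs = pywt.wavedec(voltage, wavelet, level=level)
    return np.array([np.mean(np.abs(c)) for c in coeffs])    # one feature per scale

# Synthetic stand-in for one second of 30 kHz broadband data from one electrode.
rng = np.random.default_rng(0)
voltage = rng.normal(scale=10.0, size=30_000)                # noise-only trace, in microvolts
bin_edges = np.arange(0, 30_001, 1_500)                      # 50 ms bins
print(threshold_crossings(voltage, bin_edges))
print(wavelet_features(voltage))
```

Features like these are then passed to a decoder that maps them to an intended movement; the difficulty described above is that, as recordings degrade, such hand-designed features carry less and less usable information.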

Now Emami and her colleagues have found that by applying machine learning, BMIs can be trained to interpret data from neural activity even after the signal from an implant has become less clear. The algorithm created by the team to do this is called FENet, for Feature Extraction Network. Remarkably, it can be trained on data from one patient and then used successfully in another. “This means that there is some fundamental type of information in the neural data that we are picking up,” Emami says. Not only that, FENet can generalize across different brain regions and types of electrodes and be easily incorporated into existing BMIs.
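FENet's actual architecture is described in the published paper; the sketch below only illustrates the general idea of neural-network-mediated feature extraction. It assumes a small 1-D convolutional network in PyTorch that turns a raw voltage snippet from each electrode into a handful of learned features, which a simple linear decoder then maps to a two-dimensional cursor velocity. The layer sizes, names, and the decoder are illustrative assumptions, not details of FENet.

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Illustrative stand-in for a learned feature extractor (not FENet's published design)."""
    def __init__(self, n_features=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=11, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=11, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis
            nn.Flatten(),
            nn.Linear(32, n_features),
        )

    def forward(self, snippets):        # snippets: (n_electrodes, 1, snippet_length)
        return self.net(snippets)       # -> (n_electrodes, n_features)

n_electrodes, snippet_length, n_features = 100, 900, 8
extractor = FeatureExtractor(n_features)
decoder = nn.Linear(n_electrodes * n_features, 2)    # 2-D cursor velocity

# One time bin of data: a short raw-voltage snippet from each of the 100 electrodes.
snippets = torch.randn(n_electrodes, 1, snippet_length)
features = extractor(snippets)                       # learned features per electrode
velocity = decoder(features.reshape(1, -1))          # predicted cursor velocity, shape (1, 2)
print(velocity.shape)
```

In this framing, the extractor's weights learned from one participant's recordings could be kept fixed and paired with a decoder fit to a new participant, brain region, or electrode type, loosely mirroring the generalization Emami describes.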

Richard Andersen, the James G. Boswell Professor of Neuroscience and leadership chair and director of the T&C Chen Brain-Machine Interface Center, says, “FENet has already extended our clinical study with JJ by two years. BMI research is a perfect field for interdisciplinary research, in this case melding the disciplines of engineering, computer science, and neuroscience.”

This research was published in Nature Biomedical Engineering under the title “Enhanced control of a brain–machine interface by tetraplegic participants via neural-network-mediated feature extraction.” Co-authors include Andersen; Emami; Haghi and Albert Yan Huang from the Emami lab; Tyson Aflalo and Spencer Kellis, members of the professional staff, postdoc Jorge A. Gamez de Leon, and graduate student Charles Guan, all from the Andersen lab; and Nader Pouratian of UCLA Neurosurgery. Funding was provided by the National Institutes of Health, Caltech’s S2I, the T&C Chen Brain-Machine Interface Center at Caltech, the Boswell Foundation, the Braun Foundation, and the Heritage Medical Research Institute.

Read more on the TCCI for Neuroscience website