Top: Tim Evans, 62, during testing; the BCI picks up his brain signals so he can turn off a light using a communication board. Bottom left: Evans in the room at Johns Hopkins Medicine where his brain signals are recorded to control a communication board. Bottom right: Shiyu Luo, first author of the study, in front of the computer recording Evans' brain signals during testing. Credit: Johns Hopkins Medicine; top photo courtesy of the Crone Lab; portraits by Caslon Hatch.
It's the day after the Baltimore Orioles clinched the American League East Championship with their 100th win of the season, and lifelong fan Tim Evans is showing his pride on his sleeve.
"It's so great," Evans, 62, says with a huge smile, wearing his orange O's jersey.
The last time the Orioles won the AL East was in 2014, the same year Evans was diagnosed with amyotrophic lateral sclerosis (ALS), a progressive nervous system disease that causes muscle weakness and loss of motor and speech functions. Evans currently has severe speech and swallowing problems. He can talk slowly, but it's hard for most people to understand him.
However, Evans is getting a sense of control back, thanks to a brain-computer interface (BCI)—a direct communication pathway between the brain and external smart devices. The BCI device, called Cortical Communication (CortiCom), is surgically implanted on the brain's surface, over areas responsible for speech and upper limb function.
Evans is participating in a clinical trial at Johns Hopkins Medicine, in collaboration with the Johns Hopkins University Applied Physics Laboratory, that comprises a series of studies testing whether the device can help patients with severe speech and movement difficulties regain some of the abilities lost to neurological diseases.
By using the BCI with a special computer algorithm trained to translate his brain signals into computer commands, Evans is able to freely and reliably use a set of six basic commands (up, down, left, right, enter and back) to navigate among options on a communication board and to control smart devices like room lights and streaming TV applications.
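For readers curious about the mechanics, the control scheme can be pictured as a simple dispatch loop. The sketch below (in Python, with entirely hypothetical grid items and device hooks; the trial's actual software is not shown here) illustrates how six decoded commands might drive a grid-based board and a couple of smart-device actions:

```python
# Minimal sketch of a six-command controller for a grid-based
# communication board. Hypothetical illustration only; the study's
# actual software and device APIs are not public here.

GRID = [
    ["Yes", "No", "Lights"],
    ["TV", "Water", "Help"],
    ["Pain", "Rest", "Back"],
]

class BoardController:
    def __init__(self, rows=3, cols=3):
        self.rows, self.cols = rows, cols
        self.r = self.c = 0          # cursor position on the grid
        self.history = []            # lets "back" undo a selection

    def handle(self, command: str):
        """Apply one decoded command: up/down/left/right/enter/back."""
        if command == "up":
            self.r = max(self.r - 1, 0)
        elif command == "down":
            self.r = min(self.r + 1, self.rows - 1)
        elif command == "left":
            self.c = max(self.c - 1, 0)
        elif command == "right":
            self.c = min(self.c + 1, self.cols - 1)
        elif command == "enter":
            item = GRID[self.r][self.c]
            self.history.append(item)
            self.select(item)
        elif command == "back" and self.history:
            self.history.pop()       # undo the last selection

    def select(self, item: str):
        # Dispatch a selection to a device action or speech output.
        if item == "Lights":
            print("toggling room lights")   # stand-in for a smart-home call
        elif item == "TV":
            print("opening streaming app")  # stand-in for a TV API call
        else:
            print(f"speaking: {item}")

if __name__ == "__main__":
    board = BoardController()
    # Simulated decoder output: navigate to "Lights" and select it.
    for cmd in ["right", "right", "enter"]:
        board.handle(cmd)
```

The simplicity is deliberate: with only six possible outputs, the decoder's classification task stays small, which is part of why accuracy can hold up without a language model correcting errors.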
The results, published in Advanced Science, show a sustained accuracy of 90% over a three-month period without requiring the BCI algorithm to be retrained or recalibrated.
"While past studies on speech BCI have focused on communication, our study addressed the need to control smart devices directly," says Nathan Crone, M.D., professor of neurology at Johns Hopkins Medicine, senior author of the study, and principal investigator of the clinical trial. "The BCI accurately recognized a small set of simple commands, allowing Tim to navigate a communication board and control household devices without needing a language model to fix errors."
"It's wonderful. I can turn on the TV and turn off the lights without getting up," Evans says. "I can see the possibilities for other patients."
How it works
In the summer of 2022, William Anderson, M.D., Ph.D., M.A., professor of neurosurgery, and Chad Gordon, D.O., professor of plastic and reconstructive surgery, both at the Johns Hopkins University School of Medicine, placed two electrocorticographic (ECoG) grids on the surface of Evans' brain. An ECoG grid is a thin sheet of electrodes (tiny sensors) with a footprint the size of a large postage stamp placed on a person's brain to record the electrical signals produced by thousands of brain cells (neurons).
Evans worked with the research team for several weeks to train the BCI to recognize his unique brain signals, repeating each of the six commands aloud as they appeared on a screen. Once the BCI's deep-learning algorithm was trained, Evans was asked to issue the same verbal commands to control a communication board in real time, usually for about five minutes every day for the next three months.
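The paper describes a deep-learning decoder trained to classify windows of Evans' brain activity into the six commands. As a rough, generic stand-in (PyTorch, with assumed data shapes and an arbitrary architecture; not the study's actual model), a six-class training loop might look like this:

```python
# Generic stand-in for training a six-class command decoder on
# windowed neural features. Hypothetical architecture and data
# shapes; the study's actual model is described in the paper.
import torch
import torch.nn as nn

COMMANDS = ["up", "down", "left", "right", "enter", "back"]
N_CHANNELS, N_TIMEBINS = 128, 50      # assumed ECoG feature window shape

class CommandDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                              # (B, C, T) -> (B, C*T)
            nn.Linear(N_CHANNELS * N_TIMEBINS, 256),
            nn.ReLU(),
            nn.Linear(256, len(COMMANDS)),             # logits per command
        )

    def forward(self, x):
        return self.net(x)

def train(model, loader, epochs=10):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for features, labels in loader:   # labels: which command was spoken
            opt.zero_grad()
            loss = loss_fn(model(features), labels)
            loss.backward()
            opt.step()

if __name__ == "__main__":
    # Synthetic data standing in for labeled ECoG feature windows.
    x = torch.randn(600, N_CHANNELS, N_TIMEBINS)
    y = torch.randint(0, len(COMMANDS), (600,))
    loader = torch.utils.data.DataLoader(
        torch.utils.data.TensorDataset(x, y), batch_size=32, shuffle=True)
    model = CommandDecoder()
    train(model, loader)
```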
"While Tim's speech was difficult for most human listeners to understand, the BCI was able to accurately translate his brain activity into computer commands, allowing him to navigate to and select items on a communication board at his own pace," says Shiyu Luo, graduate student in biomedical engineering at The Johns Hopkins University and first author of the paper. "In addition to expressing how he was feeling or what he wanted, Tim was able to use the BCI to turn a light on and off, and to select videos to watch on YouTube."
Throughout testing, the researchers found that using signals from both motor and sensory areas of the brain produced the best results. Signals from areas controlling the lips, tongue and jaw had the most influence on the BCI's performance and remained consistent over the three months of the study, which Luo says was crucial to the system's accuracy and reliability.
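One common way to estimate which electrodes matter most to a decoder is channel ablation: silence one channel at a time and measure the drop in accuracy. The snippet below sketches that idea against the hypothetical CommandDecoder from the previous sketch; it is an illustration of the general technique, not necessarily the analysis used in the paper.

```python
# Channel-ablation sketch: estimate each electrode's contribution by
# zeroing it out and measuring the drop in decoding accuracy.
# Hypothetical illustration; reuses the CommandDecoder defined above.
import torch

def channel_importance(model, features, labels):
    """Return the accuracy drop when each channel is zeroed out."""
    model.eval()
    with torch.no_grad():
        base = (model(features).argmax(1) == labels).float().mean().item()
        drops = []
        for ch in range(features.shape[1]):
            ablated = features.clone()
            ablated[:, ch, :] = 0.0        # silence one electrode
            acc = (model(ablated).argmax(1) == labels).float().mean().item()
            drops.append(base - acc)       # big drop = important channel
    return drops
```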
Different approach
Crone says that, unlike many other BCI studies, this approach used electrodes that do not penetrate the brain, allowing the team to record large populations of neurons from the surface of the brain instead of individual neurons.
"These population responses appear to be more stable over time," Crone says. "They don't change from day to day as much, so the BCI algorithm we used for controlling the computer interface did not require recalibration or retraining for at least three months."
Crone says that not having to retrain the BCI algorithm could give participants the freedom to use the BCI whenever and wherever they want, without ongoing researcher intervention. In the future, that could mean a participant with severe paralysis starting the day by turning on the lights and catching up on the news on TV using only their brain signals.
"What's amazing about our study is that the accuracy didn't change over time, it worked just as well on Day 1 as it did on Day 90," Crone says. "Our results may be the first steps in realizing the potential for independent home use of speech BCIs by people living with severe paralysis."
What's next
One limitation of the study was the small vocabulary available for decoding speech. Crone says that although the six commands they adopted were both intuitive and sufficient for controlling grid-based applications, a more comprehensive vocabulary could reduce the time needed to perform a broader range of tasks.
Currently, Crone and his team are working with Evans on a series of other studies that expand the vocabulary the BCI can pick up from his brain signals. They are also actively recruiting patients for clinical trials investigating BCI systems for those with movement and communication impairments.
"It's a very exciting time in the field of brain-computer interfaces because there are several companies investing in this technology," says Crone. "For those who have lost their ability to communicate due to a variety of neurological conditions, there's a lot of hope to preserve or regain their ability to communicate with family and friends. But there's still a lot more work to be done to bring this to all patients who could use this technology."
As for Evans, Crone says they are working on a plan to move the BCI equipment to his home, pending approval from the Food and Drug Administration, meaning Evans may soon have the ability to use the BCI on his own to watch the Orioles on TV without the help of others.
"My son has a bad habit of leaving the TV remote in the other room," Evans jokes.
More information: Shiyu Luo et al, Stable Decoding from a Speech BCI Enables Control for an Individual with ALS without Recalibration for 3 Months, Advanced Science (2023). DOI: 10.1002/advs.202304853
Provided by Johns Hopkins University School of Medicine