Brain implant helps woman with paralysis speak with her own voice again
The new method decodes brain signals while simultaneously feeding them through a text-to-speech AI model. The post Brain implant helps woman with paralysis speak with her own voice again appeared first on Popular Science.

Researchers have developed a new method for intercepting neural signals from the brain of a person with paralysis and translating them into audible speech—all in near real-time. The result is a brain-computer interface (BCI) system similar to an advanced version of Google Translate, but instead of converting one language to another, it deciphers neural data and transforms it into spoken sentences.
Recent advancements in machine learning have enabled researchers to train AI voice synthesizers on recordings of an individual’s own voice, making the generated speech more natural and personalized. Patients with paralysis have already used BCIs to restore physical motor function, controlling computer mice and prosthetic limbs with their thoughts. This particular system addresses a more specific subset of patients who have also lost the capacity to speak. In testing, the paralyzed patient silently read full sentences from a screen, which were then converted into speech by the AI voice with a delay of less than 80 milliseconds.
Results of the study were published this week in the journal Nature Neuroscience by a team of researchers from the University of California, Berkeley and the University of California, San Francisco.
“Our streaming approach brings the same rapid speech decoding capacity of devices like Alexa and Siri to neuroprostheses,” UC Berkeley professor and co-principal investigator of the study Gopala Anumanchipalli said in a statement. “Using a similar type of algorithm, we found that we could decode neural data and, for the first time, enable near-synchronous voice streaming. The result is more naturalistic, fluent speech synthesis.”
How researchers analyzed brain signals
Researchers worked with a paralyzed woman named Ann, who lost her ability to speak following an unspecified accident. To collect neural data, the team implanted a 253-channel high-density electrocorticography (ECoG) array over the area of her brain responsible for speech motor control. They recorded her brain activity as she silently mouthed or mimed phrases displayed on a screen. Ann was ultimately presented with hundreds of sentences, all drawn from a limited vocabulary of 1,024 words. This initial data collection phase allowed researchers to begin decoding her intended speech.
“We are essentially intercepting signals where the thought is translated into articulation and in the middle of that motor control,” study co-author Cheol Jun Cho said in a statement. “So what we’re decoding is after a thought has happened, after we’ve decided what to say, after we’ve decided what words to use and how to move our vocal-tract muscles.”
The decoded neural data was then processed through a text-to-speech AI model trained on recordings of Ann’s voice from before her injury. While various tools have long existed to help individuals with paralysis communicate, they are often too slow for natural, back-and-forth conversation. The late theoretical physicist Stephen Hawking, for example, used a computer and voice synthesizer to speak, but the system’s limited interface allowed him to produce only 10 to 15 words per minute. More advanced BCI systems have significantly improved communication speed, but they have still struggled with input lag. An earlier version of this model, developed by the same research team, had an average delay of eight seconds between decoding neural data and producing speech.
This latest breakthrough reduced input delay to less than a second—an improvement researchers attribute to rapid advancements in machine learning across the tech industry in recent years. Unlike previous models, which waited for Ann to complete a full thought before translating it, this system “continuously decodes” speech while simultaneously vocalizing it. For Ann, this means she can now hear herself speak a sentence in her own voice within a second of thinking it.
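The study’s decoder itself is not public, but the distinction the researchers describe, waiting for a complete utterance versus emitting output as each chunk of neural data arrives, can be loosely sketched in a few lines of hypothetical Python. The frame-to-word lookup below is purely illustrative and stands in for the actual neural-network decoding:

```python
# Hypothetical sketch (not the study's actual code): contrast a "batch"
# decoder, which waits for the whole utterance before producing output,
# with a "streaming" decoder that emits speech chunk by chunk.

def batch_decode(neural_frames, decode_frame):
    """Decode only after every frame of neural data has arrived."""
    words = [decode_frame(frame) for frame in neural_frames]
    return [" ".join(words)]  # one big output at the end

def streaming_decode(neural_frames, decode_frame):
    """Decode and yield output incrementally, frame by frame."""
    for frame in neural_frames:
        # audio synthesis can begin as soon as the first frame is decoded
        yield decode_frame(frame)

# Toy stand-in: each "frame" of neural data maps directly to a word.
frames = ["f1", "f2", "f3"]
lookup = {"f1": "you", "f2": "love", "f3": "me"}.get

print(batch_decode(frames, lookup))            # ['you love me']
print(list(streaming_decode(frames, lookup)))  # ['you', 'love', 'me']
```

In the batch version, nothing is spoken until the sentence is complete; in the streaming version, each decoded chunk is available immediately, which is what lets the latency drop from seconds to under a second.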
A video demonstration of the clinical trial shows Ann looking at the phrase “you love me” on a screen in front of her. Moments later, the AI model—trained on her own voice—speaks the words aloud. Seconds after that, she successfully repeats the phrases “so did you do it” and “where did you get this?” Ann reportedly appreciated that the synthesized speech sounded like her own voice.
“Hearing her own voice in near-real time increased her sense of embodiment,” Anumanchipalli said.
Brain-computer interfaces are leaving the laboratory
This advancement comes as BCIs are gaining public recognition. Neuralink, founded by Elon Musk in 2016, has already implanted its BCI device in three human patients. The first, Noland Arbaugh, a 30-year-old man with quadriplegia, says the device has allowed him to control a computer mouse and play video games using only his thoughts. Since then, Neuralink has upgraded the system with more electrodes, which the company says should provide greater bandwidth and longer battery life. Neuralink also recently received a special designation from the Food and Drug Administration (FDA) to explore a similar device aimed at restoring eyesight. Meanwhile, Synchron, another leading BCI company, recently demonstrated that a patient living with ALS could operate an Apple Vision Pro mixed-reality headset using only neural inputs.
“Using this type of enhanced reality is so impactful and I can imagine it would be for others in my position or others who have lost the ability to engage in their day-to-day life,” a Synchron patient with ALS named Mark said in a statement. “It can transport you to places you never thought you’d see or experience again.”
Though the field is mostly dominated by US startups, other countries are catching up. Just this week, a Chinese BCI company called NeuCyber NeuroTech announced it had inserted its own semi-invasive BCI chip into three patients over the past month. The company, according to Reuters, plans to implant its “Beinao No.1” device into 10 more patients by the end of the year.
All of that said, it will still take time before BCIs can meaningfully bring back conversational dialogue in day-to-day life for those who no longer have the capacity for speech. The California researchers say their next steps involve improving their interception methods and AI models to better reflect changes in vocal tone and pitch, two elements crucial for communicating emotion. They are also working on bringing their already low latency down even further.
“That’s ongoing work, to try to see how well we can actually decode these paralinguistic features from brain activity,” UC Berkeley PhD student and paper co-author Kaylo Littlejohn said.