AI-driven BCI technology enables real-time speech for stroke survivor after 18 years

In this post:

  • University of California researchers used a brain-computer interface to turn a 47-year-old woman’s brain signals into real-time speech after 18 years of silence.
  • According to the researchers, the system harnesses technology similar to that of devices like Alexa and Siri and improves on a previous model.
  • The previous version took about eight seconds to decode her brain patterns and spoke full sentences at once.

Researchers from the University of California used a brain-computer interface (BCI) driven by AI to turn Ann Johnson’s brain signals into real-time speech for the first time since a stroke left her unable to speak in 2005. The system harnessed technology similar to that of devices like Alexa and Siri and improved on a previous model that had an eight-second delay.

Researchers from the University of California, Berkeley, and the University of California, San Francisco, developed a customized brain-computer interface system capable of restoring naturalistic speech to the 47-year-old woman, who has quadriplegia. Today, Ann is helping the two teams develop BCI technology that could one day allow people like her to communicate more naturally through a digital avatar that matches facial expressions to the generated speech.

Gopala Anumanchipalli, an assistant professor of electrical engineering and computer sciences at UC Berkeley and a co-author of the study published Monday in the journal Nature Neuroscience, confirmed that the implanted device tested on Ann converted ‘her intent to speak into fluent sentences’. Jonathan Brumberg of the Speech and Applied Neuroscience Lab at the University of Kansas, who reviewed the findings, welcomed the advances and told The Associated Press that this was ‘a pretty big advance in the neuroscience field’.

BCI technology enables a woman to regain her speech after nearly 20 years

A woman paralyzed by a stroke regained her voice after nearly two decades of silence through an experimental brain-computer interface developed, and specifically customized to her case, by researchers at UC Berkeley and UC San Francisco. The research, published in Nature Neuroscience on March 31, used artificial intelligence to translate the thoughts of the participant, known as “Ann,” into natural speech in real time.

Anumanchipalli explained that the interface reads neural signals using a grid of electrodes placed on the speech center of the brain. He added that in conditions such as ALS, brainstem stroke (as in Ann’s case), or injury, the body became inaccessible and the person was ‘locked in’, cognitively intact but unable to move or speak. Anumanchipalli noted that while significant progress had been made in creating artificial limbs, restoring speech remained more complicated.

“Unlike vision, motion, or hunger—shared with other species—speech sets us apart. That alone makes it a fascinating research topic.”

Gopala Anumanchipalli

However, Anumanchipalli acknowledged that how intelligent behavior emerged from neurons and cortical tissue was still one of the big unknowns. The study used a BCI to create a direct pathway between the electrical signals in Ann’s brain and a computer.

New BCI device improves on previous versions that had delays

The U.S. researchers’ method eliminated the frustrating delay that plagued previous versions of the technology by analyzing Ann’s brain activity in 80-millisecond increments and translating it into a synthesized version of her voice. A number of BCI speech-translation projects have produced positive results recently, each aiming to reduce the time it takes to generate speech from thoughts.

According to Science Alert, most existing BCI methods required ‘a complete chunk of text’ to be considered before software could decipher its meaning, which could significantly lengthen the gap between speech initiation and vocalization.
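
To make the contrast concrete, here is a minimal sketch (not the study’s code) of the difference between chunked decoding, which buffers a whole utterance before speaking, and streaming decoding, which emits audio every 80 milliseconds. `decode_frame` and `synthesize` are hypothetical stand-ins for the trained decoder and voice synthesizer:

```python
from typing import Iterable, Iterator, List

FRAME_MS = 80  # decoding increment reported for the new system

def decode_frame(frame: List[float]) -> str:
    """Hypothetical decoder: map one 80 ms window of neural
    features to a partial speech unit (e.g., a sub-word token)."""
    return "<unit>"

def synthesize(units: List[str]) -> bytes:
    """Hypothetical synthesizer: render speech units as audio."""
    return b"..."

def chunked_decode(frames: List[List[float]]) -> bytes:
    # Older approach: buffer the whole utterance, decode it, then
    # speak, adding roughly the utterance length (~8 s) in latency.
    units = [decode_frame(f) for f in frames]
    return synthesize(units)

def streaming_decode(frames: Iterable[List[float]]) -> Iterator[bytes]:
    # Newer approach: decode and emit audio one 80 ms window at a
    # time, so playback begins while the sentence is still forming.
    for frame in frames:
        yield synthesize([decode_frame(frame)])
```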

The report published by the UC Berkeley and UC San Francisco researchers noted that improving speech synthesis latency and decoding speed was essential for dynamic conversation and fluent communication. The joint UC team explained that BCI speech delays were compounded by the additional time synthesized audio required to play and the time listeners took to comprehend it.

Most existing methods reportedly rely on the ‘speaker’ training the interface by physically going through the motions of vocalizing, which makes it hard to collect enough decoding data from individuals who are out of practice or have always had difficulty speaking. To overcome both of these hurdles, the UC researchers trained a flexible, deep-learning neural network on the 47-year-old participant’s “sensorimotor cortex activity” while she silently ‘spoke’ 100 unique sentences from a vocabulary of just over 1,000 words, as sketched below.
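
The following is a minimal, hypothetical sketch of that kind of setup, assuming a simple recurrent network in PyTorch; the study’s actual architecture, electrode count, and speech-unit targets are not specified here, so all shapes and names are illustrative:

```python
import torch
import torch.nn as nn

class NeuralSpeechDecoder(nn.Module):
    """Map windows of sensorimotor-cortex features to speech-unit logits."""
    def __init__(self, n_electrodes: int = 256, n_units: int = 1024):
        super().__init__()
        # Illustrative channel count; the real implant's grid differs.
        self.rnn = nn.GRU(n_electrodes, 256, num_layers=2, batch_first=True)
        self.head = nn.Linear(256, n_units)  # ~1,000-word vocabulary

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, _ = self.rnn(x)   # x: (batch, time, electrodes)
        return self.head(h)  # logits per 80 ms time step

# Training loop over silently 'spoken' sentences: neural recordings
# paired with target speech units (random synthetic data shown here).
model = NeuralSpeechDecoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(3):  # a few illustrative epochs
    x = torch.randn(8, 50, 256)          # 8 sentences, 50 windows each
    y = torch.randint(0, 1024, (8, 50))  # target unit per window
    logits = model(x)
    loss = loss_fn(logits.reshape(-1, 1024), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In a real system, the random tensors would be replaced by recorded cortical features aligned to each silently attempted sentence, and the per-window unit logits would feed a streaming synthesizer like the one sketched above.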
