
A neural brain implant offers near-instantaneous speech

    Delays and dictionaries

    A year after the Stanford work, in 2024, the Stavisky team published its own research on a brain-to-text system that pushed accuracy to 97.5 percent. “Almost every word was correct, but communicating via text can be limiting, right?” said Stavisky. “Sometimes you want to use your voice. With this you can interject, and it makes it less likely that other people interrupt you – you can sing, you can use words that are not in the dictionary.” But the most common approach to generating speech relied on synthesizing it from text, which led directly to another problem with BCI systems: very high latency.

    In almost all BCI speech aids, sentences appeared on a screen only after a considerable delay, long after the patient had finished forming the words in their mind. Speech synthesis usually began only once the text was ready, adding even more delay. Brain-to-text solutions also suffered from limited vocabularies. The latest system of this kind supported a dictionary of around 1,300 words. If you tried to speak another language, use a more extensive vocabulary, or even say the unusual name of the café around the corner, these systems failed.

    So Wairagkar designed her prosthesis to translate brain signals into sounds, not words – and to do it in real time.
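
    To make the latency argument concrete, here is a minimal, hypothetical sketch (not code from the study) of the two pipeline shapes: a text-first pipeline that cannot produce any audio until the whole sentence has been decoded and synthesized, versus a streaming pipeline that maps each short window of neural activity directly to a chunk of sound. Every function, variable, and window size below is an illustrative stand-in.

    ```python
    # Toy contrast between a text-first BCI pipeline and a direct sound-streaming one.
    # All decoders here are placeholders; a real system would process neural features.

    def text_pipeline(neural_windows):
        """Two-stage approach: collect the whole utterance, decode it to text,
        then synthesize audio from that text. Nothing is heard until the end."""
        full_signal = []
        for window in neural_windows:          # wait for every window first
            full_signal.extend(window)
        text = f"<decoded sentence from {len(full_signal)} samples>"  # stand-in decoder
        audio = f"<audio synthesized from: {text}>"                   # stand-in TTS step
        return [audio]                         # one chunk, delivered after a long delay

    def streaming_pipeline(neural_windows):
        """Direct approach: turn each short window of neural activity into a chunk
        of sound and emit it immediately, so playback tracks the speech attempt."""
        for window in neural_windows:
            yield f"<sound chunk from {len(window)} samples>"  # stand-in acoustic decoder

    if __name__ == "__main__":
        windows = [[0.0] * 20 for _ in range(5)]   # five fake neural-activity windows

        print("text-first pipeline (output arrives once, at the end):")
        for chunk in text_pipeline(windows):
            print(" ", chunk)

        print("streaming pipeline (output arrives window by window):")
        for chunk in streaming_pipeline(windows):
            print(" ", chunk)   # in a real system this chunk would be played back at once
    ```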

    Extracting

    The patient who agreed to participate in Wairagkar's study, codenamed T15, was a 46-year-old man with ALS. “He is severely paralyzed, and when he tries to speak, he is very difficult to understand. I have known him for a few years, and when he speaks, I understand maybe 5 percent of what he says,” said David M. Brandman, a neurosurgeon and co-author of the study. Before working with the UC Davis team, T15 communicated using a gyroscopic head mouse to control a cursor on a computer screen.