Voice command devices, like Alexa and Siri, enable humans to engage, operate, and interact with technology through the power of voice, but these technologies fail to account for the voiceless among us. Many people, including those living with neurodegenerative diseases, paralysis, or traumatic brain injuries, are unable to take advantage of such voice-user interface (VUI) devices. That’s where Facebook Reality Labs (FRL) comes in.
FRL has partnered with neuroscience researchers at UCSF to give a voice back to the voiceless by attempting to create the first non-invasive, wearable brain-computer interface (BCI) device for speech. This device would marry “the hands-free convenience and speed of voice with the discreetness of typing.” BCI technology itself is not new, but a BCI capable of converting imagined speech into text without implanted electrodes would be.
In a recent and successful, albeit limited, study, UCSF researchers demonstrated that brain activity recorded while people speak could be decoded into text on a computer screen in real time. For now, however, the algorithm can decode only a small set of words.
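To give a rough intuition for what “decoding a small set of words from brain activity” means, the toy sketch below classifies synthetic “neural feature” vectors into a tiny vocabulary using a nearest-centroid rule. Everything here is invented for illustration: the vocabulary, the feature vectors, and the classifier are all hypothetical stand-ins, not the actual UCSF approach, which relies on intracranial recordings and far more sophisticated models.

```python
# Toy illustration only: decoding a small, made-up vocabulary from
# synthetic "neural feature" vectors with a nearest-centroid classifier.
import random

VOCAB = ["yes", "no", "water", "help"]  # hypothetical small word set
DIM = 8  # dimensionality of the invented feature vectors

random.seed(0)

# Pretend each word evokes a characteristic activity pattern (a centroid).
centroids = {w: [random.gauss(0, 1) for _ in range(DIM)] for w in VOCAB}

def simulate_trial(word, noise=0.3):
    """Return a noisy copy of the word's pattern (a fake recording)."""
    return [x + random.gauss(0, noise) for x in centroids[word]]

def decode(features):
    """Nearest-centroid decoding: pick the closest known word."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(VOCAB, key=lambda w: sq_dist(features, centroids[w]))

# With modest noise, decoding recovers the intended word reliably.
trials = 25
correct = sum(decode(simulate_trial(w)) == w for w in VOCAB for _ in range(trials))
accuracy = correct / (len(VOCAB) * trials)
print(accuracy)
```

The key point the sketch captures is why a small vocabulary matters: with only a handful of distinct target patterns, even a crude classifier separates them, but the difficulty grows quickly as the word set expands.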
Although promising, these results are preliminary, and researchers have a long way to go before the power of this silent speech interface technology can be harnessed non-invasively and in wearable form. Even so, researchers believe this BCI technology “could one day be a powerful input for all-day wearable [augmented reality (AR)] glasses.”
Why it’s hot
Such a radical innovation would not only help those who cannot speak; it could also change how everyone interacts with today’s digital devices.