Ever wondered how it would feel to type something on a computer without moving your fingers, simply by thinking about it? Well, researchers are working on exactly that and have made considerable progress. Researchers at the University of California, San Francisco, funded by Facebook Reality Labs, have created a brain-computer interface, or BCI, that can recognize some of the patterns people's brains produce when they utter specific words or phrases and convert them into text.
The software has been trained to work with just a handful of words and phrases, although as the technology progresses, researchers hope to make it more powerful and help understand what people with disabilities such as paralysis are thinking. At the moment, people with such illnesses use virtual keyboards that they control through eye movements and muscle twitches to converse, which is a time-consuming method. However, Edward Chang, a neurosurgeon and the lead researcher on the study, which was recently published in the journal Nature Communications, states that no prosthetic system has yet been able to pull off rapid interactions between humans.
Researchers implanted electrodes in the brains of three epilepsy patients who were preparing for neurosurgery and monitored their brains for a week to pinpoint the origins of their seizures. These patients were then given a set of nine questions with 24 potential responses in words or phrases. The software captured the responses and converted them into computer models that helped ascertain the patterns associated with each question and its answers.
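To get an intuition for this kind of decoding, here is a deliberately simplified sketch. The study's actual models are far more sophisticated, and the phrases and numbers below are hypothetical; the toy nearest-template classifier only illustrates the basic idea of matching a recorded activity pattern to the closest known phrase:

```python
import math

# Hypothetical "template" activity patterns, one per candidate phrase
# (invented for illustration; not the study's real questions or features).
templates = {
    "How is your room currently?": [0.9, 0.1, 0.3],
    "Bring me my glasses": [0.2, 0.8, 0.5],
    "I am feeling cold": [0.4, 0.4, 0.9],
}

def decode(recording):
    """Return the phrase whose template is closest (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(templates, key=lambda phrase: dist(templates[phrase], recording))

print(decode([0.85, 0.15, 0.25]))  # closest to the first template
```

A real system would learn such templates statistically from many recordings rather than hard-coding them, but the matching principle is the same.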
According to David Moses, a researcher on the team, this is the first time this approach has been used to identify the words and phrases that the brain encodes as patterns. The system is currently limited to a very small vocabulary; however, as the study progresses, the vocabulary is expected to grow, along with accuracy and other metrics.
According to Facebook, the project was announced back in 2017 with the sole goal of decoding silent speech, and soon after, the social media giant joined forces with researchers at the University of California, San Francisco to conduct this experiment.
The research also found that the readings recorded by the electrodes were 61% accurate. Facebook is looking to expand the vocabulary to achieve a real-time decoding speed of 100 words per minute with a word error rate of less than 17%, and it is prepared to sustain the study even if it takes a decade or so. This study could one day form the basis of Facebook's AR glasses, enabling users to communicate without needing smartphones.
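Word error rate, the metric behind Facebook's sub-17% target, is the standard yardstick for speech decoding: the number of word substitutions, insertions, and deletions needed to turn the decoded sequence into the reference, divided by the reference length. A minimal sketch of how it is computed:

```python
def wer(reference, hypothesis):
    """Word error rate via a classic dynamic-programming edit distance over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substituted word out of four gives a WER of 0.25.
print(wer("bring me my glasses", "bring me the glasses"))  # 0.25
```

A word error rate under 17% would mean fewer than roughly one in six decoded words differs from what the user intended.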