Mind-controlled robots are not a new concept for scientists. But despite numerous technological advances, they remain in a developmental phase, and controlling robots directly through brain signals is still poorly understood. That may soon change: a team of scientists from the Massachusetts Institute of Technology (MIT) has come up with a new method for controlling robots through brain signals.
Controlling robots is not just a matter of programming and instructing them, but of ensuring that they follow those commands correctly. Direct brain control of robots has long been difficult for researchers. The new method, developed by a team from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and Boston University, may sidestep the old difficulties of a mechanical interface or of teaching a robot to respond to voice instructions. Rather than relying on automated inference alone, the system turns the human operator into a passive fault detector, alerting the robot whenever it makes a mistake.
The research team used data from an electroencephalography (EEG) monitor, which records brain activity, enabling the system to notice when a person perceives an error while the robot performs an object-sorting task. Machine-learning algorithms allow the system to classify these brain waves, known as error-related potentials, within 10 to 30 milliseconds, and so to detect the mistakes the robot makes while carrying out a task.
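The idea of classifying short EEG windows as "error" or "no error" can be illustrated with a toy example. This is a minimal sketch, not the team's actual pipeline: the sampling rate, window length, signal shape, and nearest-centroid classifier are all illustrative assumptions, and the "EEG" data here is simulated noise with a crude stand-in for an error-related deflection.

```python
# Illustrative sketch only: simulate short single-channel EEG windows and
# classify them as error / no-error with a nearest-centroid rule.
# All numbers (1 kHz sampling, 30 ms windows, deflection shape) are assumptions.
import numpy as np

rng = np.random.default_rng(0)

FS = 1000   # assumed sampling rate in Hz
WIN = 30    # a 30 ms window = 30 samples at 1 kHz

def make_window(is_error):
    """Simulated EEG window (arbitrary units)."""
    x = rng.normal(0.0, 1.0, WIN)   # background noise
    if is_error:
        x[10:20] -= 2.0             # crude stand-in for an error-related dip
    return x

# Build a small labelled training set (alternating no-error / error)
X = np.array([make_window(i % 2 == 1) for i in range(400)])
y = np.array([i % 2 for i in range(400)])

# "Training": the mean window for each class is its centroid
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def classify(window):
    """Return 1 (error perceived) or 0 (no error) by nearest centroid."""
    dists = np.linalg.norm(centroids - window, axis=1)
    return int(np.argmin(dists))

# Evaluate on fresh simulated windows
trials = [(make_window(is_err), int(is_err)) for is_err in [False, True] * 50]
acc = np.mean([classify(w) == lbl for w, lbl in trials])
print(f"accuracy on simulated windows: {acc:.2f}")
```

Even this simple rule separates the two classes easily on simulated data, because the injected dip shifts error windows far from the no-error centroid; real EEG classification is far noisier and typically uses calibrated, subject-specific classifiers.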
According to the researchers, previous attempts to control robots this way connected the operator to an electroencephalography (EEG) monitor and then required them to command the robot by thinking in definite, carefully prescribed ways. The problem with that approach was that the training ran in only one direction and was exhausting, since it demanded the operator's constant attention.
The new system, by contrast, works without such training and can detect the robot's faults from the signals transmitted by the human brain. The method could also help people who are unable to communicate verbally, by letting them express choices through a series of distinct binary options. Moreover, the development opens up the possibility of effective brain-controlled prosthetic devices.
According to Daniela Rus, Director of the Computer Science and Artificial Intelligence Laboratory (CSAIL), "When you look at the robot, all you need to do is mentally agree or disagree with what it is doing. You don't need to think in a particular or trained way to agree or disagree with the robot's actions. The machine adapts to what you think and instructs the robot to correct its mistakes."