Smartphones are now ‘showing’ their smartness to blind people as well. Computer vision and machine learning specialists at the University of Lincoln, UK, have revealed that they are developing a new adaptive mobile technology that will enable blind people to ‘see’ using their smart devices.
Reports suggest the project is funded by a Google Faculty Research Award. It aims to embed a ‘Smart Vision System’ in mobile handsets to help people with sight problems navigate indoors.
Outlining their plans, the team said the Smart Vision System will be based on the color and depth sensing technology being built into new smartphones and tablets, bringing 3D mapping and localization, navigation and object recognition to these smart devices.
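To give a flavor of what depth sensing contributes to 3D mapping, here is a minimal illustrative sketch (not the project's actual code) of the standard first step: back-projecting a single depth-camera pixel into a 3D point using the pinhole camera model. The intrinsic parameter values are invented for the example.

```python
def depth_pixel_to_point(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with a depth value (in metres) into camera
    space using the standard pinhole model; fx, fy are focal lengths and
    (cx, cy) is the principal point, all assumed known from calibration."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Example: the centre pixel of a 640x480 sensor, with an object 2 m away,
# maps to a point straight ahead of the camera.
point = depth_pixel_to_point(320, 240, 2.0, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print(point)
```

Repeating this over every pixel of a depth frame yields the point clouds from which a 3D map of an indoor space can be assembled.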
A user-friendly interface will then be developed to relay this information to users – whether through vibrations, sounds or the spoken word.
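The interface layer described above could be sketched as a simple dispatcher that renders the same navigation cue through whichever output channel the user prefers. Everything here is a hypothetical illustration; the function name, channel names and message formats are assumptions, not the project's design.

```python
def relay(cue, mode="speech"):
    """Render a navigation cue as a vibration pattern, an audio tone, or
    spoken word. The string outputs stand in for real device APIs
    (haptics, audio, text-to-speech), which are assumed here."""
    if mode == "vibration":
        return f"[vibrate pattern for: {cue}]"
    if mode == "sound":
        return f"[tone for: {cue}]"
    # Default channel: spoken description of the cue.
    return f"[speak: {cue}]"

print(relay("door ahead on the left"))
print(relay("turn left", mode="vibration"))
```

Keeping the cue itself separate from how it is rendered is what lets one system serve users with different preferences or levels of hearing.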
The research team includes project lead Dr Nicola Bellotto, an expert on machine perception and human-centered robotics from Lincoln’s School of Computer Science; Dr Oscar Martinez Mozos, a specialist in machine learning and quality-of-life technologies; and Dr Grzegorz Cielniak, who works in mobile robotics and machine perception. Together they aim to develop a system that will help blind people independently recognize visual cues in their environment.
“This project will build on our previous research to create an interface that can be used to help people with visual impairments,” Bellotto revealed. “There are many visual aids already available, from guide dogs to cameras and wearable sensors. Typical problems with the latter are usability and acceptability.”
“There are also existing smartphone apps that are able to, for example, recognize an object or speak text to describe places. But the sensors embedded in the device are still not fully exploited.”
“We aim to create a system with ‘human-in-the-loop’ that provides good localization relevant to visually impaired users and, most importantly, that understands how people observe and recognize particular features of their environment,” said Bellotto.
The device will take data from its camera as input and then try to identify the type of room as the user moves around.
By using artificial intelligence, the system will be better able to adapt to the individual user’s experience as the machine ‘learns’ from its surroundings and from human interaction.
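The human-in-the-loop learning described above could be illustrated with a toy example: a nearest-centroid room classifier that nudges its model whenever the user corrects a prediction. The class, the two-value feature vectors and the room labels are all invented for this sketch; the real system would use far richer image features.

```python
class RoomRecogniser:
    """Toy human-in-the-loop classifier: predicts a room label from camera
    features and updates per-room centroids from user corrections."""

    def __init__(self):
        self.centroids = {}  # room label -> (mean feature vector, sample count)

    def predict(self, features):
        """Return the room whose centroid is closest (squared Euclidean
        distance) to the observed features, or None before any training."""
        if not self.centroids:
            return None
        def dist(label):
            mean, _ = self.centroids[label]
            return sum((f - m) ** 2 for f, m in zip(features, mean))
        return min(self.centroids, key=dist)

    def correct(self, features, label):
        """Fold the user's correction into that room's running mean, so the
        model adapts to this particular user's environment over time."""
        mean, n = self.centroids.get(label, ([0.0] * len(features), 0))
        new_mean = [(m * n + f) / (n + 1) for m, f in zip(mean, features)]
        self.centroids[label] = (new_mean, n + 1)

rec = RoomRecogniser()
rec.correct([0.9, 0.1], "kitchen")   # user labels a first observation
rec.correct([0.1, 0.8], "corridor")  # and a second, different room
print(rec.predict([0.85, 0.2]))      # -> kitchen
```

Each correction shifts only that room's centroid, which is what makes the behavior adapt to the individual user rather than staying fixed at factory settings.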
“If people were able to use technology embedded in devices such as smartphones, it would not require them to wear extra equipment which could make them feel self-conscious,” Bellotto added.