With the rapid pace of development in artificial intelligence (AI), several tech giants and independent developers have introduced new tools, applications, and accessories to support the blind community. Apple and Google have recently built dedicated accessibility systems for blind users. Now, an India-based developer has created an AI-powered backpack that helps blind people navigate their surroundings without assistance from another person.
The AI backpack was designed by Jagdish Mahendran, an Indian computer vision researcher. He began working on it back in 2013, after seeing the daily navigation struggles of a friend who is blind. In 2020, Mahendran won an Intel AI competition for the idea.
Mahendran first surveyed the existing technologies built to help blind people navigate. He then settled on the backpack approach, which uses cameras and AI sensors to analyze the environment and give immediate feedback to the user.
How Does It Work?
The setup is fairly simple. The user wears the backpack along with a Bluetooth-enabled headset, a vest containing the essential sensors, and a fanny pack that holds the battery and other accessories. The vest includes hidden Intel sensors and a front-facing camera to capture the surroundings.
The system is voice-driven. When the user activates it by saying “Start”, it begins analyzing the surroundings and delivers immediate feedback through the Bluetooth headset. Mahendran says he focused on shortening the delay between processing and feedback delivery, since delays can be risky for blind users. To that end, he used the Intel-powered Luxonis OAK-D unit, which processes camera data on the device itself to provide near-instant feedback.
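The loop described above (camera frame in, spoken feedback out, with latency kept to a minimum) can be sketched roughly as follows. This is an illustrative sketch only: the `detect_obstacles` stub, the `run_session` function, and the returned detection string are all assumptions, not Mahendran's actual code, and the real system runs inference on the OAK-D's onboard hardware rather than on the host.

```python
import time

def detect_obstacles(frame):
    # Stub for on-device inference; in the real system this work happens
    # on the OAK-D unit itself, so the host never waits on raw frames.
    # The returned label is purely illustrative.
    return ["low-hanging branch"]

def run_session(frames, speak):
    """Voice-triggered session loop: analyze each frame, speak feedback,
    and report the worst per-frame latency in milliseconds."""
    worst_ms = 0.0
    for frame in frames:
        t0 = time.perf_counter()
        for obstacle in detect_obstacles(frame):
            speak(obstacle)  # delivered over the Bluetooth headset
        worst_ms = max(worst_ms, (time.perf_counter() - t0) * 1000)
    return worst_ms
```

Tracking per-frame latency like this is one simple way to verify that the processing-to-feedback delay stays within a safe budget.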
The developer also said he wanted to keep the interaction simple, so he designed the system around short commands such as Left, Right, Describe, and Locate. The directional commands report any obstacle to the user’s left, right, or above them, while a command like Locate pulls up a list of saved locations, such as home or the office, and provides directions to them.
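A minimal command dispatcher for this interface might look like the sketch below. The command names (Left, Right, Describe, Locate) come from the article; everything else, including the `handle_command` function, the obstacle dictionary, and the saved-locations data, is a hypothetical illustration rather than the actual implementation.

```python
# Illustrative saved locations; the real system would store the user's own.
SAVED_LOCATIONS = {"home": "123 Main St", "office": "456 Work Ave"}

def handle_command(command, obstacles):
    """Map a spoken command to a short spoken response string.

    `obstacles` maps a direction ("left", "right", "top") to a list
    of detected object labels in that direction.
    """
    cmd = command.strip().lower()
    if cmd in ("left", "right", "top"):
        found = obstacles.get(cmd, [])
        if not found:
            return f"No obstacles on your {cmd}."
        return f"Obstacle {', '.join(found)} on your {cmd}."
    if cmd == "describe":
        items = [o for objs in obstacles.values() for o in objs]
        if not items:
            return "Nothing detected."
        return "I see: " + ", ".join(items)
    if cmd == "locate":
        return "Saved locations: " + ", ".join(SAVED_LOCATIONS)
    return "Command not recognized."
```

For example, `handle_command("Left", {"left": ["trash can"]})` would produce a short sentence suitable for reading aloud over the headset. Keeping each response to one sentence matches the stated goal of simple, fast interaction.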
Although the system’s output is audio-based, it also captures images and labels the different objects it detects.
Looking ahead, Mahendran intends to keep refining the technology and continue working with the community. He also plans to make his data freely available so other researchers can use it in their own projects.