When Jagadish Mahendran heard about his friend’s daily challenges navigating as a blind person, he immediately thought of his artificial intelligence work.
“For years I had been teaching robots to see things,” he said. Mahendran, a computer vision researcher at the University of Georgia’s Institute for Artificial Intelligence, found it ironic that he had helped develop machines — including a shopping robot that could “see” stocked shelves and a kitchen robot — but nothing for people with low or no vision.
After exploring existing tech for blind and low-vision people, like camera-enabled canes or GPS-connected smartphone apps, he came up with a backpack-based AI design that uses cameras to provide instantaneous alerts.
Mahendran focused on latency. Delays can be dangerous when, for example, a car quickly crosses an intersection. He said he significantly shortened lag time.
“[The camera data] is processed right away as it’s captured,” he said.
That data makes it to a Bluetooth-enabled earphone that alerts the user to any obstructions or route changes. Along with the backpack, the user wears a vest and fanny pack, which house the AI equipment, sensors, camera, and GPS.
He described the system, which has a battery life of eight hours, as “simple, wearable, and unobtrusive.”
Currently, the vest includes hidden Intel sensors and a front-facing camera, while the backpack and fanny pack house a small computing unit and power source. The Luxonis OAK-D unit, built on Intel technology and housed in the vest and fanny pack, is an AI device that processes camera data almost instantly to interpret the world around the user.
People interact with it through voice commands, such as “start,” which activates the system. “Describe” collects information about objects within camera view, while “locate” pulls up saved locations from the GPS system, like an office or home address.
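At its core, a voice interface like the one described boils down to mapping recognized phrases to actions. The sketch below is purely illustrative and not the project's code; the command names come from the article, but the handler and its responses are invented:

```python
# Hypothetical command dispatcher, not the project's actual code.
# Maps the voice commands mentioned in the article to system actions.

def handle(command: str) -> str:
    """Return the system's response to a recognized voice command."""
    actions = {
        "start": "system activated",
        "describe": "describing objects in camera view",
        "locate": "reading saved GPS locations",
    }
    # Fall back to a safe response for anything unrecognized.
    return actions.get(command.strip().lower(), "command not recognized")
```

For example, `handle("Describe")` returns `"describing objects in camera view"`, while an unknown phrase falls through to the safe default.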
Mahendran said he wanted to “keep communication simple,” so the user wouldn’t be overwhelmed with a constant barrage of audio about the surroundings. Instead the machine reads short prompts, like “Left,” when there’s something to the user’s left, or “Top, Front,” to describe a tree branch in the way.
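A minimal sketch of that terse-prompt idea (hypothetical; the system's real mapping isn't published) might bucket an obstacle's position in the camera frame into a short spoken cue:

```python
# Illustrative only: turn an obstacle's normalized frame position
# into a short cue like "Left" or "Top, Front".

def prompt_for(x_frac: float, y_frac: float) -> str:
    """Return a terse cue for an obstacle centered at (x_frac, y_frac),
    where both coordinates are normalized to [0, 1] and (0, 0) is the
    top-left of the frame."""
    parts = []
    if y_frac < 0.33:
        parts.append("Top")  # e.g. an overhanging tree branch
    if x_frac < 0.33:
        parts.append("Left")
    elif x_frac > 0.67:
        parts.append("Right")
    else:
        parts.append("Front")
    return ", ".join(parts)

print(prompt_for(0.2, 0.5))  # obstacle to the user's left -> "Left"
print(prompt_for(0.5, 0.1))  # branch overhead, ahead -> "Top, Front"
```

Keeping the vocabulary this small is what lets the audio stay out of the user's way: one or two words per alert rather than a running description of the scene.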
He also created an online community for his technology, called MIRA. Blind volunteers helped him create the interface for MIRA and the backpack system.
The system is audio based, but images taken from its camera are annotated with labels for objects such as pedestrians, buildings, stop signs, and other cars, letting the developers see what the computer in the backpack is processing.
Some of Mahendran’s other AI work has involved machine vision for autonomous vehicles. “But you can’t exactly port self-driving cars to this problem,” he said. “People are usually on the sidewalk, while cars are on the road.”
It’s different enough that he and his team had to re-engineer existing autonomous vehicle open data for pedestrian situations.
He’s planning on keeping the collaborative energy going and making all of his data freely available to other researchers. He also submitted a research paper on the project that’s awaiting review.
Hema Chamraj, Intel’s director of AI4Good, said projects like this will become more prevalent with more access to low-cost but powerful computing tools. “It’s amazing how much imagination is out there,” she said, musing about other potential projects that could combine machine learning with medical assistance.
Mahendran’s AI backpack isn’t for sale, but he says he plans to start a GoFundMe to equip any blind pedestrians who want to use the system.
As for Mahendran’s friend who sparked the project, he’s shipping her a unit in a few weeks to get her feedback from real-life experiences.