During the 2012/13 school year, we produced a piece of software that, in conjunction with a tablet and depth camera, allows a blind person to move safely and freely around both familiar and unfamiliar environments.

It uses auditory feedback to give the user a sense of what is in front of her or him; tactile feedback is still in the works. The software analyzes a depth image of the user’s surroundings and calls out specific points of interest or hazards. We took this project to the Intel International Science and Engineering Fair in Phoenix, Arizona, and placed third in computer science.

Our idea started with the question: How can we make computer experiences more accessible? We concluded that natural human-computer interaction still has plenty of unexplored potential – especially as a way to aid people with disabilities.

Almost immediately, our research led us to several projects attempting to use electrostatic friction to simulate different textures on a touch screen. Soon after finding this research, someone on the team asked: Wouldn’t that be useful for blind people? One of the original designs for our project was to use these electrostatic screens in conjunction with a depth camera (worn around the user’s neck) to relay information about the environment to a visually impaired person. By dragging a finger across the screen, the user would be able to gain a basic idea of the environment in front of them: closer objects would have more friction and farther objects less. We hoped to locate features like doors, stairs and drop-offs – common hazards that are hard to detect by traditional means.
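To make that mapping concrete, here is a minimal sketch of the kind of depth-to-friction rule we had in mind. The function name and the four-meter cutoff are illustrative assumptions for this post, not values from our actual code.

```python
def friction_level(depth_m, max_range_m=4.0):
    """Map the depth reading (in meters) under the user's finger to a
    friction level in [0, 1]: closer objects feel 'rougher'.
    The 4 m cutoff is an assumed working range, not a measured one."""
    clamped = max(0.0, min(depth_m, max_range_m))
    return 1.0 - clamped / max_range_m
```

In the electrostatic design, that level would have set how strongly the screen resists the finger.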

Unfortunately, the electrostatic hardware behind our original idea was still in development, and any device with the sensitivity and range of functions necessary for our design would be the size of a large office printer – not something a blind person could carry around. However, the software was the piece of the puzzle we were inventing, and we felt the core concept would still apply. Our final design was essentially the same as the initial one, except that we used a standard tablet and had it play higher or lower pitches depending on the distance of whatever the user was touching in the image. The tablet would also say the word ‘stairs’ or ‘warning’ for environmental hazards it detected. Beyond the original plan, we also experimented with several other features, such as feedback from vibrating motors on the user’s fingertips instead of sound, and having the computer read street signs and posters out loud.
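As a rough illustration of how that feedback works, the sketch below maps the depth under the user’s finger to an audio frequency and picks the word to speak when a hazard has been labelled. The pitch range, the hazard labels and the helper names are assumptions made for this post rather than the project’s actual constants, and a real app would stream the tone and the speech to the audio device instead of writing a file.

```python
import math
import struct
import wave

SAMPLE_RATE = 22050                  # audio sample rate in Hz
MIN_FREQ, MAX_FREQ = 220.0, 880.0    # assumed pitch range (A3 to A5)
MAX_RANGE_M = 4.0                    # assumed usable range of the depth camera

def depth_to_frequency(depth_m):
    """Closer objects get higher pitches; the direction and the range
    of the mapping are assumptions for this sketch."""
    clamped = max(0.0, min(depth_m, MAX_RANGE_M))
    nearness = 1.0 - clamped / MAX_RANGE_M
    return MIN_FREQ + nearness * (MAX_FREQ - MIN_FREQ)

def hazard_word(label):
    """Turn a detected hazard label into the word the tablet speaks."""
    return {"stairs": "stairs", "drop_off": "warning"}.get(label)

def write_tone(path, freq_hz, duration_s=0.15):
    """Write a short sine tone to a WAV file so the sketch is runnable
    without any audio libraries."""
    n_samples = int(SAMPLE_RATE * duration_s)
    frames = bytearray()
    for i in range(n_samples):
        sample = 0.4 * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
        frames += struct.pack("<h", int(sample * 32767))
    with wave.open(path, "w") as out:
        out.setnchannels(1)          # mono
        out.setsampwidth(2)          # 16-bit samples
        out.setframerate(SAMPLE_RATE)
        out.writeframes(bytes(frames))

# Example: the finger lands on something 1.2 m away that was labelled "stairs".
write_tone("feedback.wav", depth_to_frequency(1.2))
print(hazard_word("stairs"))         # in the real app this goes to text-to-speech
```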

Once we finished the code, we tested it on ourselves and others at school by blindfolding our subjects and having them walk through a room with unfamiliar furniture and walls. These qualitative tests proved very promising. The software is not yet available for public download: we feel there are still too many corner cases that could put someone in danger if a person who is actually blind trusted it fully. We are currently looking for input from blind people to help us refine the system, and for ways to design a device to accompany the software.

The primary advantage of our system over other solutions is that all the ‘magic’ happens in software. The algorithms we created should work with any touchscreen device and any form of depth camera. It was also of utmost importance to us that the software run as efficiently as possible, so that it can work on lower-powered machines. In the end, we were able to optimize the system to process environmental information and output feedback in real time, which should make it easy to distribute to those in need.
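To show what ‘any form of depth camera’ means in practice, here is a sketch of the thin interface that kind of design implies: as long as a device can hand the software a grid of depth values, the rest of the pipeline does not care where it came from. The class and function names are invented for this post, not taken from our codebase.

```python
from abc import ABC, abstractmethod
from typing import List

DepthFrame = List[List[float]]       # rows of depth values in meters

class DepthSource(ABC):
    """Anything that can produce a 2-D grid of depths will do; the
    processing code never talks to camera hardware directly."""
    @abstractmethod
    def read_frame(self) -> DepthFrame: ...

class FakeCamera(DepthSource):
    """Stand-in so the sketch runs without hardware: a flat wall 2 m away."""
    def read_frame(self) -> DepthFrame:
        return [[2.0] * 320 for _ in range(240)]

def depth_under_finger(frame: DepthFrame, x_ratio: float, y_ratio: float) -> float:
    """Look up the depth at a touch position given as fractions of the
    screen, so the same code works at any screen or camera resolution."""
    row = int(y_ratio * (len(frame) - 1))
    col = int(x_ratio * (len(frame[0]) - 1))
    return frame[row][col]

camera: DepthSource = FakeCamera()
print(depth_under_finger(camera.read_frame(), 0.5, 0.5))   # -> 2.0
```

Supporting a new sensor then only means writing another small DepthSource subclass.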

In the end, it comes down to what will actually help people; all the technical aspects of the project are merely a means to accomplish this goal. And who knows, maybe in the near future ‘Seeing Eye Dogs’ will be replaced with ‘Seeing Eye Pads’.