How AI Can Increase The Quality of Life for People with Visual Impairments


AI-powered solutions are changing the lives of people with visual impairments by providing innovative tools for interaction and accessibility.

SwissCognitive Guest Blogger: Artem Pochechuev, Head of Data Science at Cortlex – “How AI Can Increase The Quality of Life for People with Visual Impairments”

Quite often, when somebody tries to explain how people with visual impairments perceive the world and interact with it, they ask you simply to close your eyes. But that is not entirely accurate. People with different levels of blindness may still perceive light or blurry shapes. More importantly, they explore the environment around them in an entirely different way: through touch and sound. While sighted people receive nearly 90% of all information through their eyes, people with visual impairments have to rely on alternative channels.

The era of digitalization has already changed the lives of people with visual impairments significantly by introducing solutions that facilitate many everyday tasks and increase their safety.

And now that we have such a powerful tool as AI, we need to use every available possibility to help people in their everyday interactions with the world around them.

Different types of AI-powered solutions for people with visual disorders

AI-powered solutions built for users with visual impairments are hardly the latest innovation. Tools like screen readers that transform text into speech are already widely adopted. But let’s admit that today we are living in the Visual Age, which means that videos and images are continuously gaining importance in our daily routine. The internet and many software apps (like social media platforms) that used to be text-based are becoming less accessible to people with visual impairments. As a result, a pressing need has emerged for more advanced solutions that can not only voice textual information but also recognize, interpret, and explain everything displayed on the screen, including visual content.

But here it is vital to understand that things are not as straightforward as they may seem. If a screen reader simply describes all page elements in a fixed order, the whole process may become chaotic and sound confusing to users. Why does this happen? Web pages may contain many unstructured visual elements, links, and other types of content that, once read aloud, may prevent users from catching the key idea. And that is one of the issues developers need to address today: screen readers must be able to understand which parts of the content are worth voicing and which are unimportant to the user.
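
As a rough illustration of that prioritization problem, a reading tool can use the page’s own semantics (heading tags, ARIA roles, alt attributes) to decide what deserves a voice. The heuristic below is a minimal sketch in Python, assuming standard HTML/ARIA conventions; real screen readers work from the browser’s accessibility tree rather than raw markup, and the filtering rules here are illustrative only.

```python
# A toy "what is worth voicing" filter, assuming standard HTML/ARIA semantics.
# Requires: pip install beautifulsoup4
from bs4 import BeautifulSoup

SKIP_ROLES = {"presentation", "none"}  # ARIA roles that mark decorative elements

def voiceable_elements(html: str):
    """Yield (kind, text) pairs in reading order, dropping decorative noise."""
    soup = BeautifulSoup(html, "html.parser")
    for el in soup.find_all(["h1", "h2", "h3", "p", "li", "img"]):
        if el.get("aria-hidden") == "true" or el.get("role") in SKIP_ROLES:
            continue
        if el.find_parent(["nav", "aside", "footer"]):
            continue  # boilerplate regions: menus, sidebars, footers
        if el.name == "img":
            alt = el.get("alt", "").strip()
            if alt:  # an empty alt conventionally marks the image as decorative
                yield ("image", alt)
            continue
        text = el.get_text(" ", strip=True)
        if text:
            yield (el.name, text)

page = (
    '<nav><li>Home</li><li>Login</li></nav>'
    '<h1>Forecast</h1>'
    '<img src="bg.png" alt="">'  # decorative, so it is skipped
    '<p>Rain is expected after noon.</p>'
)
for kind, text in voiceable_elements(page):
    print(f"{kind}: {text}")  # -> "h1: Forecast" and "p: Rain is expected after noon."
```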

Apps with screen-reading tools can help people with different types of visual impairments study and work on practically the same terms as those without such impairments. It is really inspiring to hear the stories of graduates who earned their bachelor’s or master’s degrees in fields like IT despite being fully blind. Moreover, thanks to screen readers, they can become valuable members of software development teams: with such solutions, only their development skills matter, not their disabilities.

Though the internet and computers in general play a very important role in the modern world, they are not the only things a person needs to interact with. AI can also facilitate communication with the physical world. Already today, there are apps that rely on image recognition technology to identify objects. A user just needs to point a smartphone’s camera at an object, and the app voices what it sees. Applications of this type use deep learning to detect objects: they break down the image into separate items and estimate how closely each one matches the objects they were trained to recognize.
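
To make the recognition step concrete, here is a minimal sketch of the "point the camera, hear the answer" loop. It stands in a pretrained ImageNet classifier from torchvision for the app’s own model and uses pyttsx3 for offline speech output; both libraries, the file name, and the spoken phrasing are assumptions for illustration, not what any real app ships.

```python
# Requires: pip install torch torchvision pyttsx3 pillow
import torch
import pyttsx3
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()  # pretrained ImageNet classifier
preprocess = weights.transforms()         # the resize/normalize pipeline it was trained with

def describe(image_path: str) -> str:
    """Classify one camera frame and return a short spoken-style description."""
    img = Image.open(image_path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)  # add a batch dimension
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)
    conf, idx = probs.max(dim=1)
    label = weights.meta["categories"][idx.item()]
    return f"This looks like a {label}, {conf.item():.0%} confident."

sentence = describe("frame.jpg")  # hypothetical frame captured from the camera
engine = pyttsx3.init()
engine.say(sentence)
engine.runAndWait()
```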

Seeing AI by Microsoft is a great example of a solution developed for people with full or partial blindness. It is an ongoing research project conducted with the help of people who have visual impairments. The main goal of the project is to build a comprehensive app that narrates the visual world by describing texts, objects, and people.

Today, the app offers a number of tools for handling different everyday tasks. Here are some examples (a small code sketch of the first one follows the list):

  • It can read text after a user points a camera at it.
  • It can scan barcodes and voice product names and package information.
  • It can save the faces of people, estimate their gender and age, and even read emotions.
  • It can recognize currency notes.
  • It can voice a general description of the entire scene in a photo or picture.
  • It can identify colors.
  • It can provide indoor navigation.
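
As a small illustration of the first item on that list, the sketch below chains an off-the-shelf OCR engine to a speech synthesizer. pytesseract and pyttsx3 are stand-ins chosen for the example; Seeing AI’s actual pipeline is not public.

```python
# Requires: pip install pytesseract pyttsx3 pillow (plus the Tesseract OCR binary)
import pytesseract
import pyttsx3
from PIL import Image

def read_aloud(image_path: str) -> None:
    """OCR a single photo and speak whatever text was found on it."""
    text = pytesseract.image_to_string(Image.open(image_path)).strip()
    engine = pyttsx3.init()
    engine.say(text if text else "No readable text detected.")
    engine.runAndWait()

read_aloud("label.jpg")  # hypothetical photo of a product label
```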

It’s interesting to note that some companies and startups are going one step further and are developing AI-powered devices that can be attached to any pair of glasses and offer audio descriptions of everything that surrounds the user. Such a device can, for example, recognize faces, read documents and road signs, and warn about hazards that can pose a threat to people with visual impairments: crossroads, unregulated pedestrian crossings, curbs, road damage, and so on.

AI-powered Envision Glasses can translate visual information into speech in more than 60 languages. The glasses are equipped with an 8-megapixel camera and can recognize objects and read texts for users. They weigh less than 50 grams, and the battery lasts around 5-6 hours on a single charge. They offer Bluetooth and Wi-Fi connectivity and can be operated either via a mobile app or independently.

Such devices can also be connected to cutting-edge navigation apps developed for people with visual impairments. These applications must provide highly precise instructions. They should rely on real-time data about traffic conditions, the state of infrastructure, and so on, and interpret it to adjust their guidance. Such applications should accurately determine the user’s location, recognize voice commands, and provide audio instructions. The key challenge for developers here is to ensure the required accuracy of navigation and the apps’ capacity to work with continuously updated real-time data in order to minimize the risks for users.

One such solution is BlindSquare, a GPS-powered app. It has self-voicing functionality that helps users travel safely both indoors and outdoors. The application determines the user’s location and then fetches data about the surroundings from external navigation services. To hear the current address and information about nearby places of interest, the user shakes the device. When the user is moving, the app tracks the destination and announces the direction and distance, as sketched below. BlindSquare relies on Acapela voices, and the app currently speaks many different languages.
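
The "direction and distance" cue itself is standard great-circle math. The sketch below turns two GPS fixes into a spoken-style instruction; the haversine and bearing formulas are textbook, while the coordinates and phrasing are made up for the example.

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Return great-circle distance in meters and initial bearing in degrees."""
    R = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat, dlon = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    dist = 2 * R * math.asin(math.sqrt(a))  # haversine formula
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = (math.degrees(math.atan2(y, x)) + 360) % 360
    return dist, bearing

def spoken_cue(dist: float, bearing: float) -> str:
    points = ["north", "north-east", "east", "south-east",
              "south", "south-west", "west", "north-west"]
    return f"Destination {dist:.0f} meters to the {points[round(bearing / 45) % 8]}."

# Two made-up fixes a few kilometers apart in central Paris.
d, b = distance_and_bearing(48.8584, 2.2945, 48.8606, 2.3376)
print(spoken_cue(d, b))  # -> roughly "Destination 3164 meters to the east."
```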

While outdoor navigation can greatly increase the socialization and mobility of people with visual impairments, AI-powered voice assistants can revolutionize users’ daily routines at home, at work, and virtually everywhere. Amazon’s Alexa and Apple’s Siri are the best-known examples of such solutions, but they are far from the only ones. Many companies and startups are building apps compatible with smart speakers and various models of mobile devices.

For people without visual impairments, such solutions are convenient helpers; for people with different types of blindness, these voice assistants can become absolutely irreplaceable tools.

When you need to check the weather forecast, you can just glance at the data provided by your favorite app. When you want to find out the result of a football match or a song contest, you can open a website. A few decades ago, people with visual impairments had to ask relatives or friends for such information. Now voice assistants can find and read it for them.

Modern AI-powered solutions of this kind rely on technologies such as natural language processing (NLP) and are able to follow voice commands. The range of these commands extends far beyond googling something or reading particular texts. Voice assistants can dial phone numbers and make calls, send messages, switch music on and off, take notes, set alarms, remind users of important meetings or events, and more.
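
Behind such commands sits an intent-recognition step: the transcribed utterance is matched to an action. Production assistants use trained NLU models for this, but a minimal keyword-based sketch shows the shape of the mechanism; all intents and handlers here are invented for illustration.

```python
import re
from datetime import datetime

def set_alarm(m):     return f"Alarm set for {m.group(1)}."
def call_contact(m):  return f"Calling {m.group(1)}..."
def tell_time(m):     return f"It is {datetime.now():%H:%M}."

# Ordered (pattern, handler) pairs: the first match wins.
INTENTS = [
    (re.compile(r"set an? alarm for (\S+)", re.I), set_alarm),
    (re.compile(r"\bcall (\w+)", re.I), call_contact),
    (re.compile(r"what time is it", re.I), tell_time),
]

def handle(utterance: str) -> str:
    """Route one transcribed utterance to the first matching intent."""
    for pattern, handler in INTENTS:
        m = pattern.search(utterance)
        if m:
            return handler(m)
    return "Sorry, I did not understand that."

print(handle("Please set an alarm for 7:30"))  # -> "Alarm set for 7:30."
print(handle("Call Maria"))                    # -> "Calling Maria..."
```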

Some of these apps already integrate with Google and Microsoft calendars, which greatly simplifies many processes for users. AI-powered assistants can also analyze users’ preferences in music, podcasts, and fields of knowledge in order to recommend content that may interest them.

Today, the companies behind such solutions are actively working on expanding their functionality so that the assistants can handle as many tasks as possible.

Closing word

As we can see, the AI-powered solutions already available on the market can greatly help people with blindness. However, a lot still needs to be done in terms of the mass adoption of such products.

But visual impairments are not the only type of disability that AI technology can address. The possibilities for engineers in this sphere are practically limitless, and in most cases, the existing barriers and restrictions are just a question of time.

Want to learn more about how AI can make life easier for people with various impairments? Just stay with us: in our next article, we will talk more about the capabilities of artificial intelligence.


About the Author:

In his current position, Artem Pochechuev leads a team of talented engineers and oversees the development and implementation of data-driven solutions for Cortlex’s customers. He is passionate about using the latest technologies and techniques in data science to deliver innovative solutions that drive business value. Outside of work, Artem enjoys cooking, ice skating, playing the piano, and spending time with his family.
