SwissCognitive Guest Blogger: Artem Pochechuev, Head of Data Science at Cortlex – “How AI-powered tools can help people with hearing impairments”
According to WHO estimates, by 2050 one in every ten people, over 700 million in total, will live with disabling hearing loss. That is a striking figure. While some people lose their hearing over the course of their lives, others are born with hearing impairments and have no choice but to use sign language for communication.
Hearing loss and deafness often affect such aspects of people's lives as cognition, education, and employment, which can result in loneliness and complete social isolation. The history of electronic hearing aids began more than a century ago, in 1898, not long after Alexander Graham Bell introduced the telephone in 1876. However, even today these devices cannot fully solve the problems of people with hearing impairments. Modern hearing aids are far from perfect: they have many limitations and can cause considerable discomfort for their users.
Nevertheless, thanks to emerging technologies (and Artificial Intelligence occupies a leading position among them), we can change our approach to making our environment better for people with partial or full hearing loss.
In this article, we invite you to take a look at the new opportunities that AI opens up for addressing the difficulties that people with hearing impairments face every day.
AI tools: Use cases and examples
To begin with, hearing aid manufacturers have already started exploring the capabilities of AI and studying how it can make their devices more advanced. Such AI-powered hearing aids have a good chance of becoming game-changers.
Digital hearing aids usually have so-called modes or programs, such as a TV mode or a home program. These modes contain static settings that correspond to different environments. However, such settings work well only in highly standardized conditions and cannot match unique circumstances. AI used in hearing systems works differently: instead of relying on fixed modes, it adjusts the settings in real time based on the user's experience.
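As a rough illustration of the difference between static programs and adaptive behavior, the sketch below picks a gain profile from a crude, level-based guess about the environment. The thresholds, profile names, and parameters are all invented for illustration; real hearing aids adapt far more parameters with far more sophisticated models.

```python
import math

# Hypothetical gain profiles; real devices adapt many more parameters.
PROFILES = {
    "quiet": {"gain_db": 15, "noise_reduction": "low"},
    "speech": {"gain_db": 10, "noise_reduction": "medium"},
    "noisy": {"gain_db": 5, "noise_reduction": "high"},
}

def rms(frame):
    """Root-mean-square level of one audio frame (samples in [-1, 1])."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def classify_environment(frame, quiet_thresh=0.02, loud_thresh=0.2):
    """Very crude environment detector based on the overall level alone."""
    level = rms(frame)
    if level < quiet_thresh:
        return "quiet"
    if level < loud_thresh:
        return "speech"
    return "noisy"

def pick_profile(frame):
    """Select the settings to apply for the current frame."""
    return PROFILES[classify_environment(frame)]
```

An adaptive device would rerun this selection continuously, so the settings track the environment instead of waiting for the user to switch modes by hand.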
For example, when a user visits a crowded place with his or her spouse, AI-powered hearing aids enriched with noise-cancellation technologies can make their communication much more comfortable than it used to be without such tools. Such a device can identify the voice the user hears most often and prioritize it over the others, while suppressing surrounding noise.
Cutting-edge hearing aids may also include sound amplification tools. When somebody speaks too quietly or, for example, through a mask that muffles sound, an AI-powered device can detect the issue and amplify the sound in real time.
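A minimal sketch of the amplification idea, assuming audio arrives as frames of samples in [-1, 1]: quiet frames are boosted toward a target level, with the gain capped and the output clipped. The target level and gain cap are made-up numbers; real devices use much more refined dynamic range compression.

```python
import math

def rms(frame):
    """Root-mean-square level of one audio frame (samples in [-1, 1])."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def auto_gain(frame, target_rms=0.1, max_gain=8.0):
    """Boost a quiet frame toward a target level.

    The gain is capped at max_gain and the result is clipped
    so the output stays inside [-1, 1].
    """
    level = rms(frame)
    if level == 0:
        return frame  # silence: nothing to amplify
    gain = min(target_rms / level, max_gain)
    return [max(-1.0, min(1.0, s * gain)) for s in frame]
```

For example, a frame at half the target level comes out doubled, while a nearly silent frame only gets the capped maximum boost rather than an unbounded one.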
All this may seem futuristic, but such devices already exist. One of them is the Widex Moment Sheer, introduced in September 2022. Widex focuses on sound quality and uses AI and machine learning to design hearing modes based on users' typical environments.
But are AI-powered hearing aids the only way to help people with hearing impairments? And what can be offered to people with full hearing loss? It is time to look at solutions of other types.
Many people with hearing impairments are skilled lip readers: they can understand what others say by interpreting the movements of their faces, lips, and tongues. This skill is highly valuable, but due to various disabilities, including muteness, many of them cannot use natural speech and rely on sign language to express their thoughts. This becomes a barrier to synchronous communication when their partners cannot interpret signs and gestures. And how can communication be organized if a person is not good at lip reading, or if the conditions make lip movements impossible to read? This is where AI-powered sign language translators come in.
Microsoft's AI-powered Kinect Sign Language Translator is a solution that converts signs into spoken or written language and, vice versa, converts natural language into signs. To use such a tool, you need a computer and a Kinect camera that recognizes gestures and translates them in real time. A similar process takes place when a hearing person speaks: the system "listens" to the speech and then transforms the words into signs.
But Microsoft is not the only company behind a sign language translator; many startups are working on similar solutions. In 2018, the Netherlands-based startup GnoSys introduced an app that translates sign language into speech and written text in real time. The application relies on neural networks and computer vision to recognize sign language, and then smart algorithms transform the recognized signs into speech.
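To give a flavor of the recognition step, here is a toy nearest-template classifier over hand-keypoint coordinates. The two "signs" and their keypoint templates are entirely made up for this sketch; systems like the ones described above use deep neural networks trained on video, not hand-written templates.

```python
import math

# Invented templates: flattened (x, y) hand-keypoint coordinates for two signs.
# A real system would learn a model from thousands of labeled video frames.
TEMPLATES = {
    "hello": [0.0, 0.0, 0.5, 1.0, 1.0, 0.0],
    "thanks": [0.0, 1.0, 0.5, 0.0, 1.0, 1.0],
}

def distance(a, b):
    """Euclidean distance between two flattened keypoint vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recognize_sign(keypoints):
    """Return the template sign whose keypoints are closest to the input."""
    return min(TEMPLATES, key=lambda sign: distance(keypoints, TEMPLATES[sign]))
```

The point of the sketch is only the pipeline shape: a camera produces keypoints, a classifier maps them to a sign, and a text-to-speech stage can then voice the result.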
The above-mentioned tools can help a lot in face-to-face communication, which is highly important for people's socialization and their chances of getting a job. However, we also need to mention solutions that can revolutionize the online experience for people with deafness or other hearing impairments. How do these people usually watch films? With captions. But what can be done if there are no captions? AI real-time captioning and transcription services address this issue. Even more importantly, such services are useful not only for movie lovers: they can be applied during livestreaming, Zoom meetings, online lessons, and so on. Relying on real-time captioning tools is a great idea if you organize online events for a wide audience and want to ensure better inclusivity.
Verbit is one of the vendors providing such services. What makes these tools especially convenient is the possibility to integrate them directly into streaming or conferencing platforms such as YouTube, Twitch, or Zoom for a seamless experience. Such services are also popular among people watching videos in a non-native language or those who cannot use headphones. However, the community of people with hearing loss benefits from them most of all: for them, real-time captioning is not just a matter of comfort but a necessity.
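On the output side of such a service, the transcribed words arrive with timestamps and have to be grouped into caption cues. The sketch below formats (word, start-time) pairs into the common SRT subtitle format; the fixed cue length is an arbitrary simplification, not how any particular vendor segments captions.

```python
def to_srt(words, cue_seconds=3.0):
    """Group (word, start_seconds) pairs into numbered SRT caption cues."""
    def ts(t):
        # SRT timestamp: HH:MM:SS,mmm
        h, rem = divmod(t, 3600)
        m, s = divmod(rem, 60)
        return f"{int(h):02d}:{int(m):02d}:{int(s):02d},{int((s % 1) * 1000):03d}"

    cues, current, cue_start = [], [], None
    for text, start in words:
        if cue_start is None:
            cue_start = start
        if start - cue_start >= cue_seconds and current:
            # Close the current cue and start a new one.
            cues.append((cue_start, start, " ".join(current)))
            current, cue_start = [], start
        current.append(text)
    if current:
        cues.append((cue_start, cue_start + cue_seconds, " ".join(current)))

    lines = []
    for i, (start, end, text) in enumerate(cues, 1):
        lines.append(f"{i}\n{ts(start)} --> {ts(end)}\n{text}\n")
    return "\n".join(lines)
```

Feeding in three timed words, for example, yields two numbered cues with start and end timestamps that a video player can display directly.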
It is also crucial to mention that AI can increase safety for people with hearing loss, whose condition can affect their ability to react to emergencies. Consider driving, for example. Hearing impairments do not directly affect driving skills, but they prevent people from hearing important sounds such as the sirens of emergency vehicles. These sirens signal the need to react quickly and make way for fire trucks or ambulances. When a driver does not hear them, he or she cannot take any measures, which can lead to dangerous road situations. Engineer Jan Říha took note of these risks and developed a smart device dubbed PionEar. It relies on an audio classification algorithm that analyzes background noise and recognizes the sounds of emergency vehicles. When such sounds are detected, the driver is alerted with a visual cue.
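A toy version of siren detection can be sketched with a Fourier transform: a wailing siren shows up as a dominant tone that stays in a characteristic band but sweeps up and down, unlike a steady hum. The sample rate, band limits, and thresholds below are assumptions for illustration, not PionEar's actual algorithm.

```python
import numpy as np

SAMPLE_RATE = 8000            # Hz; assumed microphone sample rate
FRAME = 512                   # samples per analysis frame
SIREN_BAND = (500.0, 1500.0)  # rough wail range of many sirens (an assumption)

def dominant_freq(frame):
    """Dominant frequency of one frame via the magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / SAMPLE_RATE)
    return freqs[np.argmax(spectrum)]

def looks_like_siren(signal, min_sweep_hz=200.0):
    """Flag audio whose dominant tone stays in the siren band but sweeps.

    The sweep requirement separates a wailing siren from a steady
    in-band hum such as an engine or ventilation tone.
    """
    peaks = [dominant_freq(signal[i:i + FRAME])
             for i in range(0, len(signal) - FRAME, FRAME)]
    if not peaks:
        return False
    in_band = [f for f in peaks if SIREN_BAND[0] <= f <= SIREN_BAND[1]]
    if len(in_band) < 0.8 * len(peaks):
        return False
    return max(in_band) - min(in_band) >= min_sweep_hz
```

With a synthetic 600 to 1400 Hz sweep, this flags the siren-like signal while ignoring a constant 1000 Hz tone; a production system would pair a learned classifier with the in-car visual alert.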
However, to make our society more suitable for everyone, we do not always need to create something exclusively for a particular community. Sometimes it is enough to adapt something that was created for the general public to the needs of specific groups.
Virtual assistants with text-to-speech and speech-to-text capabilities, such as Siri, are among such examples. If a girl with a hearing impairment needs to make an appointment at a beauty salon, what options does she have? She can write a message, but what if the administrators have no time to read messages? She can ask a friend or sister to call, but that is not always possible, and it requires additional time and effort. With a virtual assistant, everything is easier: it is enough to activate Siri with a voice command or the Type to Siri mode and ask it to call the beauty salon.
Yes, at the moment, this functionality still requires enhancements. But we are here to make the world a better place to live for everyone by means of technology. Right?
The range of AI use cases we have considered in this blog post is a great demonstration of how modern tech solutions can help break down the barriers that used to exist (and still exist) for people with any type of disability, not only hearing impairments. Thanks to modern AI-powered tools, people will be able to integrate fully into society. And, perhaps even more importantly, society will become more inclusive for them.
In the series of our blog posts, we will continue talking about the ways AI can help us to reach such goals. Stay tuned!
About the Author:
In his current position, Artem Pochechuev leads a team of talented engineers and oversees the development and implementation of data-driven solutions for Cortlex's customers. He is passionate about using the latest technologies and techniques in data science to deliver innovative solutions that drive business value. Outside of work, Artem enjoys cooking, ice skating, playing the piano, and spending time with his family.
The post How AI-Powered Tools Can Help People With Hearing Impairments appeared first on SwissCognitive, World-Leading AI Network.