How Hackers Are Using AI to Subvert Facial Recognition Technology – Generative Adversarial Networks (GAN)

Artificial intelligence and facial recognition are increasingly being used in security measures. However, recent advancements in generative adversarial networks show that even facial recognition AI can be fooled. Without increased cybersecurity, AI in biometrics could become a major security threat.

SwissCognitive Guest Blogger: Zachary Amos – “How Hackers Are Using AI to Subvert Facial Recognition Technology – Generative Adversarial Networks (GAN)”


Most people have used facial recognition technology before but may not understand its potential repercussions. It seems secure and can make life easier, but the convenience has downsides. Technology has progressed to the point where hackers can use AI against facial recognition systems to commit biometric identity theft.

What Is a Generative Adversarial Network?

A generative adversarial network (GAN) uses two competing neural network models to create fake data that appears real. The generator model learns to produce new data, while the discriminator model attempts to determine which samples are real and which are fake. The two train continuously through unsupervised learning, each improving in response to the other.

The process utilizes artificial intelligence (AI) and machine learning so that both models learn from each other: the generator keeps improving until it creates something the discriminator classifies as real.
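
To make that adversarial loop concrete, here is a minimal GAN training sketch in PyTorch. It uses toy two-dimensional data instead of face images, and the layer sizes, learning rates and toy data distribution are illustrative assumptions rather than a real face-generation setup:

```python
# Minimal GAN sketch: a generator and a discriminator trained against
# each other on toy 2-D data (an assumption for brevity; real face GANs
# use deep convolutional networks and large image datasets).
import torch
import torch.nn as nn

latent_dim = 8

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, 2),                  # outputs a fake "data point"
)
discriminator = nn.Sequential(
    nn.Linear(2, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),    # probability the input is real
)

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 2) * 0.5 + 2.0   # stand-in for real samples
    fake = generator(torch.randn(64, latent_dim))

    # Discriminator step: label real data 1 and generated data 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator output 1 for fakes.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

Each round, the discriminator gets better at flagging fakes and the generator gets better at hiding them, which is exactly the dynamic described above.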

How Do GANs Connect to Facial Recognition?

Many people have used facial recognition technology before because most phones can unlock with it. In fact, around 73% of people feel comfortable sharing their biometric data, partly because they've grown used to it in everyday situations. Systems such as Apple's Face ID scan the face with over 30,000 infrared dots invisible to the naked eye and store the resulting face map for future comparison. The technology references that face map whenever it needs to confirm identity.
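
As a rough sketch of what happens at unlock time, the stored face map can be thought of as an embedding vector that each new scan is compared against. The 128-dimension size, the cosine-similarity measure and the 0.8 threshold below are illustrative assumptions, not any vendor's actual values:

```python
# Sketch of face-map verification: compare a stored enrollment embedding
# with the embedding of a fresh scan. All values here are illustrative.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
enrolled = rng.normal(size=128)                 # face map stored at setup
new_scan = enrolled + rng.normal(scale=0.1, size=128)  # same face, rescanned

THRESHOLD = 0.8   # hypothetical match threshold
if cosine_similarity(enrolled, new_scan) >= THRESHOLD:
    print("match: unlock")
else:
    print("no match: reject")
```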

It's widely accepted and used worldwide, even though some disagree with it. For example, law enforcement agencies in London plan to add facial recognition cameras throughout the city despite pushback from residents, who have raised questions about how the biometric data is stored, citing privacy concerns.

People may be correct to assume their privacy is at risk. Many think their faces are entirely unique, much like their fingerprints. While that’s true to an extent, GANs can trick facial recognition technology into thinking someone else is you.

How Are Hackers Tricking Facial Recognition With GANs?

AI can search through a database of faces to learn how to generate a highly realistic image, subtly morphing the real people in the photos until it creates something new. The result isn't instantly recognizable as fake to humans, so it's no surprise software often can't tell the difference either.

Since GANs pit machine learning models against each other, they eventually produce something capable of tricking other machine learning models. For example, a generator will keep attempting to produce a picture of a face until it looks realistic. The first few attempts may look odd, with facial features in the wrong places, but the model learns from past attempts and eventually creates pictures that look like actual humans.
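
The "morphing" step typically works through latent-space interpolation: points between two latent codes decode to blends of two generated faces. Here is a minimal sketch; the stand-in linear generator is an assumption for brevity, and a real morph would use a GAN generator trained on face photos:

```python
# Latent-space interpolation sketch: decode points on the line between
# two latent codes to get a gradual blend of two generated faces.
import torch
import torch.nn as nn

def interpolate(generator, z_a, z_b, steps=8):
    """Decode evenly spaced blends of two latent vectors."""
    with torch.no_grad():
        return [generator((1 - t) * z_a + t * z_b)
                for t in torch.linspace(0.0, 1.0, steps)]

# Stand-in generator for demonstration only; a real attack would use a
# generator trained on face images, like the sketch earlier in the article.
generator = nn.Linear(8, 2)
z_a, z_b = torch.randn(1, 8), torch.randn(1, 8)
frames = interpolate(generator, z_a, z_b)  # frames[0] ≈ face A ... face B
```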

It might seem like a distant issue, but there are real-world repercussions. For example, hackers might target airports because they use facial recognition technology to identify individuals on the no-fly list. The screening seems secure, but it can be fooled.

Although the National Institute of Standards and Technology claims its biometric tool can scan a passenger's face once and be 99.5% or more accurate, a result can look correct when it isn't. For example, researchers at a cybersecurity firm successfully tricked an airport's facial recognition system using GANs.

They repeatedly trained the neural networks to create a single fake face from a combination of real faces. The resulting image looked like one person to the naked eye and someone else to the system. Even though the individual was on the no-fly list, the system matched them to an entirely different person in the passport database and allowed them to board.
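
A toy way to see why one morphed face can satisfy two different checks: in embedding space, a blend of two people's face vectors can sit close enough to both to clear the match threshold. The dimensions, threshold and averaging below are illustrative assumptions, not the researchers' actual data:

```python
# Toy morph-attack illustration in embedding space: the blended vector
# matches both source identities, even though they don't match each other.
import numpy as np

def cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
person_a = rng.normal(size=128)    # e.g., the face on the no-fly list
person_b = rng.normal(size=128)    # e.g., the face in the passport database
morph = (person_a + person_b) / 2  # embedding of the blended face

THRESHOLD = 0.5                    # hypothetical match threshold
print(cos(person_a, person_b) >= THRESHOLD)  # False: two distinct people
print(cos(morph, person_a) >= THRESHOLD)     # True: morph matches person A
print(cos(morph, person_b) >= THRESHOLD)     # True: morph matches person B
```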

Why Does It Matter That AI Is Subverting Facial Recognition?

Many assume biometric technology is secure because it's tied to their appearance, but GANs have shown that isn't the case. Essentially, hackers can commit identity theft by tricking systems into thinking they're someone else. Not only is this a significant security issue, but it also harms the individuals whose biometric data is misused.

Although such an attack is challenging to accomplish because it takes time and computational resources, it's not impossible for determined cybercriminals. As more people grow comfortable with facial recognition technology, biometric identity theft may become a widespread problem.

AI Is Subverting Facial Recognition Technology

Hackers can use GANs to pose as someone else by tricking facial recognition technology. While some have privacy concerns over the widespread use and storage of biometric data, others don't mind because it makes their lives easier. No technology is inherently bad, but combining AI and biometrics could become a serious problem without increased security measures.


About the Author:

Zachary Amos is the Features Editor at ReHack, where he writes about artificial intelligence, cybersecurity and other technology-related topics.
