Celebrities are most at risk of AI ‘voice cloning,’ experts say – the tips to avoid falling for ‘vishing’ scams


CELEBRITIES are most at risk of falling victim to impersonation attempts amid the surging popularity of AI voice cloning software.

However, lacking A-lister status isn’t enough to protect you.

A survey of 1,000 Americans ranked the public figures most at risk of AI voice cloning – and Gen Z respondents felt strongly about Donald Trump

Cybercriminals will find audio clips online and feed them into commercially available software to produce words and even full sentences in someone’s voice.

This process is known as voice cloning, and the result is commonly referred to as an audio deepfake.

The term “deepfake” was coined in 2017 to describe illicit images and videos that featured celebrities’ faces superimposed onto other bodies.

And it seems the rich and famous are in danger yet again.

A new study from Podcastle, an AI-powered podcasting platform, surveyed 1,000 Americans to glean their opinions on the celebrities most at risk of voice cloning.

Respondents believed Arnold Schwarzenegger was most at risk due to having the “most straightforward voice to replicate.”

A whopping 86% of those surveyed believe the former California governor’s “distinctive and instantly recognizable accent” puts him at risk.

Schwarzenegger was followed by Donald Trump, Kim Kardashian, Sylvester Stallone, and Christopher Walken.

Almost one in four (23%) reported that Kardashian has a “consistent tone and pitch,” which makes her voice easy to replicate.

Meanwhile, 39% said Trump’s voice is easy to replicate due to its familiarity from frequent media appearances.

Gen Z respondents deemed Trump most at risk, their opinion likely shaped by record-high political turmoil in the media landscape.

Celebrities and politicians alike have surfaced as the most common victims of deepfakes on social media.

A rash of manipulated images on X, formerly Twitter, prompted the platform to temporarily ban searches for Taylor Swift’s name in January.

And just last week, Elon Musk posted a deepfake video of presumptive Democratic presidential nominee Kamala Harris to X.

Former California governor Arnold Schwarzenegger came out on top, with survey participants crediting his “distinctive and instantly recognizable accent”

Deepfakes are no new phenomenon. The U.S. Department of Homeland Security acknowledged them in a 2019 report, claiming risk came not from the technology “but from people’s natural inclination to believe what they see.”

As a result, the report continued, “deepfakes and synthetic media do not need to be particularly advanced or believable in order to be effective in spreading mis/disinformation.”

While respondents in the study weren’t asked to comment on the potential misuse of AI voice cloning technology, the firm’s leaders have expressed trepidation.

Podcastle CEO Artavazd Yeritsyan told The U.S. Sun that he was well aware of the use of AI voice cloning technology by malicious actors.

Audio deepfakes are commonly used to portray celebrities saying something they didn’t say, but members of the public are at risk for a different reason

“Any technology that you introduce, there will be always people that use it for bad things and people that use it for good things,” Yeritsyan said.

Users can record and edit audio without ever leaving the Podcastle platform. This includes using AI to generate words or phrases they didn’t record.

Yeritsyan says the purpose of the platform is to “automate” the production process rather than “replace a human being.”

The platform has checks in place to prevent the creation of audio deepfakes, too.

A user must record specific sentences to confirm that a real person is talking, as opposed to a cybercriminal feeding clips of someone else’s voice into the system.

“Then this content is safely and securely stored and encrypted so nobody else can ever access your voice,” Yeritsyan explained.
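To make that check concrete, here is a minimal sketch of how a challenge-phrase (“liveness”) test might work. This is an illustration only, not Podcastle’s actual code: the phrase list, the function names, and the speech-to-text step that produces the transcript are all assumptions.

```python
import secrets
import string

# Hypothetical challenge sentences; a real system would generate a fresh,
# unpredictable sentence for every enrollment attempt.
CHALLENGES = [
    "Purple clouds drifted past the seventh window at noon",
    "My neighbor planted forty tulips beside the old canal",
    "Nine silver kites tangled above the empty harbor",
]

def normalize(text: str) -> str:
    # Lowercase and strip punctuation so minor formatting differences
    # between the challenge and the transcript do not cause a mismatch.
    return "".join(ch for ch in text.lower() if ch not in string.punctuation).strip()

def issue_challenge() -> str:
    # Pick a random sentence the user must read aloud on the spot.
    return secrets.choice(CHALLENGES)

def verify_enrollment(challenge: str, transcript: str) -> bool:
    # Accept the recording only if its transcript matches the sentence
    # issued moments earlier. A clip scraped from social media is very
    # unlikely to contain that exact sentence.
    return normalize(transcript) == normalize(challenge)

challenge = issue_challenge()
print(f"Please read this sentence aloud: {challenge}")

# The transcript is assumed to come from running speech-to-text on the
# fresh recording; here we simulate a correct reading.
transcript = challenge.lower() + "."

if verify_enrollment(challenge, transcript):
    print("Live speaker confirmed; the voice profile can be enrolled and encrypted.")
else:
    print("Recording does not match the challenge; enrollment rejected.")
```

The value of the random sentence is its freshness: audio scraped from a victim’s social media cannot anticipate a phrase that was only just generated.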

CEO Artavazd Yeritsyan believes voice cloning technology could play a role in accessibility and translation despite the dangers

While they are optimistic about potential future applications like text-to-speech accessibility functions, Podcastle’s top representatives are keenly aware of the risks.

“I think the biggest threats are phishing reasons, where a criminal asks for bank account information using the voice of a relative or friend,” Yeritsyan said.

All a cybercriminal needs is a few seconds of audio – commonly found on social media – to create a deepfake, which is then weaponized to dupe unsuspecting victims into surrendering their personal information over the phone.

Cybersecurity experts refer to the phenomenon as “voice phishing” or “vishing.”

Cybercriminals rely on AI voice cloning technology to target people in “voice phishing” attacks, impersonating a victim’s friends or relatives to gain their trust

A successful defense against this form of emergent cyberattack starts with understanding the signs of a scam.

Criminals often ask their victims to act urgently to correct fraudulent charges or confirm personal information. A forceful approach should raise red flags.

Always exercise caution: caller ID alone is not enough to verify a caller’s identity, since the number displayed can be spoofed.

Security experts recommend hanging up and dialing the organization or individual directly if you receive a call that you suspect may be fraudulent.

As a general tip, refrain from providing sensitive details like your passwords, credit card number or bank account information over the phone.

How are scammers finding my number?

Here Mackenzie Tatananni, science and technology reporter at The U.S. Sun, breaks down ways a scammer may get your information.

Scammers commonly get phone numbers from data breaches, which occur when a hacker accesses a private database – often those maintained by companies like service providers and employers.

This information may be shared and circulated online, including on the dark web, where there are forums dedicated to sharing leaked information.

Another common technique, known as wardialing, employs an automated system that targets specific area codes.

A recorded message will instruct the listener to enter sensitive information, like a card number and PIN.

There is also a far more harrowing possibility: your phone number could be listed online without your knowledge.

Data brokers are hungry to buy and sell your information. These companies gather information from various public sources online, including social media and public records.

Their primary goal is to build databases of people and use this information for tailored advertising and marketing.

Much of this information ends up on public record sites, which display information like your phone number, email, home address, and date of birth for anyone to see.

In the United States, these sites are legally required to remove your information if you request it.

Locate your profile and follow the opt-out instructions, but be warned – these sites do not make it easy, and many seem designed to frustrate you into abandoning the deregistration process.

For simplicity’s sake, you can also use a tool to purge your information from the Internet.

Norton offers one such service, the Privacy Monitor Assistant, which finds your information online and requests removal on your behalf.

It is also possible that your phone number is linked to a social media account and publicly displayed on your profile – this happens quite frequently with Facebook.

Be sure to review your privacy settings and confirm this information is hidden away from prying eyes.

Podcastle’s representatives anticipate that voice cloning technology will be used increasingly to boost productivity and automate tedious processes.

However, they understand much of the responsibility to curb bad actors lies with them.

“We want to be at the stage where we just don’t give people the ability to use it for the bad reasons,” Yeritsyan explained.

“I think most products should be regulated so these kinds of things don’t happen.”