AI ‘can be manipulated’ as expert reveals sinister scheme that lets chatbots defraud you with a simple conversation


ARTIFICIAL intelligence chatbots could be “manipulated” by cyber-criminals to defraud you.

That’s the stark warning from a leading security expert who says that you should be very cautious when speaking to chatbots.


AI is extremely powerful and can change your life for the better – but it has big risks

Specifically, avoid handing over any personal information to online chatbots if you can help it.

Chatbots like OpenAI‘s ChatGPT, Google Gemini, and Microsoft‘s Copilot are used by tens of millions of people around the world.

And there are dozens of other alternatives, each capable of improving your life through humanlike conversation.

But cybersecurity expert Simon Newman told The U.S. Sun that chatbots also pose a hidden danger.

“The technology used in chat bots is improving rapidly,” said Simon, an International Cyber Expo Advisory Council Member and the CEO of the Cyber Resilience Centre for London.

“But as we have seen, they can sometimes be manipulated to give false information.

“And they can often be very convincing in the answers they give!”

TECH PAUSE

For a start, artificial intelligence chatbots might be confusing for people who aren’t tech-savvy.

It’s easy to forget – even if you’re a computer whiz – that you’re talking to a robot.

And that can lead to difficult situations, Simon told us.

“Many companies, including most banks, are replacing human contact centres with online chat bots that have the potential to improve the customer experience while being a big money saver,” Simon explained.

“But, these bots lack emotional intelligence which means they can answer in ways that may be insensitive and sometimes rude.

“This is a particular challenge for people suffering from mental ill-health, let alone the older generation who are used to speaking to a person on the other end of a phone line.”

What is ChatGPT?

ChatGPT is a new artificial intelligence tool

ChatGPT, which was launched in November 2022, was created by San Francisco-based startup OpenAI, an AI research firm.

It’s part of a new generation of AI systems.

ChatGPT is a language model that can produce text.

It can converse, generate readable text on demand and produce images and video based on what has been learned from a vast database of digital books, online writings and other media.

ChatGPT essentially works like a written dialogue between the AI system and the person asking it questions.

GPT stands for Generative Pre-Trained Transformer and describes the type of model that can create AI-generated content.

If you give it a prompt, for example asking it to “write a short poem about flowers,” it will create a chunk of text based on that request.

ChatGPT can also hold conversations and even learn from things you’ve said.

It can handle very complicated prompts and is even being used by businesses to help with work.

But note that it might not always tell you the truth.

“ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness,” OpenAI CEO Sam Altman said in 2022.

For instance, chatbots have already “mastered deception”.

And they can learn to “cheat us” even when they haven’t been asked to.

The U.S. Sun worked with cyber-experts to reveal “subtle signs of AI manipulation” in conversations that you should look for.

BAD CHAT


But the big danger isn’t a chatbot misspeaking – it’s cyber-criminals compromising the AI to target you.

A criminal might be able to break into the chatbot itself, or trick you into downloading a hacked AI that is set up for malicious purposes.

And this chatbot can then work to extract your personal info for the criminal’s gain.

“As with any online service, it’s important for people to take care about what information they provide to a chatbot,” Simon warned.

AI ROMANCE SCAMS – BEWARE!

Watch out for criminals using AI chatbots to hoodwink you…

The U.S. Sun recently revealed the dangers of AI romance scam bots – here’s what you need to know:

AI chatbots are being used to scam people looking for romance online. These chatbots are designed to mimic human conversation and can be difficult to spot.

However, there are some warning signs that can help you identify them.

For example, if the chatbot responds too quickly and with generic answers, it’s likely not a real person.

Another clue is if the chatbot tries to move the conversation off the dating platform and onto a different app or website.

Additionally, if the chatbot asks for personal information or money, it’s definitely a scam.

It’s important to stay vigilant and use caution when interacting with strangers online, especially when it comes to matters of the heart.

If something seems too good to be true, it probably is.

Be skeptical of anyone who seems too perfect or too eager to move the relationship forward.

By being aware of these warning signs, you can protect yourself from falling victim to AI chatbot scams.

“They are not immune to being hacked by cyber-criminals.

“And potentially can be programmed to encourage users to share sensitive personal information, which can then be used to commit fraud.”

The U.S. Sun recently revealed the things you must never say to AI chatbots.

And be very wary about believing what chatbots tell you.

A security expert recently told us that we need to adopt a “new way of life” where we double- and even triple-check everything we see online.