Security Of AI Products: How To Address The Existing Risks

Posted by
Check your BMI

Today there are a lot of talks about how Artificial Intelligence and other related technologies can be applied to address cybersecurity risks. But amid all these discussions, quite often people forget that the security of AI solutions themselves also requires our attention. And that’s exactly what we’d like to cover in this article.

 

SwissCognitive Guest Blogger: Artem Pochechuev, Head of Data and AI at Sigli – “Security Of AI Products: How To Address The Existing Risks”


 

SwissCognitive_Logo_RGB
toonsbymoonlight
AI-related risks and threats

Once appeared Generative AI tools had a great “Wow” effect for a wide audience. However, upon closer inspection, users started noticing that the quality of content produced by LLMs was not always as high as they expected. The content is often biased and stereotyped and here we do not even mention that it can have a lot of factual mistakes. And while in some cases, it is just an accidental misunderstanding, in others, it can be a well-planned operation.

Let’s have a closer look at the potential threats related to the use of AI tools.

  • Data training. What we get from AI is a reflection of what we have fed AI models with. The “Garbage in, garbage out” principle works perfectly well in this situation. If the team behind a particular LLM hasn’t trained this model on the data that should be used for the set task, AI won’t be able to provide an adequate response.

Also, there can be cases of so-called data poisoning attacks. They presuppose tampering with the data that an AI model is trained on in order to produce some specific outcomes that can be desired by bad players (for example, biased or incorrect info about some people or events).

  • Prompt injections. This type of attack is quite similar to those related to the manipulations with data. But in this case, attackers work with prompts. They create inputs that can make AI models behave in an unintended way. As a result, they can push AI to reveal some sensitive data or produce potentially dangerous or offensive content.
  • Insecure data storage. This point is relevant not only to AI tools but in general to the data that we work with today. The problem is that quite often the storage, as well as the ways of data processing, are not secure enough. Due to the lack of such basic things as encryption and proper access controls, sensitive data can become easy prey for hackers.
  • AI misuse. Generative AI can be a very dangerous tool in the hands of malicious actors. Misleading information and deepfakes created with the help of AI can look rather realistic which greatly contributes to the spread of disinformation. Already today there are a lot of cases of using AI to discredit political opponents or undermine one’s reputation.

But one of the most alarming issues here is that the world still doesn’t have reliable AI regulations. There are no strict rules that would be used to bring to justice for the misuse of AI and infringement of intellectual property rights.

One of the latest controversial cases (but not the only one) was the situation with Adobe Stock. The stock photo service was selling AI-generated images “inspired” by the works of the renowned American landscape photographer and environmentalist Ansel Adams. According to the Ansel Adams estate, it was already not the first case when references to Adams’s work appeared in AI-generated listings.

By the time of writing this article, the mentioned AI-generated Ansel Adams-style images had already been deleted from Adobe Stock. However, in a global context, such solutions can’t directly address the problem itself.

How can we deal with the existing security issues?

It’s a well-known fact that AI/ML tools are good at detecting patterns and further defining deviations from these patterns that can be signs of suspicious behaviors, fraudulent activities, possible data leaks, etc.

That’s why such technologies are widely applied for anomaly detection, behavioral analytics, as well as real-time monitoring in various types of solutions. Nevertheless, these tools can cope only with point tasks. As a result, 100% protection is, unfortunately, not guaranteed.

It is also impossible to create a tool that will fully protect people from potential risks associated with the use of AI. For example, we can only make it more difficult for users to get undesirable or potentially harmful information during their interactions with an LLM. But we can’t fully eliminate the risks of accessing such info.

AI hallucinations: Can we fully rely on what AI tells us?

Have you ever noticed that during your communication with ChatGPT (or any other GenAI tool), it offered you something absolutely crazy that had nothing in common with reality? For example, it could mention something that didn’t exist or something that was irrelevant. This notion is known as AI hallucinations.

It happens so because AI itself doesn’t understand what it needs to tell you. It lacks reasoning because it is trained only to predict the next word/character that could be valuable in this or that case.

But sometimes AI can go the wrong way. It can occur because of such factors as:

  • Low-quality, insufficient, or obsolete data;
  • Unclear prompts (especially when you use slang expressions or idioms);
  • Adversarial attacks (these are prompts that are designed to purposely confuse AI).

Unfortunately, AI hallucinations are a significant ethical concern. First of all, they can seriously mislead people. Let’s imagine a very simple situation where a student uses ChatGPT for learning and needs to understand a new topic. What if AI starts hallucinating?

Secondly,  it can cause reputational harm to some companies or individuals. There are already known cases when AI “accused” known people of bribery or harassment while in reality, they had nothing in common with those cases.

Moreover, AI hallucinations can represent safety risks, especially when it comes to such sensitive areas as security or healthcare. Today chatbots that can analyze your symptoms and guess what health problems you have are gaining popularity. Nevertheless, with the risk of incorrect diagnoses, they can bring more harm than benefits. With erroneously chosen operational commands in case of dangerous situations posing risks to health and life, the consequences are unpredictable.

Is it possible to address such issues? Yes, but not with 100% accuracy.

Here are the most important factors that can help to minimize the risks of AI hallucinations:

  • High-quality training data;
  • The use of data templates;
  • Datasets restrictions;
  • Specific prompts;
  • Human fact-checking.

Actually, verification of what AI has offered you is one of the fundamental things to do, especially when you need to publish the generated content for a wide audience or if AI is applied in sensitive areas.

AI vs Humanity: Will we lose this battle?

While talking about the threats that are associated with the use of AI, it’s impossible not to mention one of the most serious concerns voiced by some people.

Will AI really take the world one day? Maybe governments and businesses should stop investing in this technology right now in order to avoid disastrous consequences in the future? Quick answers to both of these questions should be “No”. AI doesn’t want to rule the world (in reality, it can’t “want” at all). And the investments in its development definitely have more pros than cons.

There are three types of AI:

  • Narrow AI with a very limited range of abilities;
  • General AI which is on par with human capabilities;
  • Super AI which can surpass human intelligence.

LLMs, together with, for example, well-known image recognition systems or predictive maintenance models, are included in the first group of narrow AI solutions. And even this category is not explored well enough. And even the results of the work of narrow AI tools (not to mention general and super AI) are still not perfect. Today engineers know how neurons work and how to reproduce this process. Nevertheless, other processes and capacities of the intelligence are yet to be studied.

We should not humanize AI. AI itself can’t think and can’t make decisions. Unlike a human, it has no intentions. To let AI tools have intentions, first of all, it is still necessary to understand what it is, and why and how it appears.

AI can be compared with a Genie in a bottle or oil lamp that can’t survive without it. And AI can’t survive independently. Any LLM is sitting absolutely quietly till the moment you ask it to offer your ideas for lunch or a structure for your next article. AI tools can fulfill the set tasks that they were trained for. And nothing more than that.

Moreover, an AI solution is just a solution. Decisions are made by a human. And at the moment, we do not have any reliable predictions for the timelines when (and whether at all) AI will be able to make decisions on its own.  It’s worth mentioning that we can’t talk about 100% automation of all the processes which is also a serious barrier for independent AI functioning.

AI can generate thousands of ideas or write poems. But it is not able to create absolutely new genres of music or art, as a human can.

In other words, people should leave all their fears aside. AI is not going to rebel against our supremacy.

AI: Good or bad?

With all the AI-related concerns that we have discussed in this article, is it still worth relying on this technology? Definitely yes, regardless of all the issues (but with diligent attention and caution, of course).

In the previously published articles, we talked a lot about the value of AI for education and for expanding possibilities for people with disabilities. And it is just a small part of its use cases. AI has an enormous power to transform a lot of spheres and processes around us. But it can make our lives much better only with the right approach to its development and application.


About the Author:

Artem PochechuevIn his current position, Artem Pochechuev leads a team of talented engineers. Oversees the development and implementation of data-driven solutions for Sigli’s customers. He is passionate about using the latest technologies and techniques in data science to deliver innovative solutions that drive business value. Outside of work, Artem enjoys cooking, ice-skating, playing piano, and spending time with his family.

Der Beitrag Security Of AI Products: How To Address The Existing Risks erschien zuerst auf SwissCognitive | AI Ventures, Advisory & Research.