How to manage artificial intelligence risk and security: Focus on five priorities


In most organizations, artificial intelligence models are “black boxes”: only data scientists understand exactly what they do. That opacity can create significant risk for organizations.

 

Copyright: siliconangle.com – “How to manage artificial intelligence risk and security: Focus on five priorities”


 

Large, sensitive datasets are often used to train AI models, creating privacy and data breach risks. The use of AI increases an organization’s threat vectors and broadens its attack surface. AI further creates new opportunities for benign mistakes that adversely affect model and business outcomes.

Risks that are not understood cannot be mitigated. A recent Gartner survey of chief information security officers reveals that most organizations have not considered the new security and business risks posed by AI or the new controls they must institute to mitigate those risks. AI demands new types of risk and security management measures and a framework for mitigation.
Here are the top five priorities that security and risk leaders should focus on to effectively manage AI risk and security within their organizations:

1. Capture the extent of AI exposure

Machine learning models are opaque to most users, and unlike conventional software systems, their inner workings are often unclear even to the most skilled experts. Data scientists and model developers generally understand what their machine learning models are trying to do, but they cannot always decipher the internal structure or the algorithmic means by which the models process data.

This lack of understandability severely limits an organization’s ability to manage AI risk. The first step in AI risk management is to inventory all AI models used in the organization, whether they are a component of third-party software, developed in-house or accessed via software-as-a-service applications. This should include identifying interdependencies among various models. Then rank the models based on operational impact, with the idea that risk management controls can be applied over time based on the priorities identified.
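As a rough illustration (not part of the original article), an inventory can start as a structured record per model plus a ranking by operational impact. The sketch below is a minimal, hypothetical example; the field names and the 1–5 impact scale are assumptions, not a standard.

```python
from dataclasses import dataclass, field

# Hypothetical inventory entry for one AI model; fields and scoring are
# illustrative assumptions, not a prescribed schema.
@dataclass
class ModelRecord:
    name: str
    owner: str                      # accountable team or person
    source: str                     # "in-house", "third-party" or "SaaS"
    business_process: str           # what the model supports
    depends_on: list[str] = field(default_factory=list)  # upstream models or data feeds
    operational_impact: int = 1     # 1 (low) .. 5 (critical), assessed with stakeholders

def rank_by_impact(inventory: list[ModelRecord]) -> list[ModelRecord]:
    """Order models so the highest-impact ones receive controls first."""
    return sorted(inventory, key=lambda m: m.operational_impact, reverse=True)

inventory = [
    ModelRecord("credit-scoring", "risk-team", "in-house", "loan approvals",
                depends_on=["income-estimator"], operational_impact=5),
    ModelRecord("email-routing", "it-ops", "SaaS", "helpdesk triage",
                operational_impact=2),
]

for model in rank_by_impact(inventory):
    print(model.operational_impact, model.name, model.depends_on)
```

Even a simple register like this makes interdependencies visible and gives risk managers a defensible order in which to apply controls.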

Once models are inventoried, the next step is to make them as explainable or interpretable as possible. “Explainability” means the ability to produce details, reasons or interpretations that clarify a model’s functioning for a specific audience. This will give risk and security managers context to manage and mitigate business, social, liability and security risks posed by model outcomes.
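One way to approach this, sketched below under assumptions not stated in the article, is to apply a model-agnostic technique such as permutation importance, which reports how strongly each input drives a model’s predictions. The dataset and model here are synthetic placeholders; any fitted scikit-learn-style estimator could be examined the same way.

```python
# Minimal explainability sketch using permutation importance (model-agnostic).
# The data and classifier are synthetic stand-ins for a production model.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test score drops:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```

Output like this gives risk and security managers a plain-language starting point for discussing which inputs a model depends on and where biased or sensitive data could skew outcomes.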

2. Drive awareness through an AI risk education campaign

Staff awareness is a critical component of AI risk management. First, get all participants, including the CISO, the chief privacy officer, the chief data officer and the legal and compliance officers, on board, and recalibrate their thinking about AI. They should understand that AI is not “like any other app” – it poses unique risks and requires specific controls to mitigate them. Then, go to the business stakeholders to expand awareness of the AI risks that you need to manage.

Together with these stakeholders, identify the best way to build AI knowledge across teams and over time. For example, see if you can add a course on fundamental AI concepts to the enterprise’s learning management system. Collaborate with application and data security counterparts to help foster AI knowledge among all organizational constituents.[…]

Read more: www.siliconangle.com

The post How to manage artificial intelligence risk and security: Focus on five priorities appeared first on SwissCognitive, World-Leading AI Network.

Source: SwissCognitive