How businesses should respond to the EU’s Artificial Intelligence Act

  • The EU’s Artificial Intelligence Act aims to regulate AI technologies.

  • Failure to plan could leave some businesses at risk of non-compliance.

  • Proving AI produces measurable value in your sector will be key.

Copyright: weforum.org – “How businesses should respond to the EU’s Artificial Intelligence Act”


The EU strikes again with a new set of regulations, this time taking aim at artificial intelligence (AI) and the variety of risks that come with its adoption across society. Like its sibling the General Data Protection Regulation (GDPR), the Artificial Intelligence Act (AIA) actually has teeth, with fines rising to €30 million or 6% of global revenue, whichever is higher. Is the answer to delete all your AI systems and reduce your risk to zero, or to keep using AI for a competitive edge? Can you manage the recurring costs required to maintain compliance with the AIA even as the technology itself boosts your bottom line?
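To put that fine in perspective, here is a minimal back-of-the-envelope sketch. It assumes the draft Act's framing of the penalty cap as the greater of a fixed amount or a revenue percentage (mirroring the GDPR's structure); the function name and the revenue figure in the example are illustrative only:

```python
def max_aia_fine_eur(global_revenue_eur: float,
                     fixed_cap_eur: float = 30_000_000,
                     revenue_pct: float = 0.06) -> float:
    """Worst-case AIA exposure: the greater of a fixed cap or a
    percentage of global revenue (assumed 'whichever is higher',
    as in the GDPR's analogous penalty regime)."""
    return max(fixed_cap_eur, revenue_pct * global_revenue_eur)

# Illustrative: a firm with €2bn in global revenue
print(f"€{max_aia_fine_eur(2_000_000_000):,.0f}")  # €120,000,000
```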

Take the famous UK pub chain JD Wetherspoon, founded in 1979 by British businessman Tim Martin, an outspoken critic of the EU and a Brexit campaigner. Its response to the personally identifiable information (PII) protections introduced by the GDPR was to delete its entire CRM database in 2017. Perhaps a drastic and knee-jerk reaction, but with the EU imposing GDPR-related fines of up to €20 million, or 4% of global revenue, my guess is JD Wetherspoon didn't fancy taking on the risk, or investing a similar sum to assure its GDPR compliance.

What is the aim of the Artificial Intelligence Act?

AI as a technology isn't the problem; it is the human creators and business practitioners behind it who require intervention. The EU aims to create a globally recognized "factory" for producing safe, trusted, and ethical AI outcomes that respect existing laws on fundamental human rights and EU values. To enable this ethical AI mission, the AIA primarily aims to do two things:

  • Identify AI systems that present unacceptable risk (e.g., social scoring by governments, or toys using voice commands that encourage dangerous behaviour).

  • Apply strict obligations to AI systems that present high risk (e.g., CV-sorting software for recruitment procedures, or credit scoring that denies citizens the opportunity to obtain a loan).
Businesses that choose to build high-risk AI systems will be legally required to meet a defined list of criteria before those systems can be placed on the single market. This means designing AI systems with transparency, explainability, and ethics embedded at their core, and having monitoring, guardrails, and governance capabilities already in place to ensure continued ethical compliance.
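To make the two tiers concrete, the sketch below triages the article's example use cases. The tier names, the mapping, and the `triage` helper are purely illustrative simplifications, not the Act's official taxonomy (which also defines lower, limited- and minimal-risk tiers):

```python
from enum import Enum

class AIARiskTier(Enum):
    """Simplified risk tiers drawn from the two aims above (illustrative)."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations before entering the single market"

# Hypothetical mapping of the article's examples to tiers.
EXAMPLE_USE_CASES = {
    "government social scoring": AIARiskTier.UNACCEPTABLE,
    "voice-command toys encouraging dangerous behaviour": AIARiskTier.UNACCEPTABLE,
    "CV-sorting software for recruitment": AIARiskTier.HIGH,
    "credit scoring for loan decisions": AIARiskTier.HIGH,
}

def triage(use_case: str) -> str:
    """Report the compliance consequence for a known example use case."""
    tier = EXAMPLE_USE_CASES.get(use_case)
    if tier is None:
        return f"{use_case}: not in this illustrative list; assess against the full Act"
    return f"{use_case}: {tier.name} risk -> {tier.value}"

for case in EXAMPLE_USE_CASES:
    print(triage(case))
```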

Many organizations have already implemented data protection and cybersecurity frameworks that lay similar groundwork for AIA compliance, which makes meeting the Act's requirements far less daunting than starting from scratch.

Source: SwissCognitive