Margrethe Vestager, Executive Vice-President for a Europe fit for the Digital Age, said: “On Artificial Intelligence, trust is a must, not a nice to have. With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted. By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way. Future-proof and innovation-friendly, our rules will intervene where strictly needed: when the safety and fundamental rights of EU citizens are at stake.”
The first of its kind, the proposal focuses on a human-centric approach to AI, with draft regulations aimed specifically at the development and use of AI within the European Union. While the guidance has been welcomed, it will have implications for organizations around the world, which must now start considering compliance — something that, according to a recent McKinsey & Co survey, few are ready for.
The proposed regulations follow a risk-based approach, classifying AI systems into three risk categories: unacceptable, high-risk, and limited and minimal-risk. Limited and minimal-risk AI systems include many of the AI applications in use today, such as chatbots, AI-enabled video games, spam filters and inventory management tools. The proposal prohibits the following AI systems, which are considered a threat to the safety, livelihoods and rights of people: (a) systems or applications that manipulate human behaviour to circumvent users' free will; and (b) systems that allow social scoring by governments. The high-risk category is perhaps the most complicated, and under the proposed regulation the European Union would review, and potentially update, the list of systems in this category on an annual basis. High-risk AI systems include many relating to governance, from critical infrastructure to education, border control and law enforcement, with the use of remote biometric identification subject to strict requirements. High-risk systems more directly relevant to business organisations include those used to evaluate consumer creditworthiness and to assist with recruiting or managing employees.
The draft regulation proposes different requirements for AI systems depending on their level of risk. Considering how organizations can prepare for the EU and future regulations, McKinsey & Co recommend “as a foundation, an organization will need a few critical components: a holistic strategy for prioritizing the role AI will play within the organization; clear reporting structures that allow for multiple checks of the AI system before it goes live; and finally—because many AI systems process sensitive personal data—robust data-privacy and cybersecurity risk-management protocols.”
The key, as with GDPR, is to be prepared. Under the proposed EU regulation, organisations will be required to conduct conformity assessments for all high-risk AI systems. However, as McKinsey & Co establish, “Rather than thinking about conformity assessments as a box to be checked for EU-type regulations, organizations should see them as enablers for effectively managing and mitigating the various risks associated with AI.”
—
Source: ‘What the draft European Union AI regulations mean for business’ by Misha Benjamin, Kevin Buehler, Rachel Dooley, and Peter Zipparo (McKinsey & Co, 10 August 2021)