The world is calling for AI regulation - Europe is the first to take up the challenge


Author: Matilde Castleberry

As anticipated in our previous article, AI, along with emerging knowledge technologies, poses serious challenges to our contemporary societies and generates democracy-related dilemmas. Today we will be analysing what promises to become the world’s first set of rules on AI, put forward by the European Commission. The European Union proposed its first regulatory framework for AI in 2021, recognising both the benefits and the threats of this technology and classifying AI systems by risk level and the corresponding degree of regulation required.

This classification, part of the EU’s digital strategy, also underlines the threats to fundamental rights and democratic processes. From opportunities for businesses and public services to transparency challenges and threats to fundamental rights and democracy, AI is pervasive in all aspects of our lives and must be regulated.

The KT4D project focuses on the consequences of emerging technologies for democracy and civic participation, working with and on behalf of Europe. The EU-funded project contributes to the goal of identifying and proposing collaborative tools and solutions for these new challenges. We have published a Factsheet in our Zenodo community that previews the research insights to be covered by the upcoming Social Risk Toolkit, which addresses the challenges and benefits arising from emerging technologies.

Analysing the AI Act

The AI Act is structured in a proportional and differentiated way. The European Parliament recognised the different purposes, and the consequent implications, of implementing AI in our lives. For this reason, it has divided technologies and related risks into different categories. While our recent policy brief advocates for also taking culture into account when analysing risk, the current approach is as follows.

AI systems posing unacceptable risks include those used for cognitive behavioural manipulation, social scoring, and real-time remote biometric identification. These systems will be banned, although exceptions for law enforcement will exist, with stringent limitations on the use of real-time remote biometric identification and approval required for post-identification systems in prosecuting serious crimes.

AI systems posing risks to safety or fundamental rights will be deemed high risk and categorised into two groups: those integrated into products governed by EU product safety legislation, such as toys, aviation, automobiles, medical devices, and elevators; and those designated for specific domains, including critical infrastructure management, education, employment, service access, law enforcement, migration control, and legal interpretation, which must be registered in an EU database. All high-risk AI systems must undergo an assessment before market entry and continuously throughout their lifecycle.

Limited risk AI systems must adhere to basic transparency standards so that users can make informed decisions. Users should be made aware when they are interacting with AI and given the choice to continue using the application afterwards. This category encompasses AI systems that generate or manipulate image, audio, or video content, such as deepfakes.

Minimal or no risk systems, which account for most of the AI systems currently in use in Europe, face the lightest obligations. Generative AI, by contrast, must meet transparency requirements: disclosing that content was generated by AI, preventing the generation of illegal content, and publishing summaries of copyrighted data used in training. High-impact general-purpose AI models, such as GPT-4, require thorough evaluations, with serious incidents reported to the European Commission to mitigate systemic risks.

The AI Act’s focus on democracy

AI has the potential to strengthen democracy by using data-driven scrutiny to combat disinformation and cyber attacks, while ensuring access to quality information. It can also promote diversity and openness, for example by reducing bias in hiring decisions through analytical data. To harness and safeguard this beneficial potential, the European Commission has placed AI systems that represent threats to democracy, including those employed to sway election results and voter conduct, in the high-risk category.

The Act employs a risk-based approach, requiring AI systems to meet various conditions before being sold or deployed, particularly in sensitive areas like critical infrastructure and access to education and employment opportunities. The Act also regulates generative AI, exemplified by ChatGPT, recognising the importance of open-source contributions while ensuring compliance with documentation standards. However, challenges remain in accommodating open-source developers within the regulatory framework.

The Act, agreed upon with member states in December, was approved by the Parliament on March 13, 2024, and is expected to be formally adopted in April 2024. In the meantime, however, other countries around the world have started developing their own sets of regulatory measures.

The EU AI Act’s next steps and echoes around the world

In order to lead, the EU has recently established an AI Office, a centre for AI expertise that will lead the implementation of the Act and foster a single AI governance system. The AI Office will collaborate with member states as well as with communities of experts, carrying out specific tasks: supporting the AI Act and enforcing general-purpose AI rules; strengthening the development and use of trustworthy AI; fostering international cooperation; and cooperating with institutions, experts, and stakeholders. Moreover, together with the GenAI4EU initiative, it will help develop new use cases and applications across Europe's 14 industrial ecosystems, as well as the public sector. But how will we make sure that regulations stay up to date?

Regulation is always at risk of obsolescence: law is forever chasing society in an attempt to match the present. For this reason, especially when regulating a fast-changing technology like AI, it is important to make the rules ‘future-proof’, as the European Union has done. This characteristic will allow the Act to remain flexible, ensuring that rules can adapt and that risk management stays rigorous over time.

While we wait for the Act to officially become law, the EU's approach has influenced global AI regulation efforts, with other countries like the US and Canada considering similar risk-based approaches while supporting open-source collaboration. As AI technology and policy evolve worldwide, a shared emphasis on responsible AI development will remain essential.

Including culture in risk assessment

In its most recent Policy Brief, KT4D stresses the importance of including culture as one of the spheres to consider when carrying out risk assessment. As we discussed in the first episode of this series, culture and society are deeply affected by changes in technology; culture must therefore be included in policymaking if policies are to be more equitable. In our next episode - out next Friday - we will be talking about the relationship between culture and technological change, so don’t miss it!