Guiding European AI - a look at the AI Office
18 July 2024
Welcome back to the KT4D series on the EU AI Act! In our last episode we discussed culture’s role in navigating technological change. Today we’re back with an article on the newly established AI Office - the foundation of a unified European AI governance system meant to ensure safe and trustworthy technology.
Early consequences of the EU AI Act
In 2021 the European Commission put forward a proposal for an AI Act, on which the European Parliament and the Council of the EU reached a political consensus in December 2023. The Council then approved the legal framework in May 2024; however, the Act will only become fully applicable 24 months from that date.
The EU AI Act is the world's first comprehensive AI regulation. It harmonises rules across 27 member states to protect rights and promote innovation, positioning Europe as a leader in AI. The Act aims to reduce risks, combat discrimination, and enhance transparency. It supports start-ups and SMEs through initiatives like 'AI factories' and an AI innovation package, including €4 billion in investments by 2027.
The AI Act is a significant step in balancing AI innovation and societal concerns. Its impact on models like GPT depends on effective implementation, requiring ongoing collaboration among policymakers, researchers, and industry. Carrying out that implementation requires a body competent in both the technical and social aspects of AI - one that can ensure AI remains safe and trustworthy.
The AI Office
Established within the European Commission as the centre of AI expertise, the AI Office forms the foundation of a unified European AI governance system, safeguarding fundamental aspects of our lives while guaranteeing legal certainty for businesses across the 27 Member States.
As the implementing body of the Act, the AI Office is responsible for assisting governance bodies in Member States and holds the authority to evaluate AI models, request information, and apply sanctions. The Office also promotes an innovative AI ecosystem, ensuring Europe takes a strategic and effective approach to AI globally.
To support informed decisions, the AI Office works with Member States and experts through specialized forums. This collaboration gathers insights from scientists, industry, think tanks, civil groups, and the open-source community, ensuring a comprehensive understanding of AI’s benefits and risks.
The AI Office will be headed by Lucilla Sioli - Director for AI and Digital Industry at DG CONNECT - who was already responsible for coordination and policy development in the areas of AI and semiconductors. Drawing on this expertise and background, she aims to chart a safe and secure path forward for the Office.
Division of tasks and competences
The Office will consist of five units: the “Excellence in AI and Robotics” unit; the “Regulation and Compliance” unit; the “AI Safety” unit; the “AI Innovation and Policy Coordination” unit; and the “AI for Societal Good” unit. It will also have two advisers, one for scientific matters and one for international affairs.
In applying and enforcing the AI Act across Member States, the Office develops evaluation tools, creates codes of practice, investigates violations, and provides implementation guidance. It promotes collaboration with the public and private sectors, supports AI testing environments, and enhances EU competitiveness. Internationally, the AI Office advances the EU’s AI strategy by fostering global AI governance, supporting international agreements, and helping Member States align with these standards.
Alongside the broader societal impacts of the EU AI Act, KT4D's policy brief highlights the need to consider cultural responses in AI development. It argues that current risk assessments overlook cultural implications, urging regulatory frameworks to better integrate cultural factors. This approach not only enriches our understanding of how communities adapt to technology but also ensures that new media align with evolving societal values. As technology evolves, protecting cultural dimensions from AI risks becomes crucial for maintaining democratic principles. Prioritizing cultural preservation alongside technological regulation can lead to more effective and equitable AI policies that benefit from valuable cultural insights.
Read the full policy brief here.