AI Governance & EU AI Act
Driving compliance with the EU AI Act while turning regulatory challenges into opportunities for innovation and growth
The rapid advancement of AI technologies has raised significant concerns about risks such as biases, ethical dilemmas, and threats to safety and privacy.
Recognizing the need for robust regulation, the EU AI Act establishes a comprehensive legal framework that categorizes AI systems by risk levels — from unacceptable to low — and sets clear requirements for high-risk applications, ensuring safety, transparency and alignment with fundamental human rights, while promoting innovation.
AI Compliance is no longer optional: by early 2025, the EU AI Act requires organizations using or developing AI to comply with a first set of obligations
With the first obligations of the EU AI Act taking effect at the beginning of 2025, collaborate with MULTIPLAI to navigate the regulation with our unique offerings. We distill the Act to its essentials, ensuring compliance and leveraging our thorough understanding to turn obligations into opportunities for AI-driven innovation and scaling.
The Act applies to all organizations along the AI value chain that are established or located in the EU, or whose AI system outputs are used in the EU.
The Act differentiates between the following roles, which are subject to different obligations:
Provider: develops an AI system under its own name or trademark
Deployer: uses an AI system under its authority
Distributor: makes an AI system available across the EU market
Importer: places an AI system on the EU market
Authorized representative: acts on behalf of a provider established outside the EU
The EU AI Act follows a risk-based approach, where AI systems are regulated according to their potential impact on individuals & society. The higher the risk, the stricter the regulatory requirements. The risk-based approach divides AI systems into four categories, making proper classification of each system crucial as obligations vary by category:
Risk classification of the EU AI Act regulation
Unacceptable risk: AI systems that pose an unacceptable risk and can cause significant harm
Examples: manipulation of human behaviour, exploitation of vulnerabilities, social scoring, etc.
Obligations: SHUTDOWN & REPORTING
High risk: AI systems that affect safety or fundamental human rights
Examples: AI systems used in safety-critical product components (Annex I, A) or falling into specific high-risk areas (Annex III)
Obligations: CONFORMITY ASSESSMENT & MITIGATION MEASURES
General-purpose AI (GPAI): GPAI models perform a variety of tasks and are widely distributed, making them highly versatile
Examples: large language models, foundational AI models used across various applications
Obligations: TRANSPARENCY REQUIREMENTS & GPAI OBLIGATIONS
Limited risk: AI systems that interact with human end users and pose a risk of impersonation & deception
Examples: customer-facing chatbots, emotion-recognition systems, biometric categorization, deep fakes
Obligations: TRANSPARENCY REQUIREMENTS
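For organizations building an internal inventory of their AI systems, the four categories above can be mirrored directly in a simple data model. The sketch below is a minimal, illustrative Python example; the class names, fields, and obligation strings are our own simplification of the overview above, not terms defined by the Act:

```python
from dataclasses import dataclass
from enum import Enum


class RiskClass(Enum):
    """EU AI Act risk categories as summarized above (simplified)."""
    UNACCEPTABLE = "unacceptable"   # prohibited practices, e.g. social scoring
    HIGH = "high"                   # Annex I safety components or Annex III areas
    GPAI = "gpai"                   # general-purpose AI models
    LIMITED = "limited"             # transparency obligations, e.g. chatbots


# Headline obligations per category, mirroring the overview above.
OBLIGATIONS = {
    RiskClass.UNACCEPTABLE: ["shutdown", "reporting"],
    RiskClass.HIGH: ["conformity assessment", "risk mitigation measures"],
    RiskClass.GPAI: ["transparency requirements", "GPAI obligations"],
    RiskClass.LIMITED: ["transparency requirements"],
}


@dataclass
class AISystemRecord:
    """One entry in an enterprise AI system inventory (illustrative only)."""
    name: str
    owner: str
    risk_class: RiskClass

    def obligations(self) -> list[str]:
        return OBLIGATIONS[self.risk_class]


chatbot = AISystemRecord("customer support bot", "Customer Service", RiskClass.LIMITED)
print(chatbot.obligations())  # ['transparency requirements']
```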
The EU AI Act offers more than regulation; it is a strategic enabler for scaling AI through holistic governance and cross-functional collaboration
Zita Bohn, AI Governance & EU AI Act Lead
While the EU AI Act applies to all AI systems within the EU, only those in highly regulated categories are subject to the most stringent requirements. Determining whether an AI system operates in a high-risk area or serves as a critical safety component is essential, as these systems must comply with the high-risk obligations. Consequently, certain industries may face greater exposure than others.
AI systems that impact safety or fundamental human rights are considered high-risk and are central to the EU AI Act, facing stringent regulations.
AI systems fulfilling a safety function for a product or system covered by the legislation listed in Annex I, Section A are also classified as high-risk and face the same obligations as other high-risk AI systems if they are subject to a third-party conformity assessment under a different EU regulation.
Industries using AI in high-risk areas or in safety-critical product components are highly exposed to the EU AI Act and must meet strict safety & compliance regulations.
Certain AI systems are excluded from the EU AI Act, including those serving as safety components in certain sectors (Annex I, Section B).
Evaluate your level of exposure with our EU AI Act Exposure Check:
EXPOSURE ASSESSMENT
Non-compliance with the EU AI Act can lead to significant financial penalties and reputational harm, impacting a firm's credibility and market position. Notable financial penalties include:
Up to €35 m or 7% of global annual turnover for the use of prohibited AI systems
Up to €15 m or 3% of global annual turnover for violation of any obligations associated with the outlined roles
Up to €7.5 m or 1% of global annual turnover for the supply of incorrect information
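To put the turnover-based caps into perspective: for an undertaking, the maximum fine in each tier is the fixed amount or the stated percentage of worldwide annual turnover, whichever is higher. A minimal worked example in Python (the function name and the €1 bn turnover figure are our own illustration, not from the Act):

```python
def max_fine_eur(global_turnover_eur: float, fixed_cap_eur: float, turnover_share: float) -> float:
    """Upper bound of a fine tier: the fixed cap or the share of worldwide
    annual turnover, whichever is higher (illustrative only, not legal advice)."""
    return max(fixed_cap_eur, turnover_share * global_turnover_eur)


# Prohibited-AI tier for a company with €1 bn worldwide annual turnover:
print(max_fine_eur(1_000_000_000, 35_000_000, 0.07))  # 70000000.0, i.e. a €70 m cap
```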
The Act is expected to reshape how organizations think about and manage AI (analogous to the GDPR), with significant financial and organizational impact.
The EU AI Act entered into force in August 2024 and is now being rolled out in stages. This starts with a transition period allowing businesses time to comply before first enforcement begins in 2025.
August 2024: Publication of the EU AI Act and entry into force
February 2025: General provisions (incl. AI literacy) & ban on AI systems with prohibited risk (Ch. I, Ch. II)
August 2025: Obligations for providers of new GPAI models & provisions on notifying authorities (Ch. III.4, Ch. V, Ch. VII, Ch. XII, Art. 78)
August 2026: Majority of obligations for AI systems & roles
August 2027: Obligations for existing GPAI models (on the market before Q3 2025) & high-risk systems under Annex I (Art. 6(1)); full scope of regulation applies
Conformity assessments and documentation requirements are primarily relevant from a compliance perspective. Beyond compliance, the Act also creates opportunities to:
Leverage synergies through complete transparency across all use cases, eliminating siloed operations and redundant efforts for maximum efficiency
Foster a culture of responsible innovation by enhancing AI quality, safety, and trustworthiness to ensure widespread AI acceptance and adoption
Standardize and redesign AI use case lifecycle in line with EU AI Act to enable successful scaling beyond the pilot phase and accelerate time-to-market
Ensure the right people start preparing for the upcoming regulatory requirements and design your solution for the long term:
Set up a multidisciplinary taskforce to cover the full range of expertise
Establish an enterprise-wide governance framework, standards & processes based on maturity
Inventory AI across the entire organization and set up a single source of truth
Assess the risk classes of all use cases and start implementing risk mitigation measures
Establish continuous monitoring & oversight to ensure compliance & innovation
With extensive expertise in Data and AI Strategy & Governance, we distill the Act to its essentials, ensuring compliance and leveraging our thorough understanding to turn obligations into opportunities for AI-driven innovation and scaling. Our EU AI Act governance framework provides a systematic path to compliance. Guided by the core requirements of the EU AI Act, we establish enterprise-wide governance for the effective identification, risk assessment & monitoring of AI systems to fulfill regulatory requirements.
INNOVATION & SCALING
Strategic workshops and assessments help evaluate your organization's AI governance maturity and exposure to the EU AI Act. Tailored roadmaps are co-created to ensure alignment with regulatory requirements, addressing gaps in readiness and compliance priorities. Stakeholder engagement and integration into the broader Data & AI strategy are essential for long-term impact.
Educational programs are customized to organizational roles, from foundational AI literacy training for all employees to specialized enablement for AI officers and technical teams. These sessions enhance understanding of governance requirements and foster awareness and AI adoption across all levels.
A tailored AI governance framework is developed to align with the organization's unique needs and maturity level. An operating model and organizational structures are designed to leverage existing capabilities while defining clear roles, responsibilities, and processes, supported by a multidisciplinary team at the core.
Leveraging deep expertise to transform compliance obligations into opportunities for AI-driven innovation and growth. By integrating compliance into the AI lifecycle, we enhance transparency, eliminate silos, and standardize processes to accelerate time-to-market. Our approach fosters a culture of innovation, prioritizing AI quality, safety, and trust to drive acceptance and adoption at scale.
Designing a risk-based governance approach to ensure accurate AI system identification, classification, and ongoing monitoring. This involves creating an AI system inventory, conducting a risk assessment leveraging our comprehensive EU AI Act assessment, and evaluating tooling to enhance automation and streamline processes.
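As an illustration of how such an inventory and risk assessment can be operationalized, the sketch below shows a simplified screening routine that assigns a preliminary risk class to each use case. The questions and field names are our own first-pass simplification, not the Act's legal test; any real classification requires a full assessment of the system against the regulation.

```python
from dataclasses import dataclass


@dataclass
class ScreeningAnswers:
    """Simplified intake questions for one AI use case (illustrative)."""
    uses_prohibited_practice: bool    # e.g. social scoring, behavioural manipulation
    is_annex1_safety_component: bool  # safety component under Annex I legislation
    falls_into_annex3_area: bool      # e.g. employment, credit scoring, law enforcement
    is_gpai_model: bool               # general-purpose AI model
    interacts_with_end_users: bool    # chatbots, deep fakes, emotion recognition


def preliminary_risk_class(a: ScreeningAnswers) -> str:
    """First-pass triage of a use case into an EU AI Act risk class."""
    if a.uses_prohibited_practice:
        return "unacceptable"
    if a.is_annex1_safety_component or a.falls_into_annex3_area:
        return "high"
    if a.is_gpai_model:
        return "gpai"
    if a.interacts_with_end_users:
        return "limited"
    return "minimal"


# Example: a hiring-support tool that screens CVs and chats with applicants
print(preliminary_risk_class(ScreeningAnswers(False, False, True, False, True)))  # "high"
```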
For providers outside the EU, appointing an Authorized Representative (AR) within the EU is mandatory before entering the Union market. We can act as intermediaries, ensuring compliance with EU AI Act requirements. We manage tasks on behalf of the provider, including handling conformity and technical documentation, regulatory declarations, and facilitating cooperation with authorities to ensure seamless alignment with EU regulations.
Download our Governance POV today:
GOVERNANCE POV
Want to know more or discuss your individual challenges with us?
GET IN CONTACT!