Photo: Steve Johnson on Unsplash
AI & Regulation · 8 April 2026 · 12 min read

EU AI Act: Compliance Obligations for Enterprises From August 2026

By CIVAC Editorial

On 2 August 2026, the main application date of Regulation (EU) 2024/1689 arrives. Any organisation that develops, distributes or deploys high-risk AI then needs a risk-management system under Art. 9, data governance under Art. 10 and human oversight under Art. 14, alongside the AI-literacy training for every employee working with AI that Art. 4 already requires.

Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (the AI Act) entered into force on 1 August 2024. It is the world's first comprehensive horizontal AI regulation and, as a regulation, applies directly in all EU Member States without national transposition. Application is staggered, however, and the decisive date is not in the past but right around the corner: on 2 August 2026, the main application begins, bringing the obligations for high-risk AI systems, the governance framework and the national market-surveillance structures.

Key dates at a glance

Date | What enters into application
1 August 2024 | AI Act enters into force (published in the EU Official Journal).
2 February 2025 | Prohibited AI practices (Art. 5) and the AI-literacy obligation (Art. 4) apply.
2 August 2025 | Obligations for providers of General-Purpose AI models (GPAI, Art. 51 et seq.) and national authority designations.
2 August 2026 | Main application, notably obligations for Annex III high-risk AI systems, governance and fines.
2 August 2027 | High-risk AI systems that are safety components of regulated products (Annex I); obligations for GPAI models already on the market before 2 August 2025.
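The staggered timeline can be expressed as a small lookup. This is an illustrative sketch only: the milestone dates come from the table above, while the grouping labels and the function name `applicable` are our own shorthand, not official terms.

```python
from datetime import date

# Milestone dates from the key-dates table; labels are informal shorthand.
MILESTONES = [
    (date(2024, 8, 1), "entry into force"),
    (date(2025, 2, 2), "prohibited practices (Art. 5), AI literacy (Art. 4)"),
    (date(2025, 8, 2), "GPAI provider obligations (Art. 51 et seq.)"),
    (date(2026, 8, 2), "high-risk obligations (Annex III), governance, fines"),
    (date(2027, 8, 2), "Annex I safety components, pre-August-2025 GPAI models"),
]

def applicable(on: date) -> list[str]:
    """Return the obligation blocks already in application on a given date."""
    return [label for milestone, label in MILESTONES if on >= milestone]

# On the main application date, four of the five blocks are live:
print(len(applicable(date(2026, 8, 2))))  # 4
```

Checking a project go-live date against this list is a quick way to see which obligation blocks it will already face.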

Risk classes and scope

The AI Act follows a risk-based approach. Every AI system falls into one of four categories, with regulatory intensity decreasing from the first category to the last:

  • Prohibited practices (Art. 5): among others, subliminal manipulation, exploitation of vulnerabilities, social scoring by public authorities, emotion recognition in the workplace or in education, untargeted scraping of facial images from the internet, and real-time remote biometric identification in public spaces (with narrow exceptions).
  • High-risk AI (Art. 6 in conjunction with Annex I/III): AI as a safety component of regulated products (e.g. medical devices, machinery) and stand-alone AI systems in eight domains: biometric identification, critical infrastructure, education, employment (e.g. CV screening), essential private and public services (e.g. creditworthiness), law enforcement, migration and border control, and justice and democratic processes.
  • Limited risk (Art. 50): transparency obligations for specific systems: chatbots must disclose that they are AI, deepfakes and AI-generated content must be labelled, and emotion-recognition systems must disclose that they are active.
  • Minimal risk: all other AI systems; only the general AI-literacy obligation applies, with no product-specific requirements.
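The four-tier logic above can be sketched as a simple decision cascade. This is a deliberate simplification for illustration: the boolean flags, the `RiskTier` enum and the `classify` function are our own constructs, real classification requires legal analysis of Art. 5, Art. 6 (including the Art. 6(3) derogation) and the annexes.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited (Art. 5)"
    HIGH = "high-risk (Art. 6, Annex I/III)"
    LIMITED = "limited risk (Art. 50)"
    MINIMAL = "minimal risk"

def classify(prohibited_practice: bool,
             annex_iii_domain: bool,
             safety_component: bool,
             transparency_trigger: bool) -> RiskTier:
    # Checks mirror the order of the list above: prohibition first,
    # then high-risk, then transparency duties, minimal risk as fallback.
    if prohibited_practice:
        return RiskTier.PROHIBITED
    if annex_iii_domain or safety_component:
        return RiskTier.HIGH
    if transparency_trigger:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# A CV-screening system falls under the employment domain of Annex III:
print(classify(False, True, False, False))  # RiskTier.HIGH
```

Note that the tiers are evaluated in strict order: a system that is both an Annex III system and a prohibited practice is simply banned, regardless of any high-risk compliance effort.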

Obligations for high-risk AI systems

The core duties for providers of high-risk AI systems are set out in Articles 9 to 15, which together form a coherent management system reminiscent of ISO 9001 and ISO/IEC 27001:

  • Art. 9 Risk management system: a continuous, documented process across the full system lifecycle.
  • Art. 10 Data and data governance: requirements for training, validation and test data (relevance, representativeness, freedom from errors) and data governance in the narrower sense.
  • Art. 11 Technical documentation: to be drawn up before placing on the market and kept up to date; minimum contents in Annex IV.
  • Art. 12 Record-keeping: automatic logging of events over the operational lifetime.
  • Art. 13 Transparency and information for deployers: instructions for use covering limits, accuracy and human oversight.
  • Art. 14 Human oversight: system-side measures that enable effective oversight by natural persons.
  • Art. 15 Accuracy, robustness and cybersecurity: appropriate performance levels, declared in the instructions for use.

On top of these, there are obligations for a quality management system (Art. 17), a conformity assessment (Art. 43), CE marking (Art. 48) and registration in the EU database for high-risk AI (Art. 49). Deployers (organisations that use a high-risk AI system under their own authority) have their own duties under Art. 26: follow the instructions for use, assign human oversight, monitor input data, keep logs, inform affected persons, and, in certain constellations, carry out a Fundamental Rights Impact Assessment (FRIA, Art. 27).
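For tracking purposes, the provider duties listed above reduce to a flat evidence checklist. A minimal sketch, assuming a simple done/not-done status per duty; the dictionary keys paraphrase the articles and the helper `open_duties` is hypothetical, not part of any official tooling.

```python
# Evidence checklist for the core and cross-cutting provider duties
# (Arts. 9-15 plus Arts. 17, 43, 48, 49); False = evidence still missing.
PROVIDER_DUTIES = {
    "Art. 9 risk management system": False,
    "Art. 10 data and data governance": False,
    "Art. 11 technical documentation": False,
    "Art. 12 record-keeping": False,
    "Art. 13 transparency for deployers": False,
    "Art. 14 human oversight": False,
    "Art. 15 accuracy, robustness, cybersecurity": False,
    "Art. 17 quality management system": False,
    "Art. 43 conformity assessment": False,
    "Art. 48 CE marking": False,
    "Art. 49 EU database registration": False,
}

def open_duties(status: dict[str, bool]) -> list[str]:
    """Duties still missing evidence before placing on the market."""
    return [duty for duty, done in status.items() if not done]

status = dict(PROVIDER_DUTIES, **{"Art. 9 risk management system": True})
print(len(open_duties(status)))  # 10
```

The point of the flat list: nothing here is optional for a high-risk provider, so "compliant" simply means the open-duties list is empty.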

AI-literacy obligation (Art. 4): since February 2025

Often overlooked but already applicable since 2 February 2025: Art. 4 obliges providers and deployers of AI systems to ensure a sufficient level of AI literacy among their staff and other persons dealing, on their behalf, with the operation and use of AI systems. The wording is deliberately broad: it addresses all employees working with AI, not only the development function. The concrete training scope must take into account the technical knowledge, experience, education and application context of the persons concerned.

General-Purpose AI models (GPAI): since August 2025

Providers of General-Purpose AI models (the GPT-, Claude-, Gemini-class models) have been subject to their own obligations since 2 August 2025 (Art. 51–55): technical documentation, a copyright policy, a training-data summary, and cooperation with authorities. For GPAI models with systemic risk, additional obligations apply: model evaluation, risk mitigation, reporting of serious incidents, and cybersecurity protection.

Concrete implementation steps

  1. Build an AI inventory: capture every AI system in use (in-house, bought in, embedded in SaaS), including purpose, data categories and vendor information.
  2. Classify by role and risk: for each system, ask: provider or deployer? High-risk (Annex III)? Prohibited (Art. 5)? Transparency obligation (Art. 50)?
  3. Roll out an AI-literacy programme: audience-specific training for engineering, business units and leadership, with per-employee evidence of completion.
  4. For high-risk AI: establish risk management (Art. 9), data governance (Art. 10), technical documentation (Art. 11) and human oversight (Art. 14), and have them in place before 2 August 2026.
  5. Operationalise deployer duties: logs, monitoring of input data, and a FRIA (Art. 27) where applicable.
  6. Clarify the interplay with the GDPR, NIS 2 and sector-specific regulation; in particular, cleanly separate and link the DPIA (Art. 35 GDPR) and the FRIA (Art. 27 AI Act).
  7. Anchor AI compliance governance: a dedicated role (AI compliance officer) at the intersection of data protection, IT security and compliance, with a clear mandate and resources.

Turn this into a mandate.

Let us carry the operational weight. External officer, templates and documentation in one workspace. No obligation.
