AI Act

Artificial Intelligence Act

The world's first comprehensive AI law — because someone had to go first

The AI Act is the world's first comprehensive legal framework for artificial intelligence. It takes a risk-based approach: the higher the risk your AI system poses, the stricter the rules. From prohibited practices (like social scoring) to minimal requirements for low-risk chatbots, this regulation covers the full spectrum of AI applications.

Scope

Any AI system placed on the EU market or whose output is used in the EU, regardless of where the provider is established. If your AI's output reaches the EU, the AI Act reaches you.

Geographic reach

EU-wide, with extraterritorial reach for non-EU providers whose AI systems are used within the EU.

In effect since

1 August 2024 (phased application: prohibitions from 2 February 2025, general-purpose AI obligations from 2 August 2025, most high-risk obligations from 2 August 2026, full application by 2 August 2027)

Purpose

To ensure AI systems in the EU are safe, transparent, and respect fundamental rights. Also to create legal certainty for AI developers and build public trust. Because 'trust us, the algorithm is fine' wasn't cutting it anymore.

Jump to your role:

AI Provider

You develop or commission an AI system and place it on the market or put it into service under your own name. You're the one who built the thing — or at least the one whose name is on the box. If you trained the model, designed the system, or put your brand on it, you're the provider.

Your obligations

  • Classify your AI system's risk level: unacceptable, high, limited, or minimal (Art. 6, Annex III)
  • For high-risk AI: implement a quality management system covering the entire lifecycle (Art. 17)
  • Conduct a conformity assessment before placing the system on the market (Art. 43)
  • Register high-risk AI systems in the EU database (Art. 49)
  • Provide transparent information to deployers — including capabilities, limitations, and intended purpose (Art. 13)
  • Implement risk management throughout the AI system's lifecycle (Art. 9)
  • Ensure training data governance: relevance, representativeness, and examination and mitigation of possible biases (Art. 10)
  • Enable human oversight of the AI system's operation (Art. 14)
  • Maintain technical documentation and logs (Art. 11–12)
  • Report serious incidents to authorities (Art. 73)
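
The classification step above drives everything else. As a first-pass triage only, the four tiers can be sketched as a simple screening helper. A minimal sketch: the keyword lists and the `screen_risk_tier` function are hypothetical illustrations, not a legal assessment.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited practice (Art. 5)"
    HIGH = "high-risk (Art. 6, Annex III)"
    LIMITED = "limited risk (transparency duties)"
    MINIMAL = "minimal risk"

# Hypothetical keyword screens — crude substring matching will over- and
# under-flag; the real test is the full Annex III wording plus legal review.
PROHIBITED_USES = {"social scoring", "subliminal manipulation"}
ANNEX_III_AREAS = {"biometrics", "employment", "education", "law enforcement",
                   "migration", "essential services", "critical infrastructure"}

def screen_risk_tier(intended_use: str, interacts_with_humans: bool) -> RiskTier:
    """First-pass triage of an AI system's likely tier. Not legal advice."""
    use = intended_use.lower()
    if any(term in use for term in PROHIBITED_USES):
        return RiskTier.UNACCEPTABLE
    if any(area in use for area in ANNEX_III_AREAS):
        return RiskTier.HIGH
    if interacts_with_humans:
        return RiskTier.LIMITED  # e.g. chatbots must disclose they are AI
    return RiskTier.MINIMAL

print(screen_risk_tier("CV screening for employment decisions", True).name)  # HIGH
```

Keyword matching only flags candidates for closer scrutiny; anything it marks HIGH or UNACCEPTABLE needs a proper assessment against the actual legal text.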

Key articles

  • Art. 6 — Classification rules
  • Art. 9 — Risk management
  • Art. 10 — Data governance
  • Art. 13 — Transparency
  • Art. 14 — Human oversight
  • Art. 17 — Quality management
  • Art. 43 — Conformity assessment
  • Art. 49 — EU database registration
  • Annex III — High-risk AI systems

Pro tip

Don't wait for August 2026 to classify your AI systems. Start now — the classification determines everything else you need to do. A spreadsheet today saves a panic later.

AI Deployer

You use an AI system under your authority — for professional purposes, not as an end consumer. If you bought or licensed an AI tool and deployed it in your organisation's workflow, you're a deployer. You didn't build the model, but you chose to use it, and that comes with responsibilities.

Your obligations

  • Use the AI system in accordance with the provider's instructions (Art. 26.1) — the manual exists for a reason
  • Ensure human oversight by competent personnel (Art. 26.2)
  • Monitor the AI system's operation and report malfunctions to the provider (Art. 26.5)
  • Conduct a Fundamental Rights Impact Assessment (FRIA) for high-risk AI in certain sectors (Art. 27)
  • Inform individuals that they are subject to AI decision-making (Art. 26.11); for emotion recognition or biometric categorisation, the transparency duties of Art. 50.3 apply
  • Keep logs generated by the high-risk AI system for at least 6 months (Art. 26.6)
  • Cooperate with national authorities when requested (Art. 26.8)
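
The six-month retention floor in Art. 26.6 is straightforward to operationalise. A minimal sketch, assuming logs are tracked by creation date; the `purge_candidates` schema (log ID mapped to date) is invented for illustration:

```python
from datetime import date, timedelta

# Art. 26.6: keep logs for a period appropriate to the system's purpose,
# and at least six months, absent other EU or national law.
SIX_MONTHS = timedelta(days=183)  # conservative approximation of six months

def past_retention_floor(log_created: date, today: date) -> bool:
    """True only once the six-month statutory floor has clearly passed."""
    return today - log_created >= SIX_MONTHS

def purge_candidates(logs: dict[str, date], today: date) -> list[str]:
    """IDs of logs past the retention floor (hypothetical id -> date schema)."""
    return [log_id for log_id, created in logs.items()
            if past_retention_floor(created, today)]
```

Note that six months is a floor, not a target: sector rules or your own risk management may require keeping logs longer.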

Key articles

  • Art. 26 — Deployer obligations
  • Art. 27 — Fundamental rights impact assessment
  • Art. 86 — Right to explanation

Pro tip

When evaluating AI vendors, ask them directly: 'What risk level is this system classified as under the AI Act, and can you provide the conformity documentation?' If they look confused, that tells you something.

AI Importer

You bring AI systems from outside the EU onto the EU market. You're the gateway — if a non-EU provider wants to sell AI in Europe, you're often the one making it happen. With that comes the responsibility to verify that what you're importing actually meets EU requirements.

Your obligations

  • Verify that the provider has carried out the conformity assessment (Art. 23.1)
  • Ensure the AI system bears the CE marking and is accompanied by required documentation (Art. 23.1)
  • Verify that the provider has appointed an authorised representative in the EU (Art. 23.2)
  • Do not place a system on the market if you have reason to believe it doesn't comply (Art. 23.3)
  • Ensure storage and transport conditions don't compromise compliance (Art. 23.4)
  • Keep a copy of the EU declaration of conformity for 10 years (Art. 23.5)
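
Because Art. 23.3 bars placing a system on the market when you have reason to believe it doesn't comply, the checks above lend themselves to a gate in the import workflow. A sketch under assumed field names (the record structure is hypothetical, not prescribed by the Act):

```python
from dataclasses import dataclass, fields

@dataclass
class ImportChecklist:
    """Hypothetical pre-market record for one imported AI system (Art. 23)."""
    conformity_assessment_done: bool   # Art. 23.1
    ce_marking_present: bool           # Art. 23.1
    documentation_complete: bool       # Art. 23.1
    authorised_rep_appointed: bool     # Art. 23.2

def blocking_issues(check: ImportChecklist) -> list[str]:
    """Names of failed checks; any entry means do not place on the market."""
    return [f.name for f in fields(check) if not getattr(check, f.name)]
```

Run the gate before every shipment: an empty result clears that system, and a non-empty one names exactly what to chase up with the provider.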

Key articles

  • Art. 23 — Importer obligations
  • Art. 43 — Conformity assessment
  • Art. 47 — EU declaration of conformity
  • Art. 48 — CE marking

Pro tip

Build compliance checks into your import process from day one. Having a checklist is cheaper than having a recalled AI system.

AI Distributor

You make AI systems available on the market without being the provider or importer. Think: reseller, marketplace, or integration partner. You're in the supply chain, and the AI Act wants everyone in the chain to do their part.

Your obligations

  • Verify that the AI system bears the CE marking and has required documentation (Art. 24.1)
  • Do not make a system available if you have reason to believe it doesn't comply (Art. 24.2)
  • Ensure storage and transport conditions don't compromise compliance (Art. 24.3)
  • Inform the provider or importer if you believe the system presents a risk (Art. 24.4)
  • Cooperate with competent authorities and provide information when requested (Art. 24.5)

Key articles

  • Art. 24 — Distributor obligations
  • Art. 25 — Responsibilities along the AI value chain

Pro tip

Keep records of your AI supply chain. Knowing exactly who provided what, when, and with which documentation will save you headaches when (not if) a regulator asks.

How Euregas can help

Available tools

  • AI system register — catalogue all AI systems in your organisation with metadata and risk levels
  • Risk classification (4 levels) — structured assessment aligned with Annex III categories
  • FRIA template — 12-question Fundamental Rights Impact Assessment
  • Compliance roadmap per risk level — step-by-step guidance tailored to your classification result

AI-assisted features

  • AI-driven risk classification — automated Annex III matching using RAG with confidence scores and contributing factors
  • Consultation wizard (ai_act_classification) — 5-step guided classification with AI analysis per step
  • Semantic search across AI Act articles and recitals

Note

The AI Act module has Euregas's strongest AI integration of any single regulation. The risk classification uses RAG-based analysis to match your AI system against Annex III categories.
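
As a toy illustration of the retrieval idea behind semantic search: score each article against the query and return the closest match. This is a bag-of-words cosine similarity, not Euregas's implementation; a production RAG pipeline uses learned embeddings, and the corpus strings below paraphrase article topics rather than quoting official text.

```python
from collections import Counter
from math import sqrt

# Toy corpus — paraphrased article topics, not the official wording.
ARTICLES = {
    "Art. 9":  "risk management system establish implement document maintain",
    "Art. 10": "training validation testing data governance quality bias",
    "Art. 14": "human oversight natural persons effective measures",
}

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(count * b[token] for token, count in a.items())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def best_match(query: str) -> str:
    """Return the article whose topic vector is closest to the query."""
    q = Counter(query.lower().split())
    return max(ARTICLES, key=lambda art: cosine(q, Counter(ARTICLES[art].split())))

print(best_match("human oversight of the system"))  # Art. 14
```

Embedding-based retrieval replaces the word counts with dense vectors so that, unlike this sketch, "oversee" and "oversight" also land near each other.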

All examples are fictional and for illustrative purposes only.