
The EU AI Act Starts Enforcing in August. Here is What You Need to Do.

High-risk enforcement under the EU AI Act begins in August 2026. A practical guide for developers building AI features, covering risk classification, documentation requirements, and compliance steps.

Transactional Team
Feb 22, 2026
9 min read

August 2026 Is Five Months Away

The EU AI Act is not a proposal anymore. It is law. The first enforcement provisions went into effect in February 2025 with bans on prohibited AI practices. The big wave, the one that affects most developers, hits in August 2026 when requirements for high-risk AI systems become enforceable.

If you are building AI features that serve European users, or that are deployed by European companies, this applies to you. And "I did not know" is not going to work as a compliance strategy.

This guide distills the regulation into practical terms, focusing on the compliance requirements that matter most for teams building AI-powered developer tools.

EU AI Act Key Facts

  • August 2026: High-risk enforcement begins
  • 35 million EUR: Maximum fine (prohibited practices)
  • 4 tiers: Risk classification
  • 1-7%: Revenue-based penalty range

The Timeline

The EU AI Act rolled out in phases.

  • February 2025: Prohibited AI practices banned (social scoring, real-time biometric surveillance, etc.)
  • August 2025: Requirements for general-purpose AI models (transparency, copyright compliance)
  • August 2026: High-risk AI system requirements fully enforceable
  • August 2027: Certain product-specific AI systems (medical devices, vehicles)

The August 2026 deadline is the one most developers need to worry about. It covers the broadest set of AI applications.

The Risk Classification System

The Act classifies AI systems into four risk tiers. Your obligations depend on which tier your system falls into.

Unacceptable Risk (Banned)

Already prohibited since February 2025:

  • Social scoring by governments
  • Real-time remote biometric identification in public spaces (with exceptions for law enforcement)
  • Emotion recognition in workplaces and schools
  • AI that exploits vulnerabilities of specific groups
  • Untargeted scraping of facial images for recognition databases

If you are building any of these, stop. The fines are up to 35 million euros or 7% of global annual revenue, whichever is higher.

High Risk

This is where most of the obligations land. AI systems are high-risk if they are used in:

  • Employment: Recruitment, hiring, performance evaluation, promotion decisions
  • Education: Admissions, grading, exam proctoring
  • Critical infrastructure: Energy, water, transport management
  • Law enforcement: Risk assessment, evidence evaluation
  • Migration: Visa processing, border control
  • Access to services: Credit scoring, insurance pricing, emergency services

If your AI feature makes decisions that materially affect people in these domains, it is likely high-risk. The requirements are substantial.

Limited Risk

Systems with specific transparency obligations:

  • Chatbots: Must disclose that the user is interacting with AI
  • Deepfakes: Must be labeled as AI-generated
  • Emotion recognition: Must inform users (where not banned)
  • Content generation: AI-generated text published to inform the public on matters of public interest must be labeled

Most customer-facing AI features fall here. The obligations are manageable.

Minimal Risk

Everything else. No specific requirements beyond existing law. This includes AI-powered spam filters, AI-assisted code completion, recommendation engines for entertainment, and most internal productivity tools.

What Developers Building AI Features Need to Do

Here is a breakdown of what actually changes in your development workflow.

For Limited Risk Systems (Most of You)

This covers most chatbots, AI assistants, and AI-powered features.

Transparency requirements:

  • Clearly inform users they are interacting with an AI system
  • Label AI-generated content appropriately
  • Document what AI models you are using and for what purpose

Practical implementation:

  • Add a visible indicator that a response is AI-generated
  • Include disclosures in your terms of service
  • Maintain a registry of AI systems in your product

This is the minimum. Most companies can comply with a few UI changes and documentation updates.
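
To make the UI change concrete, here is a minimal TypeScript sketch of the labeling pattern: every AI-generated message carries a machine-readable flag plus a human-readable disclosure that the interface renders. The interface and field names are our own, not anything prescribed by the Act.

```typescript
// A minimal sketch of the transparency pattern: AI-generated messages
// carry an explicit machine-readable flag, and the UI layer renders a
// visible disclosure from it. All names here are illustrative.

interface AssistantMessage {
  text: string;
  aiGenerated: true;  // machine-readable label
  model: string;      // which model produced it
  disclosure: string; // human-readable disclosure for the UI
}

function labelAiMessage(text: string, model: string): AssistantMessage {
  return {
    text,
    aiGenerated: true,
    model,
    disclosure: "This response was generated by an AI system.",
  };
}

// The UI renders `disclosure` alongside the message, so the user is
// informed they are interacting with AI before acting on the output.
const reply = labelAiMessage("Your invoice has been scheduled.", "example-model-v1");
console.log(`${reply.disclosure}\n\n${reply.text}`);
```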

For High Risk Systems (Pay Attention)

If your AI system falls into the high-risk category, the requirements are significantly more demanding.

Risk Management System: You need a documented, living risk management process. Not a one-time assessment. This includes:

  • Identifying and analyzing known and foreseeable risks
  • Estimating and evaluating risks from intended use and foreseeable misuse
  • Adopting risk management measures
  • Testing to ensure risks are addressed

Data Governance: Training, validation, and testing datasets must be:

  • Relevant, representative, and as error-free as possible
  • Appropriately scoped for the intended purpose
  • Examined for possible biases
  • Documented with data sheets describing methodology and characteristics (one possible shape follows this list)
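
As one way to operationalize the data sheet bullet, here is a sketch of a typed datasheet record. The Act specifies what must be documented, not a schema, so every field name here is our own suggestion.

```typescript
// Illustrative shape for a dataset "data sheet" covering the governance
// points above. Field names are our own; the Act mandates the content,
// not a format.

interface DatasetSheet {
  name: string;
  role: "training" | "validation" | "testing";
  source: string;           // where the data came from
  sizeRecords: number;
  collectionMethod: string; // how it was gathered
  intendedScope: string;    // the purpose it is appropriate for
  knownBiases: string[];    // findings from the bias examination
  lastExamined: string;     // ISO date of the last bias review
}

const supportTickets: DatasetSheet = {
  name: "support-tickets-2025",
  role: "training",
  source: "Internal helpdesk exports, PII redacted",
  sizeRecords: 120_000,
  collectionMethod: "Automated export with manual spot checks",
  intendedScope: "Fine-tuning a support-triage classifier",
  knownBiases: ["Over-represents English-language tickets"],
  lastExamined: "2026-01-15",
};
```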

Technical Documentation: Before placing a high-risk system on the market, you must prepare documentation covering:

  • General description of the system
  • Detailed description of elements and development process
  • Monitoring, functioning, and control information
  • Risk management procedures
  • Changes made throughout the system lifecycle
  • Performance metrics and their limitations
  • Data requirements and data governance measures

Record Keeping: High-risk systems must automatically log:

  • Operating periods and reference databases used
  • Input data for which the system was used
  • Identification of natural persons involved in verification (a sample log entry follows this list)
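
Here is a sketch of what an automatic log entry covering those three points might look like. The schema is illustrative; the Act mandates the information, not the format.

```typescript
// A sketch of an automatic log entry for a high-risk system. Every
// field name is our own suggestion, not prescribed by the Act.

interface HighRiskLogEntry {
  timestamp: string;            // start of the operating period
  systemId: string;
  systemVersion: string;
  referenceDatabases: string[]; // databases consulted during the run
  inputHash: string;            // hash of input data, not the raw payload
  verifiedBy?: string;          // natural person who verified the output
}

function logInference(entry: HighRiskLogEntry): void {
  // Append-only storage is the point: regulators expect a trail that
  // cannot be silently rewritten. Here we just emit structured JSON.
  console.log(JSON.stringify(entry));
}

logInference({
  timestamp: new Date().toISOString(),
  systemId: "resume-screener",
  systemVersion: "2.3.1",
  referenceDatabases: ["candidates-db"],
  inputHash: "sha256:9f2c0a…", // hypothetical digest, truncated
  verifiedBy: "reviewer-417",
});
```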

Human Oversight: The system must be designed to allow effective human oversight:

  • Human operators must be able to understand the system's capabilities and limitations
  • Operators must be able to correctly interpret outputs
  • Operators must be able to override or disregard the system's output
  • There must be a mechanism to stop the system (one gating pattern follows this list)
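
One common implementation of these requirements is a human approval gate: the system proposes, the operator sees the rationale, and can accept, override, or halt. A sketch, with the names and verdict set as our own assumptions:

```typescript
// A human approval gate: the system proposes, a human disposes.
// Thresholds, names, and the verdict set are assumptions.

interface Proposal {
  decision: "advance" | "reject";
  confidence: number; // 0..1, reported by the model
  rationale: string;  // surfaced so the operator can interpret the output
}

async function applyWithOversight(
  proposal: Proposal,
  humanReview: (p: Proposal) => Promise<"accept" | "override" | "stop">
): Promise<string> {
  // The operator always sees the rationale and can override or halt.
  const verdict = await humanReview(proposal);
  if (verdict === "stop") throw new Error("Operator halted the system");
  if (verdict === "override") return "Overridden by operator";
  return `Applied: ${proposal.decision}`;
}
```

The design point is that override and stop are first-class paths through the code, not exception handling bolted on later.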

Accuracy, Robustness, Cybersecurity: The system must achieve appropriate levels of accuracy and be resilient against errors, faults, and attempts to alter its use by unauthorized third parties.

Documentation Requirements in Practice

The documentation requirement is where most teams will struggle. It is not just writing down what your model does. It is maintaining a living document that evolves with your system.

Here is a practical documentation template:

System Card

System Name: [Name]
Version: [Version]
Risk Classification: [High / Limited / Minimal]
Intended Purpose: [Specific use case]
Intended Users: [Who uses this system]
Not Intended For: [Explicit exclusions]

Model Information:
- Provider: [OpenAI / Anthropic / etc.]
- Model: [Specific model version]
- Fine-tuning: [Yes/No, with details]
- Last updated: [Date]

Data Sources:
- Training data: [Description, size, source]
- RAG data: [Description, update frequency]
- User data processed: [Categories]

Performance Metrics:
- Accuracy: [Measured how, on what benchmark]
- Latency: [P50, P95, P99]
- Error rate: [Measured how]
- Known limitations: [List]

Risk Assessment:
- Identified risks: [List with severity]
- Mitigation measures: [For each risk]
- Residual risks: [Accepted risks with justification]

Human Oversight:
- Override mechanism: [How operators can intervene]
- Escalation path: [When and how to escalate]
- Review frequency: [How often outputs are reviewed]

Change Log

Every modification to the system needs to be recorded. Not just code changes. Model version updates, training data changes, prompt modifications, tool additions, and configuration changes.
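
A change log broad enough to capture those non-code changes can be as simple as a typed, append-only record. A sketch; the fields are our own suggestion:

```typescript
// An AI-system change-log entry that covers more than code changes.
// The shape is our own suggestion, not a mandated format.

type ChangeKind =
  | "model-version"
  | "training-data"
  | "prompt"
  | "tool"
  | "configuration"
  | "code";

interface AiChangeLogEntry {
  date: string;
  kind: ChangeKind;
  system: string;
  description: string;
  author: string;
  riskReviewed: boolean; // did the change trigger a risk re-assessment?
}

const entry: AiChangeLogEntry = {
  date: "2026-03-02",
  kind: "prompt",
  system: "support-triage",
  description: "Tightened system prompt to refuse legal advice",
  author: "jane@example.com",
  riskReviewed: true,
};
```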

The Penalty Structure

The fines are graduated based on the violation:

  • Prohibited practices: Up to 35 million euros or 7% of global annual turnover, whichever is higher
  • High-risk violations: Up to 15 million euros or 3% of global annual turnover, whichever is higher
  • Incorrect information to authorities: Up to 7.5 million euros or 1% of global annual turnover, whichever is higher

For SMEs and startups, the cap is whichever of the two amounts is lower. A startup with 50 million euros in annual turnover would face at most 3.5 million euros (7% of turnover) for a prohibited-practice violation, not the 35 million euro ceiling. Even the reduced amounts are significant enough to be existential for most startups.

Practical Steps for Compliance

Step 1: Classify Your AI Systems (This Week)

Go through every AI feature in your product. Classify each one:

  • What decisions does it make or influence?
  • Does it affect employment, education, credit, or safety?
  • Is it customer-facing or internal-only?
  • What data does it process?

Most features will be limited or minimal risk. Identify the ones that might be high-risk and prioritize those.
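
If it helps to make the triage systematic, here is a rough first-pass helper mirroring those questions. It flags candidates for proper legal review; it is not a legal determination.

```typescript
// A rough triage helper mirroring the classification questions above.
// It finds candidates for review, not a legal conclusion.

type RiskTier = "unacceptable" | "high" | "limited" | "minimal";

interface FeatureProfile {
  prohibitedPractice: boolean; // social scoring, untargeted scraping, etc.
  affectsEmploymentEducationCreditOrSafety: boolean;
  customerFacingAiInteraction: boolean; // chatbot, generated content
}

function triage(f: FeatureProfile): RiskTier {
  if (f.prohibitedPractice) return "unacceptable";
  if (f.affectsEmploymentEducationCreditOrSafety) return "high";
  if (f.customerFacingAiInteraction) return "limited";
  return "minimal";
}

// Example: an internal code-completion helper
console.log(triage({
  prohibitedPractice: false,
  affectsEmploymentEducationCreditOrSafety: false,
  customerFacingAiInteraction: false,
})); // "minimal"
```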

Step 2: Implement Transparency (This Month)

For limited-risk systems:

  • Add AI disclosure labels to chatbot interfaces
  • Update terms of service with AI usage disclosures
  • Label AI-generated content

This is low-effort, high-impact compliance work.

Step 3: Start Documentation (This Quarter)

For high-risk systems:

  • Create system cards for each AI feature
  • Document your model supply chain
  • Set up change logging for AI system modifications
  • Begin risk assessments

Step 4: Build Monitoring (Before August)

  • Implement logging for all AI system interactions
  • Set up performance monitoring with accuracy tracking (a minimal tracker is sketched after this list)
  • Create human oversight mechanisms
  • Establish incident response procedures for AI-specific failures
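
For the accuracy-tracking bullet, even a bare-bones monitor that samples outputs, records human verdicts, and alerts on drift gets you started. A sketch, with a threshold that is purely an assumption:

```typescript
// A bare-bones accuracy tracker: sample outputs, record human review
// verdicts, and flag when the error rate drifts past a threshold.

class AccuracyMonitor {
  private total = 0;
  private errors = 0;

  record(correct: boolean): void {
    this.total += 1;
    if (!correct) this.errors += 1;
  }

  errorRate(): number {
    return this.total === 0 ? 0 : this.errors / this.total;
  }

  breached(threshold = 0.05): boolean {
    // e.g. alert when more than 5% of sampled outputs fail human review,
    // after a minimum sample size. Both numbers are assumptions.
    return this.total >= 100 && this.errorRate() > threshold;
  }
}

const monitor = new AccuracyMonitor();
monitor.record(true);
monitor.record(false);
console.log(monitor.errorRate()); // 0.5 on this tiny sample
```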

How This Affects API-Based AI

If you are using AI through APIs, like most developers, you still have obligations. The Act applies to deployers, not just providers. You are responsible for:

  • How you use the AI output
  • What data you send to the model
  • The transparency of your AI-powered features
  • The oversight mechanisms you provide

Your LLM provider's compliance does not cover your application's compliance. You need your own documentation, your own risk assessment, and your own monitoring.
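
As a sketch of what deployer-side auditing can look like, here is a wrapper that records metadata about every model call your feature makes. The callModel parameter stands in for whichever provider SDK you use, and the audit fields are our own suggestion.

```typescript
// Deployer-side audit trail around a provider API call. `callModel`
// is a placeholder for your actual SDK call; field names are ours.

interface AuditRecord {
  timestamp: string;
  feature: string;    // which product feature made the call
  provider: string;
  model: string;
  promptHash: string; // a hash, so the log avoids storing raw user data
  responseId?: string;
}

async function auditedCall(
  feature: string,
  prompt: string,
  callModel: (prompt: string) => Promise<{ id: string; text: string }>
): Promise<string> {
  const response = await callModel(prompt);
  const record: AuditRecord = {
    timestamp: new Date().toISOString(),
    feature,
    provider: "example-provider", // illustrative values
    model: "example-model-v1",
    promptHash: await sha256(prompt),
    responseId: response.id,
  };
  console.log(JSON.stringify(record)); // ship this to your audit store
  return response.text;
}

async function sha256(text: string): Promise<string> {
  // Web Crypto is available in browsers and modern Node runtimes.
  const data = new TextEncoder().encode(text);
  const digest = await crypto.subtle.digest("SHA-256", data);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}
```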

This is one of the reasons we built the AI Gateway at Transactional. It provides the logging, monitoring, and audit trail that compliance requires, without you having to build that infrastructure from scratch. Every request is logged with full metadata, every response is auditable, and you get the documentation trail regulators will ask for.

The Takeaway

The EU AI Act is not going away, and it is not going to be delayed again. August 2026 is the deadline. The companies that start compliance work now will find it manageable. The companies that wait until July will find it impossible.

Start with classification. Most of your AI features are probably limited risk with straightforward transparency requirements. Focus your heavy compliance effort on the few systems that are genuinely high-risk. And get your documentation habits in place now, because retroactive documentation is always harder than documenting as you build.

Tags:
compliance
ai
regulation
