Pospíšil Petr | CyberPOPE Independent Consultant | Cyber Security Architect & vCISO
EU AI Act / AI Security / Compliance / AI Regulation / AI Pentesting

The EU AI Act Is Already in Force. Here's What It Means for Your Business — and Your AI.

Petr Pospíšil

Most articles about the EU AI Act read like legal filings. This one doesn’t.

By the end of this post you’ll know exactly which tier of the Act applies to your organisation, what you are legally required to do, and — if you build AI — what the security requirements actually mean in practice.

No Latin. No recitals. Let’s go.


The short version first

The EU AI Act classifies every AI system into one of four risk levels. The higher the risk to people, the stricter the rules. The enforcement clock is already ticking — some obligations are already in force as of February 2025. Full application for high-risk systems hits in August 2026.

This article covers three practically distinct situations:

  1. Any organisation using AI — are you running a prohibited system without knowing it?
  2. Any organisation deploying AI in sensitive contexts — what are your obligations as a deployer of high-risk AI?
  3. Any organisation that builds or provides AI models — what do GPAI obligations actually require?

Let’s take each one.


Level 1 — Prohibited AI: What is Banned Outright

Effective since: 2 February 2025 Penalty: up to €35M or 7% of global annual turnover

The Act doesn’t just regulate AI — it prohibits certain uses entirely. These are applications the EU considers incompatible with fundamental rights, dignity, and democratic values. There is no compliance path for them. If your organisation is doing any of the following, you must stop.

The eight prohibited practices (Article 5)

1. Subliminal manipulation AI that influences people’s behaviour through techniques they are not consciously aware of, causing or likely to cause significant harm. The key test is whether the technique bypasses conscious awareness and causes harm — not every persuasive AI is banned, but systems designed to covertly steer behaviour against a person’s own interests are.

2. Exploiting vulnerabilities AI that exploits vulnerabilities specific to a group — due to age, disability, or social or economic situation — to materially distort behaviour in a way that causes significant harm. This covers manipulative targeting of elderly people, children, or people in financial distress.

3. Social scoring by public authorities AI used by public bodies to evaluate or classify people over time based on their social behaviour or personal characteristics, where the resulting score leads to: detrimental treatment in contexts unrelated to where the data was collected, or unjustified and disproportionate treatment relative to the underlying behaviour.

Note the scope: this specifically targets public authorities using AI to create general social scores that then disadvantage people across multiple life areas. It is not a blanket ban on any form of risk scoring. Commercial credit risk assessment based on financial history is explicitly carved out — the prohibition targets behavioural scoring that bleeds across unrelated life contexts.

4. Predictive policing based on profiling AI used to assess whether an individual is likely to commit a crime, based solely on profiling or personality traits rather than on objective evidence tied to specific suspicious activity.

5. Untargeted facial recognition database scraping Building or expanding facial recognition databases by bulk-scraping images from the internet or CCTV footage without a targeted purpose.

6. Facial recognition in publicly accessible spaces (real-time) Deploying live remote biometric identification systems in public spaces for law enforcement purposes. Narrow exceptions exist — imminent terrorism threats, targeted searches for missing persons, prosecution of serious crimes — but they require prior judicial or administrative authorisation and are explicitly limited in scope.

7. Biometric categorisation by sensitive characteristics Using biometric data to infer or deduce a person’s race, political opinions, trade union membership, religious beliefs, sex life, or sexual orientation.

8. Emotion recognition in the workplace and in educational institutions Deploying emotion recognition systems on employees or students. An exception exists where use is for safety purposes (e.g. detecting fatigue in vehicle operators), but using AI to monitor emotional states for productivity, engagement, or disciplinary purposes is prohibited.

What you need to do

Audit your AI vendors. You are responsible not only for AI you build but for AI you procure and use. Ask vendors directly:

  • Does your product perform emotion recognition on employees or students?
  • Does it use biometric data to infer personal characteristics beyond the stated purpose?
  • Does it create scores that affect people across contexts unrelated to the original data?

If the answer is yes — or “we’re not sure” — that is a compliance and procurement conversation you need to start today.


Level 2 — High-Risk AI: What Deployers and Providers Must Do

Deadline: 2 August 2026 Penalty: up to €15M or 3% of global annual turnover

High-risk AI covers systems with significant potential to affect people’s health, safety, or fundamental rights. If your organisation deploys AI in any of the following areas, you are working with high-risk AI (Annex III):

  • Employment and HR — AI that evaluates candidates, filters applications, ranks candidates, or makes or materially influences decisions about hiring, promotion, or termination
  • Credit and essential services — creditworthiness scoring, loan eligibility, insurance underwriting
  • Healthcare — AI assisting in diagnosis, triage, or treatment recommendations
  • Education — automated assessment of students, exam grading, admissions decisions
  • Critical infrastructure — AI managing energy, water, transport, or financial systems
  • Law enforcement and border control — risk assessment tools used by police or customs

Important nuance on HR tools: Not every AI tool used in recruitment is automatically high-risk. The key test is whether the system makes or materially influences decisions about a person’s employment. A CV ranking tool that determines who gets shortlisted — yes. An AI scheduling assistant that books interview slots — probably not. The deciding factor is impact on the candidate’s career prospects.

Provider obligations vs. deployer obligations

The Act draws a clear line between two roles, and the obligations are different.

Providers — organisations that develop an AI system and place it on the market or into service — carry the primary compliance burden. Providers must implement:

  • Risk management (Article 9) — a documented, ongoing process covering the system’s full lifecycle
  • Data governance (Article 10) — training and test data must be documented, quality-assessed, and monitored for bias
  • Technical documentation (Article 11) — a full technical file covering design decisions, performance metrics, testing results, and intended use
  • Transparency (Article 13) — the system must communicate what it does and how to interpret its outputs
  • Human oversight (Article 14) — humans must be able to understand, override, and stop the system
  • Accuracy, robustness, and cybersecurity (Article 15) — the system must remain resilient against errors, adversarial inputs, and unauthorised exploitation
  • Conformity assessment — before going to market, the system must be assessed (self-assessed for most Annex III categories; third-party assessed for biometric systems)

Deployers (organisations that use a third-party high-risk AI system in their own context) have lighter but still real obligations:

  • Use the system strictly in accordance with the provider’s instructions
  • Assign appropriate human oversight
  • Monitor the system for unexpected behaviour and report issues to the provider
  • Retain logs for at least six months
  • Inform employees or affected persons that AI is being used

A company using an off-the-shelf AI recruitment platform is a deployer — it does not inherit all provider obligations. However, if you substantially modify a system, white-label it, or use it for materially different high-risk purposes than intended, you cross into provider territory.

The security requirement that often gets missed

Article 15 requires high-risk AI to be resilient against: adversarial inputs designed to manipulate outputs, data poisoning, model poisoning, and unauthorised third-party exploitation of system vulnerabilities.

This is not a documentation requirement. It requires the system to actually be resilient — and resilience must be demonstrated, not asserted. For providers building high-risk AI systems, this is the clause that turns security from a good practice into a legal obligation. If something goes wrong and you cannot show you tested for and addressed adversarial vulnerabilities, Article 15 exposure is real.


Level 3 — General Purpose AI (GPAI): The Model Provider’s Obligation

Deadline: 2 August 2025 (already in force) Penalty: up to €15M or 3% of global annual turnover

This section is for a different audience: organisations that train and make available AI models with general capabilities for others to build on.

If you use the OpenAI API to build a product, OpenAI is the GPAI provider — this chapter applies to them, not to you. The GPAI chapter (Articles 51–56) applies to organisations that train and distribute a model that can be used across a wide range of tasks — not to organisations using or fine-tuning existing models for a specific application.

Does fine-tuning make you a GPAI provider? Generally, no. A model fine-tuned for a specific narrow purpose (say, a legal document classifier or a domain-specific chatbot) typically no longer meets the definition of a “general purpose AI model” — which requires “significant generality” and capability across a wide range of distinct tasks. Fine-tuning with substantially less compute than the original training run, for a specific downstream use, is unlikely to meet this threshold. That said, if your organisation fine-tunes a model and continues distributing it for general use, you may still qualify as a GPAI provider for that modified model.

What all GPAI providers must do

  • Technical documentation — architecture, training methodology, data sources, evaluation results
  • Copyright compliance — EU training data provisions apply; no exemption exists for AI training
  • Summary of training data — a sufficiently detailed summary must be published and available for audit by the AI Office

The systemic risk tier

Models trained using more than 10²⁵ floating-point operations (FLOPs) are classified as GPAI with systemic risk. This threshold currently captures the largest frontier models. Additional obligations apply under Article 55:

Model evaluation Standardised assessment of capabilities, limitations, and risks — including risks that emerge from how the model interacts with real-world environments and downstream systems.

Adversarial testing (mandatory red-teaming) Article 55 explicitly requires adversarial testing to identify vulnerabilities, failure modes, and misuse potential. The test must cover how the model behaves under adversarial inputs — not just intended use. This is a legal requirement, not a recommendation.

Cybersecurity measures Adequate protection of the model and its infrastructure against cyberattacks, unauthorised modification, and exfiltration.

Serious incident reporting Incidents or circumventions involving systemic risk must be reported to the AI Office without undue delay.

What this means for AI developers below the systemic risk threshold

The adversarial testing mandate under Article 55 explicitly targets frontier-scale models. But the practical question for any AI developer is separate from the legal threshold: if you are building an AI-powered application — with a RAG pipeline, tool integrations, or an agentic workflow — do you know how it behaves under adversarial conditions?

This matters for three reasons that have nothing to do with whether you are above 10²⁵ FLOPs:

  1. High-risk AI obligations. If your application falls into a high-risk category, Article 15 requires the system to be resilient against adversarial manipulation. Demonstrating that resilience requires testing.

  2. Customer due diligence. Enterprise customers under NIS2 or DORA will increasingly require evidence of security assessment in vendor procurement. “We haven’t tested it” is not a viable answer.

  3. Liability exposure. If an AI system you deployed causes harm and you cannot show you assessed and addressed adversarial vulnerabilities, your legal exposure is significantly greater than if you can.

Traditional security testing was not designed for AI. Automated scanners do not test for prompt injection, indirect manipulation through retrieved documents, or tool call hijacking in agentic systems. AI security assessment requires a different methodology aligned with frameworks like OWASP LLM Top 10 (2025) and OWASP Top 10 for Agentic Applications — the technical standards emerging alongside EU AI Act implementation. See how I approach AI penetration testing →


Timeline cheat sheet

DateWhat enters force
✅ 1 Aug 2024Act entered into force
✅ 2 Feb 2025Prohibited AI practices banned
✅ 2 Aug 2025GPAI obligations apply; systemic risk requirements live
⏳ 2 Aug 2026High-risk AI (Annex III standalone systems): full compliance required
⏳ 2 Aug 2027Extended transition ends for high-risk AI embedded in legacy regulated products (Annex I)

The one thing most organisations get wrong

They treat the Act as a documentation exercise. Produce the paperwork, tick the conformity box, move on.

The Act is not primarily a documentation law. It is a risk law. Documentation is evidence that you identified, assessed, and managed risk. Article 15 does not ask you to file a report about cybersecurity — it asks whether your AI system can actually withstand adversarial conditions. That question requires testing, not writing.

The organisations that will struggle in 2026 are those that documented compliance without doing the underlying work. The ones that will be fine are those that used the Act as an opportunity to genuinely understand what they deployed — how it behaves, where it fails, and what an adversary could do with it.


Further reading & sources

Primary — official EU AI Act text

Official Commission guidance

Technical security frameworks referenced in this article


Petr Pospíšil is a Cyber Security Architect and vCISO. Questions about what the EU AI Act requires for your specific AI deployment? Book a free scoping call.

> Found this useful?

Let's talk about your organisation's security posture.

I work with SMEs across Europe on NIS2 compliance, penetration testing, and security strategy. No jargon, no overselling — just honest advice on what you actually need.