EU AI Act Annex III: which financial services AI is actually high-risk?

By Scanlex  ·  30 March 2026  ·  9 min read

The EU AI Act is now in force. Annex III sets out the categories of AI systems classified as high-risk, and several of them map directly to the AI tools that regulated financial institutions are already using or building. Understanding whether your AI systems fall within scope is the first compliance step, and for many firms it is not as straightforward as it seems.

This article explains which AI systems in financial services are explicitly listed in Annex III, how the provider vs deployer distinction affects your obligations, what the AML fraud detection carve-out actually says, and what timeline you are working against.

What Annex III actually says about financial services AI

Annex III lists eight categories of high-risk AI. Three of them directly affect regulated financial institutions.

Point 5(b): AI in creditworthiness assessment and credit scoring
What it covers: AI systems used to evaluate the creditworthiness of natural persons or to establish their credit score. Applies to lending, BNPL, credit card and overdraft decisioning AI.
Scope: In scope

Point 5(c): AI in life and health insurance risk and pricing
What it covers: AI systems used for risk assessment and pricing decisions in life insurance and health insurance products.
Scope: In scope

Point 4(a): AI in employment and recruitment
What it covers: AI used to screen, filter, rank or select job candidates. Applies to HR tech and ATS providers whose tools process EU-based candidates.
Scope: In scope

AML fraud detection AI
What it covers: AI used specifically for fraud detection and AML transaction monitoring. Carved out of Annex III high-risk classification by the financial-fraud exception in point 5(b).
Scope: Carved out

General AI features (chatbots, summarisation, recommendations)
What it covers: AI tools that do not make decisions affecting natural persons in the listed contexts. Subject only to transparency obligations under Article 50 where they interact with natural persons.
Scope: Limited risk only

The boundary question that matters most for fintechs: Many fintech AI systems touch both credit scoring and fraud detection. The carve-out for AML fraud detection AI does not automatically cover a system that also scores creditworthiness. If the same model influences both a fraud flag and a credit decision, the credit scoring function brings it into Annex III scope. The boundary requires legal analysis, not a general assumption that "fraud AI is exempt."
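The boundary logic above can be sketched as a simple triage, purely as an illustration. The class and field names below are our own simplifications, not terms from the Act, and a real classification requires legal analysis of the specific system:

```python
from dataclasses import dataclass


@dataclass
class AISystemProfile:
    """Simplified, hypothetical profile of an AI system's functions."""
    scores_creditworthiness: bool        # influences a credit decision (point 5(b))
    prices_life_health_insurance: bool   # insurance risk/pricing (point 5(c))
    screens_job_candidates: bool         # recruitment screening (point 4(a))
    fraud_detection_only: bool           # used *specifically* for AML/fraud detection


def triage(profile: AISystemProfile) -> str:
    # Any in-scope function pulls the whole system into Annex III,
    # even if the same model also emits a fraud flag.
    if (profile.scores_creditworthiness
            or profile.prices_life_health_insurance
            or profile.screens_job_candidates):
        return "high-risk (Annex III)"
    if profile.fraud_detection_only:
        return "carved out (fraud/AML exception)"
    return "limited risk or out of scope"


# A model that flags fraud AND influences credit decisions is not exempt:
mixed = AISystemProfile(True, False, False, False)
print(triage(mixed))  # → high-risk (Annex III)
```

The point of the sketch is the ordering: the in-scope checks come first, so the fraud-detection branch is only reached when the system does nothing else that Annex III lists.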

Provider vs deployer: why the distinction matters

The EU AI Act divides obligations between two roles: providers (who develop and place AI systems on the market) and deployers (who use third-party AI systems in a professional context). The obligations are significantly different.

Provider obligations (Chapter III)

You are a provider if you develop an AI system and place it on the EU market under your own name, even if built on a third-party foundation model. Full Chapter III obligations apply: technical documentation (Annex IV), quality management system (Article 17), conformity assessment, EU database registration, and ongoing post-market monitoring (Article 72).

Deployer obligations (Article 26)

You are a deployer if you use a third-party AI system in a professional context, such as a vendor credit tool or an external ATS. Obligations are lighter but significant: human oversight procedures, logging and record-keeping, Fundamental Rights Impact Assessment under Article 27 (for credit and insurance AI), staff AI literacy training, and notification to affected workers (for HR AI).

The provider/deployer boundary is not always obvious. Fine-tuning a third-party model on your own data, substantially configuring an AI tool, or combining multiple AI components into a system you place on the market under your name may make you a provider under Article 25, regardless of whether the underlying components were built by someone else. This determination requires legal analysis applied to the specific facts of your AI development and deployment.
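As a rough illustration of that role logic (the flag names are ours, a heavy simplification of Articles 3, 25 and 26, and no substitute for legal analysis):

```python
def likely_role(develops_system: bool,
                places_on_market_under_own_name: bool,
                substantially_modifies_third_party_system: bool) -> str:
    """Hedged sketch of the provider/deployer boundary described above."""
    if develops_system and places_on_market_under_own_name:
        return "provider (Chapter III)"
    if substantially_modifies_third_party_system or places_on_market_under_own_name:
        # Article 25: fine-tuning, substantial configuration or rebranding
        # can make a firm the provider of the resulting system.
        return "likely provider under Article 25"
    return "deployer (Article 26)"


print(likely_role(False, False, True))   # → likely provider under Article 25
print(likely_role(False, False, False))  # → deployer (Article 26)
```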

The AML fraud detection carve-out: what it actually says

The carve-out sits in the text of Annex III point 5(b) itself, which excepts AI systems used for the purpose of detecting financial fraud, with Recital 58 explaining the rationale. The effect is that AI systems used specifically for detecting financial fraud and for AML transaction monitoring are not classified as high-risk under Annex III point 5. This carve-out is widely referenced but frequently misapplied.

What the carve-out covers: AI transaction monitoring systems, AI-assisted sanctions screening, and AI fraud detection tools used purely in the context of AML/CFT compliance functions.

What the carve-out does not cover:

The carve-out requires that the system is used specifically for AML/fraud detection, not that it incidentally touches AML or fraud. A system with multiple outputs, only one of which is a fraud flag, is not automatically carved out on the basis of that one output.

What high-risk classification means: the full obligation set

For a system classified as high-risk under Annex III, the following obligations apply before deployment (or before continued operation if the system was already deployed when the obligations came into force):

Risk management system (Article 9)
Data and data governance (Article 10)
Technical documentation (Article 11 and Annex IV)
Record-keeping and logging (Article 12)
Transparency and instructions for use (Article 13)
Human oversight (Article 14)
Accuracy, robustness and cybersecurity (Article 15)
For providers: quality management system, conformity assessment and EU database registration (Articles 17, 43 and 49)

The enforcement timeline

When obligations apply

The EU AI Act entered into force on 1 August 2024. Article 5 (prohibited AI practices) and Article 4 (AI literacy) obligations applied from 2 February 2025.

Annex III high-risk obligations (the full Chapter III compliance set) apply from 2 August 2026. For AI systems already in operation at that date, providers and deployers must have completed conformity assessments and produced the required documentation by then or face enforcement.

Post-market monitoring and incident reporting obligations (Articles 72-73) also apply from August 2026 and are ongoing.

August 2026 is not a distant deadline. Building a complete Annex III compliance programme, including technical documentation, QMS, FRIA, and conformity assessment preparation, typically takes 8 to 16 weeks once a decision to proceed is made. Firms that have not started the classification and scoping process have less runway than they may assume.
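The runway arithmetic is easy to check. Taking the article's publication date as an illustrative start and the 2 August 2026 application date as the deadline:

```python
from datetime import date

# 2 August 2026: the date Annex III high-risk obligations apply.
DEADLINE = date(2026, 8, 2)


def weeks_of_runway(start: date) -> float:
    """Weeks remaining between a start date and the Annex III deadline."""
    return (DEADLINE - start).days / 7


# Starting on the article's publication date (30 March 2026):
print(round(weeks_of_runway(date(2026, 3, 30))))  # → 18
```

About 18 weeks against a programme that typically takes 8 to 16 weeks: feasible, but only if classification and scoping start immediately.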

The GDPR and AML interaction: why integrated advice matters

For regulated financial institutions, the EU AI Act does not exist in isolation. Credit scoring AI interacts with GDPR Article 22 (automated individual decision-making). Insurance pricing AI interacts with GDPR legitimate interest assessments and DORA operational resilience requirements. AML transaction monitoring AI sits at the boundary of the Annex III carve-out and must be assessed against the specific functions the system performs.

Handling these frameworks through separate advisors (one for the AI Act, one for GDPR, one for AML) creates gaps at every seam. A classification opinion that ignores the GDPR interaction is incomplete, and an AI Act FRIA conducted without understanding the AML implications of the system being assessed will miss material obligations.

AI compliance advisory for regulated financial institutions

Scanlex provides AI classification opinions, FRIA and cross-regulatory advisory for regulated financial institutions using AI in credit decisions, insurance pricing or HR screening. Integrated with your existing AML and GDPR framework. Written classification opinion delivered within two weeks of engagement confirmation.

View AI Compliance service →

For regulated firms that also need AML compliance support alongside AI compliance advisory, see our KYC / ODD Team Outsourcing and AML Audit and Advisory services.
