For regulated financial institutions using AI in credit decisions, insurance pricing, or HR screening, the EU AI Act is a compliance obligation that sits on the same desk as your AML and GDPR frameworks. We classify your AI systems, determine your obligations as a provider or deployer, and deliver the documentation your firm needs to be legally defensible.
Annex III high-risk classification is defined by impact on natural persons in specific contexts. If you build or deploy AI in any of these sectors, you are likely a provider or deployer with active legal obligations now.
Credit scoring and creditworthiness AI is explicitly listed under Annex III, point 5(b). Any fintech using AI to determine loan eligibility is likely a high-risk provider or deployer.
Annex III, Point 5(b)

AI for risk assessment and pricing in life and health insurance falls under Annex III, point 5(c). Both providers and deployers carry full obligations including a mandatory FRIA under Article 27.
Annex III, Point 5(c) · FRIA mandatory

AI used to screen, score, or filter candidates falls under Annex III, point 4(a). If you build or deploy AI-powered ATS or candidate ranking tools, your compliance window is open now.
Annex III, Point 4(a)

RegTech companies building AI for regulated clients are providers with full Chapter III obligations. Enterprise procurement increasingly demands AI Act compliance evidence as a condition of contract.
Chapter III Provider Obligations

Under MiCA and growing investor scrutiny, AI governance is a due diligence standard. If you have existing AML/compliance infrastructure, AI Act readiness is a natural and urgent extension.
MiCA + AI Act · Investor DD driver

The EU AI Act applies extraterritorially: it applies where AI outputs affect EU residents, not where the provider is based. US, UK, and APAC companies with EU-facing AI products are fully in scope.
Article 2 · Extraterritorial scope · Article 22 Authorised Rep

Most companies act when a specific commercial pressure hits: a funding round requiring AI governance evidence, an enterprise contract conditional on compliance, or a product launch with a regulatory deadline. These are the situations where we can help fastest.
If you develop an AI system and place it on the EU market under your own name, even if built on a third-party model, you are a provider. Full Chapter III obligations apply: technical documentation, QMS, conformity assessment, EU database registration.
If you use a third-party AI system in a professional context (an AI credit tool, a vendor's ATS), you are a deployer. Lighter but still significant obligations apply: human oversight, logging, a FRIA (for credit and insurance AI), and staff notification.
The distinction is not always obvious. Fine-tuning or substantially modifying a third-party model may make you a provider under Article 25, and the documented classification assessment required under Article 6(4) calls for legal judgment, not a template.
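The provider/deployer distinction described above is, at its core, a decision procedure. The sketch below is purely illustrative: a first-pass triage heuristic with hypothetical function and parameter names, not legal advice and not how any classification opinion is actually produced. Real classification under Articles 3, 6(4), and 25 requires legal judgment.

```python
# Illustrative only. All names are hypothetical; this encodes the rough
# triage logic described in the text, not a legal determination.

def triage_role(places_on_eu_market_under_own_name: bool,
                substantially_modifies_third_party_model: bool,
                uses_third_party_system_professionally: bool) -> str:
    """Return a first-pass role label for a single AI system."""
    if places_on_eu_market_under_own_name or substantially_modifies_third_party_model:
        # Own-name market placement, or substantial modification of a
        # third-party model (Article 25), points toward provider status.
        return "likely provider: full Chapter III obligations; seek a legal opinion"
    if uses_third_party_system_professionally:
        # Professional use of someone else's system points toward deployer
        # status (Article 26; Article 27 FRIA for credit and insurance AI).
        return "likely deployer: Article 26 obligations; seek a legal opinion"
    return "unclear: a classification opinion is needed"
```

Note that the interesting cases are exactly the ones this sketch cannot resolve: fine-tuning, configuration, and white-labelling sit on the boundary between the first two branches.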
Get a classification opinion

Series B/C funding rounds and M&A transactions increasingly require AI governance evidence. Non-compliance discovered in due diligence can block or reprice a round. This is the strongest commercial driver we see.
A large customer sends a vendor questionnaire requiring EU AI Act compliance evidence. This is deal-blocking and immediate. We have rapid-response offerings for exactly this situation.
Launching an AI-powered product in a regulated context without prior classification and compliance is a legal exposure. Pre-launch compliance review is a non-negotiable entry condition for Annex III systems.
The AI Act applies wherever outputs affect EU residents. US, UK, and APAC companies expanding into the EU need scope assessment, compliance implementation, and potentially an authorised representative under Article 22.
Contact from a national competent authority is an emergency. We provide rapid-response gap assessment and remediation support with priority response times.
When the board directs legal or compliance to address AI regulatory risk, you need structured deliverables: a classification opinion, gap report and governance framework. Not internal research.
A bias finding, a data error, or an AI-driven decision that harms a customer. Post-incident, Article 73 reporting obligations trigger and a documented remediation plan becomes legally necessary.
Use the free EU AI Act Compliance Checker from the Future of Life Institute as a first orientation, then come to us for the legally defensible written opinion. Open the free tool →
They understood our AI systems technically and translated that into a regulatory programme our board could approve and our engineers could execute. We were audit-ready within eleven weeks of engagement.
Law firm rates typically start at €500/hour and their output is legal opinion: valuable, but not the operational compliance programme your organisation needs to implement. We deliver fixed-fee, board-ready programmes at a price point accessible to scale-ups and mid-market companies.
Credit scoring, insurance pricing AI, recruitment systems, and AI in essential services fall under Annex III high-risk categories. Full conformity assessment is mandatory before deployment or continued operation. Fraud detection AI is explicitly excluded, but the boundary requires legal analysis.
Regulated-sector companies using AI don't face a single regulation; they face a stack of overlapping frameworks, each with its own obligations, timelines, and enforcement bodies. Most advisory firms work on one framework at a time. That creates gaps between advisors, conflicting controls, and compliance work done twice.
We handle EU AI Act, GDPR, AML, DORA, DPFT, and MiCA as a single integrated engagement, mapping obligations across every relevant framework, consolidating overlapping work, and producing a unified compliance architecture, not a collection of siloed reports.
For fintech and crypto clients in particular: your AI systems touch AML transaction monitoring, GDPR data flows, DORA operational resilience, and EU AI Act classification simultaneously. Handling these separately creates legal exposure at every seam. We close those seams.
Automated decision-making (Art. 22), DPIAs (Art. 35), lawful basis for AI training data, data minimisation. GDPR obligations run alongside and partially overlap with EU AI Act FRIA and transparency requirements.
AML fraud detection AI is explicitly carved out of Annex III high-risk classification, but this boundary requires legal analysis. AML obligations interact with AI Act data governance and human oversight requirements.
DORA's ICT risk management, third-party oversight, and operational resilience requirements overlap significantly with EU AI Act QMS (Art. 17), post-market monitoring (Art. 72), and incident reporting (Art. 73).
Sector-specific data protection in financial services (PSD2, open banking, national supervisory guidance) interacts with both GDPR and EU AI Act transparency and data governance requirements.
As crypto firms professionalise under MiCA, AI governance is becoming an investor and regulatory expectation. MiCA operational requirements complement AI Act obligations for crypto firms using AI in trading or risk assessment.
ISO 42001 certification is increasingly requested by enterprise procurement alongside EU AI Act compliance. We build governance frameworks that simultaneously satisfy ISO 42001 and EU AI Act QMS obligations.
Not sure which frameworks apply? We map your full regulatory stack in the initial scoping call, at no cost.
Book Free Call

Most organisations approaching EU AI Act compliance for the first time have reasonable-sounding assumptions about what the work involves. Almost all underestimate it, not from negligence, but because the regulation is technically and legally demanding in ways that only become clear on close reading. These are the gaps we close in every engagement.
Most teams assume AI Act compliance is a tick-box exercise: fill in a form, confirm a few answers, done.
Under Article 6(4), any provider who considers their Annex III system is not high-risk must document that assessment before placing it on the market. An incorrect classification is itself a compliance violation.
Article 6(4) · Annex III

A widespread assumption, and one that creates real legal exposure. Teams believe that using a third-party AI product passes all obligations to the provider.
Article 26 requires deployers to assign human oversight, maintain logs for at least 6 months, monitor the system, and notify affected persons. For credit and insurance AI, deployers must also conduct a FRIA under Article 27, notified to the national authority.
Article 26 · Article 27 (FRIA)

Teams with existing GDPR programmes often assume their DPIAs carry over. A logical assumption, but legally incorrect.
The FRIA under Article 27 covers fundamental rights beyond data protection: discrimination risks, access to services, democratic rights. A GDPR DPIA partially satisfies FRIA requirements but not fully. Both must exist as distinct documents.
Article 27 · GDPR Art. 35

Many teams have heard about the proposed Digital Omnibus delay and are using it to defer compliance planning entirely.
COM(2025) 836 proposed extending Annex III obligations to December 2027, but as of March 2026 it has not been adopted. Even if it passes, it extends the window; it does not remove the obligation. Commercial deadlines from investors and procurement exist regardless.
COM(2025) 836 · Current deadline: Aug 2026

Teams often assume internal-facing AI tools are outside scope because end users don't interact with them directly.
AI used to assess creditworthiness, determine insurance pricing, or screen job candidates is high-risk regardless of whether the affected person interacts with the AI directly.
Annex III · Article 6

Compliance teams often propose a board AI policy as the primary deliverable: a reasonable starting point, but one that satisfies none of the legal obligations under Articles 9, 17, or Annex IV.
Article 9 requires a documented, iterative risk management system. Article 17 requires a full QMS. Annex IV specifies a detailed technical documentation file per system, required before market placement. A board AI policy satisfies none of these.
Article 9 · Article 17 · Annex IV

Recognise any of these conversations? These are the assumptions we address in every initial scoping call, before any engagement begins. The free call costs nothing. Understanding your actual legal position is the only way to build a compliance programme that holds up.
Book the Free Call

Every engagement begins with a complimentary 30-minute scoping call. We will tell you candidly which programme fits your situation; no obligation to proceed. Minimum engagement €5,500.
Audit of current AI practices against Article 5 prohibited AI and the Article 4 AI literacy obligation, both already in force.
Written legal classification opinion per AI system, provider vs. deployer role, Annex III scope, and regulatory obligations matrix.
End-to-end Annex III compliance for high-risk AI providers and deployers, from classification through to conformity declaration.
Ongoing AI compliance governance, regulatory watch, post-market monitoring, incident support, and investor inquiry responses.
Every regulated-sector AI profile has a different mix of Annex III categories, provider vs. deployer status, and cross-regulatory interactions. This index maps the key obligations by sector.
Credit scoring and creditworthiness AI is explicitly listed under Annex III, point 5(b). Both providers (building the AI) and deployers (using a vendor's credit tool) have mandatory obligations. Deployers must conduct a FRIA and maintain logs for at least 6 months. Fraud detection AI has a specific carve-out, but the boundary is not self-evident and requires legal analysis to confirm.
AI for risk assessment and pricing in life and health insurance falls under Annex III, point 5(c). Both providers and deployers carry full obligations including the mandatory Fundamental Rights Impact Assessment (FRIA) under Article 27.
AI used to screen, filter, or rank job candidates falls under Annex III, point 4(a). Any ATS with AI-driven candidate scoring is likely a high-risk provider. Employers deploying such tools are deployers with Article 26 obligations including mandatory worker notification.
RegTech companies building AI-powered compliance tools for regulated clients become providers with full Chapter III obligations. Enterprise procurement increasingly requires AI Act compliance evidence as a direct commercial driver.
Most crypto AI (AML/fraud detection) is carved out of Annex III high-risk, but MiCA compliance culture and investor due diligence are driving AI governance spend. We position AI Act readiness as an extension of your existing AML compliance architecture.
AIFMs and regulated investment funds increasingly use AI in portfolio screening, investor suitability assessment and risk monitoring. Suitability and creditworthiness AI under Annex III point 5(b) may apply depending on the tool and deployment context. AIFMD and MiFID II obligations also interact with AI Act data governance and transparency requirements for fund managers using AI in client-facing decisions.
The EU AI Act is extraterritorial: it applies where AI outputs affect EU residents, regardless of where the company is based. US, UK, Israeli, and APAC companies with EU-facing AI products are in scope. We handle scope assessment, compliance implementation, and EU authorised representative appointments under Article 22.
SaaS platforms embedding AI features into products used by regulated clients in credit, insurance or HR contexts may be providers under Article 3(3), regardless of whether they market the tool explicitly as AI. Enterprise procurement from regulated clients now routinely requires AI Act compliance evidence as a contract condition.
A structured four-phase engagement designed for speed without shortcuts. Most mandates achieve initial compliance readiness within 8 to 12 weeks. All outputs are immediately usable as regulatory evidence and investor-grade documentation.
Map your AI systems, intended purposes, data flows, and full regulatory exposure. Determine provider vs. deployer status. Identify Annex III scope and immediate obligations.
Written classification opinion per AI system. Full gap analysis against applicable obligations ranked by legal severity, with regulatory citations and owner assignments.
Embedded alongside your legal, compliance, and engineering teams. We draft Annex IV technical documentation, design the QMS and risk management system, prepare FRIA where applicable, and deliver staff AI literacy training.
Conformity declaration, EU database registration, regulatory watch programme, and board-level reporting. Ongoing post-market monitoring (Art. 72) and incident support (Art. 73).
Our Series B investor asked for AI governance evidence six weeks before close. The team produced a classification opinion, gap analysis, and interim governance framework in three weeks. The diligence process completed without an AI Act condition. That engagement paid for itself many times over.
We were uncertain whether our AI pricing tool made us a provider or a deployer. That classification question was blocking our entire compliance programme. A written legal opinion was delivered in two weeks and gave our legal team the foundation to proceed.
A large enterprise client required EU AI Act compliance evidence as a procurement condition. We had eight weeks. The team scoped, classified, and produced the required documentation on time. We retained the contract.
Can't find your answer? Contact us directly; we respond within one business day.
Send us a brief message and a senior advisor will respond within one business day, with an honest assessment of what you need and what it will cost. No sales team. No obligation.
EU-based KYC and ODD analyst teams for EMIs, payment institutions and fintechs. Operational in two weeks. Fixed monthly rate, no placement fees.
View service →

Interim, fractional and licence-stage compliance officer provision. MLRO for licence applications, gap cover after departure, Deputy MLRO resilience.
View service →

Independent AML effectiveness audit and programme design. Written findings, risk-rated recommendations. Fixed fee agreed before engagement starts.
View service →

AML programme build and KYC teams for crypto CASPs under MiCA. Travel Rule implementation, AMLCO support and NCA pre-registration preparation.
View service →