| type | title | author | url | date | domain | secondary_domains | format | status | priority | tags | processed_by | processed_date | extraction_model | extraction_notes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| source | EU AI Act Article 43: Conformity Assessment is Mostly Self-Assessment, Not Independent Third-Party Evaluation | European Union / EU AI Act (euaiact.com) | https://www.euaiact.com/article/43 | 2024-07-12 | ai-alignment | legislation | null-result | | medium | | theseus | 2026-03-20 | anthropic/claude-sonnet-4.5 | LLM returned 1 claims, 1 rejected by validator |
## Content
Article 43 establishes conformity assessment procedures for high-risk AI systems (not GPAI; high-risk AI is a separate category covering uses such as medical devices, recruitment systems, and law enforcement).
Assessment structure (sketched as a decision rule after this list):
- For high-risk AI in Annex III point 1 (biometric identification): providers may choose between internal control (self-assessment) OR quality management system assessment with notified body involvement
- For high-risk AI in Annex III points 2-8 (all other categories): internal control (self-assessment) only; no notified body required
- Third-party notified body required only when (within Annex III point 1): harmonized standards don't exist, common specifications are unavailable, the provider hasn't fully applied the relevant standards, or standards were published with restrictions
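The routing above reduces to a small decision rule. A minimal sketch in Python, with assumptions flagged: the function and parameter names are ours, not the Act's; the four standards-related trigger conditions are collapsed into a single boolean for brevity; the Annex VI / Annex VII labels follow the Act's own numbering for the two procedures.

```python
from enum import Enum

class Procedure(Enum):
    INTERNAL_CONTROL = "internal control / self-assessment (Annex VI)"
    NOTIFIED_BODY = "quality-management assessment with notified body (Annex VII)"

def article_43_procedure(annex_iii_point: int,
                         standards_fully_applied: bool,
                         provider_opts_for_notified_body: bool = False) -> Procedure:
    """Sketch of the Article 43 routing described above; illustrative, not legal advice."""
    if annex_iii_point == 1:
        # Biometric identification: if harmonized standards don't exist, aren't
        # fully applied, or were published with restrictions, the notified-body
        # route is mandatory; otherwise the provider may choose either route.
        if not standards_fully_applied:
            return Procedure.NOTIFIED_BODY
        return (Procedure.NOTIFIED_BODY if provider_opts_for_notified_body
                else Procedure.INTERNAL_CONTROL)
    # Annex III points 2-8: internal control only, no notified body involved.
    return Procedure.INTERNAL_CONTROL
```

Note that every input except the point-1 branch falls through to self-assessment, which is the structural point the notes below develop.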
Notified bodies: Third-party conformity assessment organizations designated under the regulation. For law enforcement and immigration uses, the market surveillance authority acts as the notified body.
Key implication: For the vast majority of high-risk AI systems, Article 43 permits self-certification of compliance. The "conformity assessment" of the EU AI Act is predominantly a documentation exercise, not an independent evaluation.
Important distinction from GPAI: Article 43 governs high-risk AI systems (classification by use case); GPAI systemic risk provisions (Articles 51-56) govern models by training compute scale. These are different categories: the biggest frontier models may be systemic-risk GPAI without being classified as high-risk AI systems, and vice versa. The two operate under different regulatory regimes.
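To make the orthogonality concrete, a toy sketch (field names are assumptions for illustration, not the Act's terminology) treating the two classifications as independent flags:

```python
from dataclasses import dataclass

@dataclass
class EUAIActClassification:
    """Toy model: the two EU AI Act categories are independent axes, not a hierarchy."""
    high_risk: bool           # by use case (Annex III) -> Article 43 conformity assessment
    gpai_systemic_risk: bool  # by training-compute scale -> Articles 51-56 obligations

# A frontier general-purpose model: systemic-risk GPAI, not itself a high-risk system.
frontier_model = EUAIActClassification(high_risk=False, gpai_systemic_risk=True)

# A narrow recruitment-screening system: high-risk, nowhere near the GPAI threshold.
recruitment_screener = EUAIActClassification(high_risk=True, gpai_systemic_risk=False)
```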
## Agent Notes
Why this matters: Article 43 is frequently cited as the EU AI Act's "conformity assessment" mechanism, implying independent evaluation. In reality it's self-assessment for almost all high-risk AI, with third-party evaluation as an exception. This matters for understanding whether the EU AI Act creates the "FDA equivalent" that Brundage et al. say is missing. Answer: No, not through Article 43.
What surprised me: The simplicity of the answer. Article 43 ≠ FDA because it allows self-assessment for most cases. The path to any independent evaluation in the EU AI Act runs through Article 92 (compulsory AI Office evaluation), not Article 43 (conformity assessment). These are different mechanisms with different triggers.
What I expected but didn't find: Any requirement that third-party notified bodies verify the actual model behavior, as opposed to reviewing documentation. Even where notified bodies ARE required (Annex III point 1), their role appears to be quality management system review, not independent capability evaluation.
KB connections:
- Previous session finding from Brundage et al. (arXiv:2601.11699): AAL-1 (peak of current voluntary practice) still relies substantially on company-provided information. Article 43 self-assessment is structurally at or below AAL-1.
Extraction hints: This source is better used to CORRECT a potential misunderstanding than to make a new claim. The corrective claim: "EU AI Act conformity assessment under Article 43 primarily permits self-certification — third-party notified body review is the exception, not the rule, applying to a narrow subset of high-risk use cases when harmonized standards don't exist." The path to independent evaluation runs through Article 92, not Article 43.
Context: Article 43 applies to high-risk AI systems (Annex III list: biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice). GPAI models face a separate and in some ways more stringent regime under Articles 51-56 when they meet the systemic risk threshold.
## Curator Notes (structured handoff for extractor)
- PRIMARY CONNECTION: voluntary safety pledges cannot survive competitive pressure; self-certification under Article 43 has the same structural weakness as voluntary commitments, since labs certify their own compliance
- WHY ARCHIVED: corrects a common misreading of the EU AI Act as creating FDA-equivalent independent evaluation via Article 43; clarifies that independent evaluation runs through Article 92 (reactive), not Article 43 (conformity)
- EXTRACTION HINT: this is primarily a clarifying/corrective source; the extractor should check whether any existing KB claims overstate Article 43's independence requirements and note the Article 43 / Article 92 distinction
## Key Facts
- EU AI Act Article 43 governs conformity assessment for high-risk AI systems (Annex III categories)
- High-risk AI in Annex III points 2-8 use internal control (self-assessment) only
- High-risk AI in Annex III point 1 (biometric identification) may choose between internal control OR notified body assessment
- Third-party notified body review (within Annex III point 1) is required only when harmonized standards don't exist, common specifications are unavailable, the provider hasn't fully applied relevant standards, or standards were published with restrictions
- For law enforcement and immigration uses, the market surveillance authority acts as the notified body
- Article 43 applies to high-risk AI systems (classification by use case), distinct from GPAI systemic risk provisions (Articles 51-56) which govern models by training compute scale
- Article 92 provides compulsory AI Office evaluation as a separate mechanism from Article 43 conformity assessment