pipeline: archive 1 source(s) post-merge
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
---
type: source
title: "EU AI Act Annex III High-Risk Classification — Healthcare AI Mandatory Compliance by August 2, 2026"
author: "European Commission / EU Official Sources"
url: https://educolifesciences.com/the-eu-ai-act-and-medical-devices-what-medtech-companies-must-do-before-august-2026/
date: 2026-01-01
domain: health
secondary_domains: [ai-alignment]
format: regulatory document
status: processed
priority: high
tags: [eu-ai-act, regulatory, clinical-ai-safety, high-risk-ai, healthcare-compliance, transparency, human-oversight, belief-3, belief-5]
processed_by: vida
processed_date: 2026-03-23
extraction_model: "anthropic/claude-sonnet-4.5"
extraction_notes: "LLM returned 2 claims, 2 rejected by validator"
---

## Content

The EU AI Act (formally Regulation (EU) 2024/1689) establishes a risk-based classification for AI systems. Healthcare AI is classified as **high-risk** under Annex III and Article 6. The compliance timeline is phased:

**Key dates:**

- **August 1, 2024:** AI Act entered into force; obligations phase in over the following three years
- **February 2, 2025:** Prohibitions on unacceptable-risk AI practices began to apply
- **August 2, 2026:** Full Annex III high-risk AI system obligations apply to new deployments and significantly changed systems
- **August 2, 2027:** Obligations apply to high-risk AI embedded in products regulated under Annex I, such as AI safety components of medical devices
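
As a sanity check on the time pressure these dates create, here is a minimal Python sketch (purely illustrative; the reference date 2026-03-23 is this note's processed_date, and the deadline labels are shorthand for the stages above):

```python
from datetime import date

# Compliance dates taken from the key-dates list above.
DEADLINES = {
    "NHS DTAC V2 compliance": date(2026, 4, 6),
    "EU AI Act Annex III (new deployments)": date(2026, 8, 2),
    "EU AI Act 2027 stage": date(2027, 8, 2),
}

def days_remaining(today: date) -> dict[str, int]:
    """Days from `today` until each deadline; negative means past due."""
    return {name: (d - today).days for name, d in DEADLINES.items()}

# Reference date: this note's processed_date.
for name, days in days_remaining(date(2026, 3, 23)).items():
    print(f"{name}: {days} days")
```

Against the processing date, the NHS DTAC deadline is 14 days out and the Annex III deadline 132, which matches the urgency framing in the Agent Notes below.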

**Core obligations for healthcare AI (Annex III, effective August 2, 2026):**

1. **Risk management system** — must operate throughout the AI system's lifecycle, documented and maintained
2. **Mandatory human oversight** — "meaningful human oversight" is a core compliance requirement, not optional; it must be designed into the system, not merely stated in documentation
3. **Training data governance** — datasets must be "well-documented, representative, and sufficient in quality"; data governance documentation is required
4. **EU database registration** — high-risk AI systems must be registered in the public EU AI Act database before being placed on the EU market
5. **Transparency to users** — instructions for use, limitations, and performance characteristics must be disclosed
6. **Fundamental rights impact** — breaches of fundamental rights protections (including health equity and non-discrimination) must be reported

**For clinical AI tools (OE-type systems) specifically:**

- AI systems used as "safety components in medical devices or in healthcare settings" qualify as Annex III high-risk
- This likely covers clinical decision support tools deployed in clinical workflows (e.g., EHR-embedded tools such as OE's Sutter Health integration)
- The dataset documentation requirement effectively mandates disclosure of training data composition and governance
- The transparency requirement would mandate disclosure of performance characteristics — including safety benchmarks like NOHARM scores

**NHS England DTAC Version 2 (related UK standard):**

- Published: February 24, 2026
- Mandatory compliance deadline: April 6, 2026 (for all digital health tools deployed in the NHS)
- Covers both clinical safety and data protection
- UK-specific, but applies to any tool used in NHS clinical workflows

**Sources:**

- EU Digital Strategy official site: digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
- Orrick EU AI Act Guide: ai-law-center.orrick.com/eu-ai-act/high-risk-ai/
- Article 6 classification rules: artificialintelligenceact.eu/article/6/
- Educo Life Sciences compliance guide: educolifesciences.com (primary URL above)
- npj Digital Medicine analysis: nature.com/articles/s41746-024-01213-6

## Agent Notes

**Why this matters:** This is the most structurally important finding of Session 11. The EU AI Act creates the FIRST external regulatory mechanism that could force OE (and similar clinical AI tools) to: (a) document training data and governance, (b) disclose performance characteristics, and (c) implement meaningful human oversight as a designed-in system requirement. Market forces have not produced these disclosures despite accumulating research literature documenting four failure modes. The EU AI Act compliance deadline (August 2, 2026) gives OE a little over four months to come into compliance for European deployments. The NHS DTAC V2 deadline (April 6, 2026) is NOW — two weeks away.

**What surprised me:** The "meaningful human oversight" requirement is not defined as "physician can review AI outputs" (which is what OE's EHR integration currently provides) — it requires that human oversight be DESIGNED INTO THE SYSTEM. The Sutter Health integration's in-context automation bias (discussed in Session 10) may be structurally incompatible with "meaningful human oversight" as the EU AI Act defines it: if the EHR embedding is designed to present AI suggestions at decision points without friction, the design is optimized for the opposite of meaningful oversight.

**What I expected but didn't find:** No OE-specific EU AI Act compliance announcement. No disclosure of any EU market regulatory filing by OE. OE's press releases focus on US health systems (Sutter Health) and content partnerships (Wiley). If OE has EU expansion ambitions, the compliance clock is running.

**KB connections:**

- Directly relevant to Belief 5 (clinical AI safety): the regulatory track is the first external force that could bridge the commercial-research gap
- Connects to Belief 3 (structural misalignment): a regulatory mandate filling the gap where market incentives have failed — the attractor state for clinical AI safety may require regulatory catalysis, just as VBC requires payment model catalysis
- The "dataset documentation" and "transparency to users" requirements directly address the OE model opacity finding from Session 11
- Cross-domain: connects to Theseus's alignment work on AI governance and human oversight standards

**Extraction hints:** Primary claim: EU AI Act creates the first external regulatory mechanism requiring healthcare AI to disclose training data governance, implement meaningful human oversight, and register in a public database — effective August 2026 for European deployments. Confidence: proven (the law exists; the classification and deadline are documented). Secondary claim: the EU AI Act's "meaningful human oversight" requirement may be incompatible with EHR-embedded clinical AI that presents suggestions at decision points without friction — the design compliance question is live. Confidence: experimental (interpretation of regulatory requirements applied to a specific product design is legal inference, not settled law).

**Context:** This is a policy document, not a research paper. The extractable claims are about regulatory facts and structural implications. The EU AI Act is a live legislative obligation for any AI company operating in European markets — it's not a proposal or standard. The August 2026 deadline is fixed; only an exemption or amendment would change it.

## Curator Notes (structured handoff for extractor)

PRIMARY CONNECTION: The claim that healthcare AI safety risks are unaddressed by market forces — the EU AI Act is the regulatory counter-mechanism

WHY ARCHIVED: First external legal obligation requiring clinical AI transparency and human oversight design; creates a structural forcing function for what the research literature has recommended; the compliance deadline (August 2026) makes this time-sensitive

EXTRACTION HINT: Extract the regulatory facts (high-risk classification, compliance obligations, deadline) as proven claims. Extract the "meaningful human oversight" interpretation as experimental. The NHS DTAC V2 April 2026 deadline deserves a separate mention as the UK parallel. Note the connection to OE specifically as an inference — OE hasn't announced EU market regulatory filings, but any EHR integration in a European health system would trigger Annex III.

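The handoff above assumes the extractor can read the note's frontmatter fields (`status`, `priority`, `tags`, …). A minimal stdlib sketch of that step — an assumption about tooling, not the pipeline's documented implementation; a real extractor would use a YAML parser to handle list fields like `tags`:

```python
def read_frontmatter(text: str) -> dict[str, str]:
    """Return the flat 'key: value' pairs between the first two '---' markers."""
    _, fm, _body = text.split("---", 2)
    meta: dict[str, str] = {}
    for line in fm.strip().splitlines():
        key, sep, value = line.partition(":")
        if sep and key.strip():
            # Strip surrounding quotes from values like title/extraction_notes.
            meta[key.strip()] = value.strip().strip('"')
    return meta

# e.g. skip notes already marked processed:
# if read_frontmatter(note)["status"] == "processed": ...
```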
## Key Facts

- EU AI Act (Regulation 2024/1689) entered into force August 1, 2024; prohibitions on unacceptable-risk practices applied from February 2, 2025
- Annex III high-risk AI obligations effective August 2, 2026 for new deployments
- Obligations for high-risk AI embedded in Annex I regulated products (e.g., medical devices) effective August 2, 2027
- NHS DTAC Version 2 published February 24, 2026
- NHS DTAC Version 2 mandatory compliance deadline April 6, 2026
- Healthcare AI classified as high-risk under EU AI Act Annex III and Article 6
- EU AI Act requires public registration of high-risk AI systems in an EU database
- Training data must be "well-documented, representative, and sufficient in quality" under the EU AI Act
- Meaningful human oversight must be "designed into the system" per EU AI Act requirements