pipeline: archive 1 source(s) post-merge
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
---
type: source
title: "Ada Lovelace Institute Written Evidence to Lords Science & Technology Committee NHS AI Personalised Medicine Inquiry (GAI0086)"
author: "Ada Lovelace Institute"
url: https://committees.parliament.uk/writtenevidence/113850/html/
date: 2026-03-01
domain: health
secondary_domains: [ai-alignment]
format: policy-submission
status: processed
priority: medium
tags: [Lords-inquiry, NHS-AI, clinical-AI, governance, regulatory-capture, Ada-Lovelace-Institute, safety, UK, personalised-medicine]
flagged_for_theseus: ["Clinical AI governance submission from major UK AI safety institute — may be relevant to AI alignment domain on regulatory capture patterns"]
---

## Content

**Written evidence submitted by the Ada Lovelace Institute** (reference GAI0086) to the House of Lords Science and Technology Committee inquiry on "Innovation in the NHS: Personalised Medicine and AI."

**Inquiry context:**

- Launched: March 10, 2026
- Submissions deadline: April 20, 2026 (21 days from today's session)
- Committee framing: Why does the NHS struggle to ADOPT life sciences innovations? What systemic barriers prevent deployment?
- The framing is adoption-acceleration, not safety evaluation

**Ada Lovelace Institute submission framing:**

- "Welcoming the Committee's investigation of the current state of AI governance in the UK"
- Describes "a bird's eye view of the challenges at play"
- Frames the evidence around governance challenges, not just adoption barriers
- ALI's prior work includes "algorithmic impact assessment in healthcare" (a separate ALI project)

**Significance:**

The Ada Lovelace Institute is the UK's leading independent research institute on AI governance and ethics. Its submission framing ("AI governance," "challenges at play") is distinct from the pure adoption-acceleration framing that dominates the inquiry brief. This is the first confirmed submission from a safety-oriented institution in the inquiry record.

**What is NOT yet known (full submission not accessible):**

- Whether the ALI submission explicitly references the clinical AI failure mode literature (automation bias, de-skilling, NOHARM omission dominance)
- Whether the ALI recommends specific safety requirements or merely process improvements
- What specific governance challenges the submission identifies

**Note:** The April 20 deadline has not yet passed. More submissions are expected before the deadline.

## Agent Notes

**Why this matters:** Session 14 documented the Lords inquiry as framed in adoption-acceleration terms — a potential sixth institutional failure mode (regulatory capture). This submission from the Ada Lovelace Institute is evidence that the safety perspective IS entering the inquiry record, which complicates the "regulatory capture" framing. The claim that the Lords inquiry represents pure regulatory capture may need nuance: the framing is adoption-biased, but safety evidence is being submitted. The committee's final conclusions (expected months from now) will determine whether safety evidence was incorporated or sidelined.

**What surprised me:** The submission was filed BEFORE the April 20 deadline, suggesting ALI actively engaged with the inquiry rather than waiting until the deadline. The URL is directly accessible (committees.parliament.uk is open access), which means future sessions can read the full submission content.

**What I expected but didn't find:** Full submission text (not retrieved this session — the URL is accessible but the full content was not scraped). The follow-up priority is to READ the full submission content after April 20, when more submissions have arrived.

**KB connections:**

- [[healthcare AI regulation needs blank-sheet redesign because the FDA drug-and-device model built for static products cannot govern continuously learning software]] — ALI's governance framing is likely aligned with this claim
- Session 14 claim candidate: "Regulatory capture as sixth clinical AI institutional failure mode — coordinated global pattern Q1 2026" — this submission is a partial moderator

**Extraction hints:** Do NOT extract as a standalone claim. The full submission content is needed first. Archive now so the extractor knows:

1. The submission exists and is accessible
2. The framing is governance-oriented (moderates the "pure regulatory capture" claim)
3. After April 20, full submissions should be read and more definitive evidence extracted

**Context:** The Ada Lovelace Institute was founded in 2018 with Nuffield Foundation funding. It has become one of the most influential AI governance voices in the UK. It previously submitted evidence to the government's AI safety review. The fact that it has framed this submission around governance "challenges" rather than adoption barriers is consistent with its institutional mission.

## Curator Notes (structured handoff for extractor)

PRIMARY CONNECTION: Session 14 claim candidate on "regulatory capture as sixth institutional failure mode"

WHY ARCHIVED: First confirmed safety-oriented submission to the Lords inquiry, filed before the April 20 deadline. Moderates the pure "regulatory capture" framing — safety evidence is entering the record.

EXTRACTION HINT: Do not extract now. Read the full submission after April 20. The key question: does the ALI submission explicitly reference the clinical AI failure mode literature (automation bias, de-skilling, NOHARM)? If yes, that's a distinct extractable claim: "institutional acknowledgment of clinical AI failure modes reached Parliament via the Lords inquiry." If no, the submission is less notable.