diff --git a/inbox/queue/2026-03-26-anthropic-activating-asl3-protections.md b/inbox/queue/2026-03-26-anthropic-activating-asl3-protections.md
index 566afb15..e7816299 100644
--- a/inbox/queue/2026-03-26-anthropic-activating-asl3-protections.md
+++ b/inbox/queue/2026-03-26-anthropic-activating-asl3-protections.md
@@ -37,7 +37,7 @@ ASL-3 protections were narrowly scoped: preventing assistance with extended, end
 
 **KB connections:**
 - [[voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints]] — this activation is an example of a unilateral commitment being maintained; note however that RSP v3.0 (February 2026) later weakened other commitments
-- [[AI lowers the expertise barrier for engineering biological weapons from PhD-level to amateur]] — the VCT trajectory is the evidence cited for this activation
+- AI lowers the expertise barrier for engineering biological weapons from PhD-level to amateur — the VCT trajectory is the evidence cited for this activation
 - [[safe AI development requires building alignment mechanisms before scaling capability]] — precautionary activation is an attempt at this sequencing
 
 **Extraction hints:** Two distinct claims worth extracting: (1) the precautionary governance principle itself ("uncertainty about threshold crossing triggers more protection, not less"), and (2) the structural limitation (self-referential accountability, no independent verification). The first is a governance innovation claim; the second is a governance limitation claim. Both deserve KB representation.