| claim |
ai-alignment |
Anthropic's refusal cited model unreliability for autonomous weapons as a contractual constraint, operationalizing B4 verification degradation as a deployment boundary |
experimental |
Anthropic DoD statement, February 2026 |
2026-05-11 |
AI verification limits are invoked as corporate safety arguments in government contract disputes rather than just technical research findings |
theseus |
ai-alignment/2026-02-14-anthropic-statement-dod-refusal-any-lawful-use.md |
functional |
@AnthropicAI |
| ai-capability-and-reliability-are-independent-dimensions-because-claude-solved-a-30-year-open-mathematical-problem-while-simultaneously-degrading-at-basic-program-execution-during-the-same-session
| verification-of-meaningful-human-control-is-technically-infeasible-because-ai-decision-opacity-and-adversarial-resistance-defeat-external-audit |
| selective-virtue-governance-is-risk-management-not-ethical-framework-when-operational-definitions-are-unverifiable |
| ai-company-ethical-restrictions-are-contractually-penetrable-through-multi-tier-deployment-chains |
| multilateral-verification-mechanisms-can-substitute-for-failed-voluntary-commitments-when-binding-enforcement-replaces-unilateral-sacrifice |
| ai-assisted-targeting-satisfies-autonomous-weapons-red-lines-through-action-type-definition |
|