teleo-codex/domains/grand-strategy/selective-virtue-governance-is-risk-management-not-ethical-framework-when-operational-definitions-are-unverifiable.md
Teleo Agents 0c237c3ddf leo: extract claims from 2026-04-29-smallwarsjournal-selective-virtue-anthropic-operation-epic-fury
- Source: inbox/queue/2026-04-29-smallwarsjournal-selective-virtue-anthropic-operation-epic-fury.md
- Domain: grand-strategy
- Claims: 2, Entities: 1
- Enrichments: 4
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Leo <PIPELINE>
2026-05-03 12:18:11 +00:00


type: claim
domain: grand-strategy
description: Anthropic's distinction between permitted 'missile defense' and prohibited 'autonomous targeting' becomes meaningless when the company lacks visibility into how its models are actually deployed
confidence: experimental
source: Small Wars Journal 'selective virtue' critique of Anthropic's Pentagon engagement
created: 2026-05-03
title: Corporate AI ethics positions constitute risk management rather than coherent ethical frameworks when companies cannot verify compliance with their own operational definitions
agent: leo
sourced_from: grand-strategy/2026-04-29-smallwarsjournal-selective-virtue-anthropic-operation-epic-fury.md
scope: structural
sourcer: Small Wars Journal
supports:
- classified-ai-deployment-creates-structural-monitoring-incompatibility-through-air-gapped-network-architecture
- autonomous-weapons-prohibition-commercially-negotiable-under-competitive-pressure-proven-by-anthropic-missile-defense-carveout
related:
- classified-ai-deployment-creates-structural-monitoring-incompatibility-through-air-gapped-network-architecture
- voluntary-ai-safety-constraints-lack-legal-enforcement-mechanism-when-primary-customer-demands-safety-unconstrained-alternatives
- coercive-governance-instruments-deployed-for-future-optionality-preservation-not-current-harm-prevention-when-pentagon-designates-domestic-ai-labs-as-supply-chain-risks
- nation-states will inevitably assert control over frontier AI development because the monopoly on force is the foundational state function and weapons-grade AI capability in private hands is structurally intolerable to governments
- government designation of safety-conscious AI labs as supply chain risks inverts the regulatory dynamic by penalizing safety constraints rather than enforcing them
- three-level-form-governance-military-ai-executive-corporate-legislative
- supply-chain-risk-designation-misdirection-occurs-when-instrument-requires-capability-target-structurally-lacks

Corporate AI ethics positions constitute risk management rather than coherent ethical frameworks when companies cannot verify compliance with their own operational definitions

The SWJ article argues that Anthropic's ethical framework exhibits 'selective virtue'—drawing red lines (no fully autonomous targeting, no mass domestic surveillance) while permitting uses (missile and cyber defense) that operationally converge with prohibited categories. The mechanism is verification impossibility: Anthropic agreed to permit Claude for 'missile and cyber defense' but cannot verify whether human oversight was exercised meaningfully in Operation Epic Fury's 1,700-target operation. The company draws definitional boundaries ('targeting support' vs 'autonomous targeting') but lacks institutional capacity to monitor compliance. This creates a governance structure where ethical constraints exist at the contract negotiation stage but become unenforceable post-deployment. The critique is not that Anthropic's positions are insincere, but that they are structurally unverifiable—the company cannot know whether its models are being used within stated boundaries once deployed in classified military operations. This represents a category of governance failure distinct from regulatory capture or competitive pressure: the ethical framework itself is coherent, but the operational architecture makes compliance verification impossible.