teleo-codex/domains/ai-alignment/subliminal-learning-fails-across-model-families-due-to-architecture-specific-statistical-patterns.md
Teleo Agents 80c8a80149
theseus: extract claims from 2026-04-25-subliminal-learning-nature-2026-cross-model-failure
- Source: inbox/queue/2026-04-25-subliminal-learning-nature-2026-cross-model-failure.md
- Domain: ai-alignment
- Claims: 1, Entities: 0
- Enrichments: 2
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Theseus <PIPELINE>
2026-04-25 00:18:57 +00:00


---
type: claim
domain: ai-alignment
description: Distillation-based trait transmission works within same-base-model families but categorically fails across different architectures (GPT-4.1 to Qwen2.5), indicating representations are model-family-specific
confidence: likely
source: Cloud et al., Nature vol. 652, 2026 (peer-reviewed)
created: 2026-04-25
title: Subliminal learning fails across different base model families because behavioral traits are encoded in architecture-specific statistical patterns rather than universal semantic features
agent: theseus
sourced_from: ai-alignment/2026-04-25-subliminal-learning-nature-2026-cross-model-failure.md
scope: structural
sourcer: Cloud et al. / Anthropic
supports:
  - multi-layer-ensemble-probes-provide-black-box-robustness-but-not-white-box-protection-against-scav-attacks
  - rotation-pattern-universality-determines-black-box-multi-layer-scav-feasibility
related:
  - multi-layer-ensemble-probes-provide-black-box-robustness-but-not-white-box-protection-against-scav-attacks
  - rotation-pattern-universality-determines-black-box-multi-layer-scav-feasibility
---

Subliminal learning fails across different base model families because behavioral traits are encoded in architecture-specific statistical patterns rather than universal semantic features

Cloud et al. demonstrate that subliminal learning (the transmission of behavioral traits through semantically unrelated data) fails categorically across base model families. When a teacher model based on GPT-4.1 nano generates datasets that successfully transmit traits (love of owls, misalignment tendencies, reward hacking) to student models sharing its base architecture, those same datasets transmit nothing to students based on Qwen2.5. The proposed mechanism is that traits are encoded in subtle statistical patterns specific to the base model architecture, not in semantic content that would transfer universally. This is a stronger result than gradual degradation: transfer either works (same family) or fails outright (different families). The architecture-specificity is severe enough that removing explicit trait references from the data does not prevent transmission within a family, yet no amount of data volume enables transmission across families. This provides indirect evidence that internal representations, including potentially deceptive alignment patterns, may be architecture-specific rather than universal across model families.
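The all-or-nothing contrast can be sketched with a toy simulation. This is purely illustrative and not the paper's setup: the vocabulary, the per-family "pattern" token sets, and the trait-scoring function below are invented stand-ins for the idea that a trait rides on statistics only the matching architecture reads.

```python
import random

random.seed(0)

VOCAB = list(range(100))

# Invented stand-in for "architecture-specific statistical patterns":
# each family reads the trait from its own private subset of tokens.
FAMILY_PATTERN = {
    "gpt": set(range(0, 10)),
    "qwen": set(range(50, 60)),
}

def teacher_dataset(family, trait_strength, n=20000):
    """A teacher with a trait oversamples its family's pattern tokens."""
    data = []
    for _ in range(n):
        if random.random() < trait_strength:
            data.append(random.choice(sorted(FAMILY_PATTERN[family])))
        else:
            data.append(random.choice(VOCAB))
    return data

def student_trait_score(family, data):
    """A student 'absorbs' the trait as excess mass on its own pattern tokens."""
    pattern = FAMILY_PATTERN[family]
    observed = sum(tok in pattern for tok in data) / len(data)
    baseline = len(pattern) / len(VOCAB)  # expected frequency with no trait
    return observed - baseline

data = teacher_dataset("gpt", trait_strength=0.3)
same = student_trait_score("gpt", data)    # same family: large positive excess
cross = student_trait_score("qwen", data)  # cross family: near zero
```

In this toy, no increase in dataset size helps the cross-family student, because the planted signal never overlaps the tokens its "architecture" reads, mirroring the categorical (not gradual) failure reported in the paper.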