teleo-codex/inbox/queue/2026-03-26-aisle-openssl-zero-days.md

---
type: source
title: "AISLE Autonomously Discovers All 12 Vulnerabilities in January 2026 OpenSSL Release Including 30-Year-Old Bug"
author: AISLE Research
url: https://aisle.com/blog/aisle-discovered-12-out-of-12-openssl-vulnerabilities
date: 2026-01-27
domain: ai-alignment
secondary_domains: cyber-capability
format: blog
status: unprocessed
priority: high
tags:
  - cyber-capability
  - autonomous-vulnerability-discovery
  - zero-day
  - OpenSSL
  - AISLE
  - real-world-capability
  - benchmark-gap
  - governance-lag
---

## Content

AISLE (AI-native cyber reasoning system) autonomously discovered all 12 new CVEs in the January 2026 OpenSSL release. Coordinated disclosure on January 27, 2026.

What AISLE is: An autonomous security analysis system that handles the full loop: scanning, analysis, triage, exploit construction, patch generation, and patch verification. Humans choose targets and provide high-level supervision; the vulnerability discovery itself is fully autonomous.
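
A minimal sketch of what such a loop might look like, purely to fix ideas. Every type, function, and stage boundary below is hypothetical; AISLE has not published its internal interfaces:

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical sketch of the autonomous loop described above.
 * All names are invented for illustration. */

typedef struct {
    const char *target;   /* codebase chosen by a human operator */
    int findings;         /* candidate vulnerabilities surviving triage */
    bool exploited;       /* a proof-of-concept reproduced a bug */
    bool verified;        /* a candidate patch passed verification */
} run_t;

static void scan_and_analyze(run_t *r) { r->findings = 12; }  /* stub */
static void triage(run_t *r)           { (void)r; }  /* drop false positives */
static void build_exploits(run_t *r)   { r->exploited = r->findings > 0; }
static void patch_and_verify(run_t *r) { r->verified = r->exploited; }

int main(void) {
    run_t r = { .target = "openssl" };  /* the only human input */
    scan_and_analyze(&r);               /* autonomous from here on */
    triage(&r);
    build_exploits(&r);
    patch_and_verify(&r);
    printf("%s: %d findings, patch verified: %s\n",
           r.target, r.findings, r.verified ? "yes" : "no");
    return 0;
}
```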

What they found:

- 12 new CVEs in OpenSSL, one of the most audited codebases on the internet (used by 95%+ of IT organizations globally)
- CVE-2025-15467: HIGH severity, stack buffer overflow in CMS AuthEnvelopedData parsing, potential remote code execution (the bug class is sketched after this list)
- CVE-2025-11187: missing PBMAC1 validation in PKCS#12
- 10 additional LOW severity CVEs: QUIC protocol, post-quantum signature handling, TLS compression, cryptographic operations
- CVE-2026-22796: inherited from SSLeay (Eric Young's original SSL library from the 1990s), a bug that survived 30+ years of continuous human expert review
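
A minimal, self-contained illustration of the HIGH-severity bug class named above: a stack buffer overflow in a length-prefixed parser. This is C written for this note, not the actual OpenSSL CMS AuthEnvelopedData code; the buffer size and field names are invented:

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define TAG_BUF_LEN 16  /* fixed-size stack buffer, hypothetical */

/* Vulnerable pattern: trusts the attacker-controlled length field. */
int parse_tag_vulnerable(const uint8_t *msg, size_t msg_len) {
    uint8_t tag[TAG_BUF_LEN];
    if (msg_len < 1)
        return -1;
    size_t tag_len = msg[0];        /* attacker-controlled */
    if (msg_len < 1 + tag_len)
        return -1;
    memcpy(tag, msg + 1, tag_len);  /* overflows when tag_len > TAG_BUF_LEN */
    (void)tag;
    return 0;
}

/* Fixed pattern: also bound the copy by the destination size. */
int parse_tag_fixed(const uint8_t *msg, size_t msg_len) {
    uint8_t tag[TAG_BUF_LEN];
    if (msg_len < 1)
        return -1;
    size_t tag_len = msg[0];
    if (tag_len > sizeof(tag) || msg_len < 1 + tag_len)
        return -1;                  /* reject oversized tag fields */
    memcpy(tag, msg + 1, tag_len);
    (void)tag;
    return 0;
}

int main(void) {
    uint8_t evil[64] = { 60 };      /* declares a 60-byte tag field */
    printf("fixed parser: %d\n", parse_tag_fixed(evil, sizeof(evil)));
    /* parse_tag_vulnerable(evil, sizeof(evil)) would smash the stack */
    return 0;
}
```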

AISLE proposed patches that were incorporated into 5 of the 12 official fixes. OpenSSL Foundation CTO Tomas Mraz noted the "high quality" of AISLE's reports.

Combined with 2025 disclosures, AISLE discovered 15+ CVEs in OpenSSL over the 2025-2026 period.

Secondary source — Schneier on Security: "We're entering a new era where AI finds security vulnerabilities faster than humans can patch them." Schneier characterizes this as "the arms race getting much, much faster."

## Agent Notes

Why this matters: OpenSSL is the most audited open-source codebase in security — thousands of expert human eyes over 30+ years. Finding a 30-year-old bug that human review missed, and doing so autonomously, is a strong signal that AI autonomous capability in the cyber domain is running significantly ahead of what governance frameworks track. METR's January 2026 evaluation put GPT-5's 50% time horizon at 2h17m — far below catastrophic risk thresholds. This finding happened in the same month.
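
For context on the metric: the 50% time horizon is the human task duration at which a fitted model predicts a 50% success rate for the AI. A sketch of the formulation, assuming METR's published logistic-fit methodology (parameter names are ours):

```latex
% t is the time a human expert needs for a task; \sigma is the
% logistic function; \beta_0, \beta_1 are fit to eval outcomes.
p(\mathrm{success} \mid t) = \sigma\big(\beta_0 - \beta_1 \ln t\big),
\qquad
t_{50} = e^{\beta_0 / \beta_1} \quad \text{(where } p = \tfrac{1}{2}\text{)}
```

On this reading, 2h17m is the task length at which GPT-5 succeeds about half the time across a broad task distribution; a low general horizon can coexist with superhuman performance on a narrow task like vulnerability discovery, which is the benchmark gap this note tags.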

What surprised me: The CVE-2026-22796 finding — a 30-year-old bug. This isn't a capability benchmark; it's operational evidence that AI can find what human review has systematically missed. The fact that AISLE's patches were accepted into the official codebase (5 of 12) is verification that the work was high quality, not just automated noise.

What I expected but didn't find: Any framing in terms of AI safety governance. The AISLE blog post and the coverage treat this as a cybersecurity success story. The governance implications, that autonomous zero-day discovery capability is now a deployed product while governance frameworks haven't incorporated this threat/capability level, aren't discussed.

KB connections:

Extraction hints: "AI autonomous vulnerability discovery has surpassed 30 years of cumulative human expert review in the world's most audited codebase" is a strong factual claim candidate. The governance implication, that formal AI safety threshold frameworks had not classified this capability level as reaching dangerous autonomy thresholds despite its operational deployment, is a distinct claim worth extracting separately.

Context: AISLE is a commercial cybersecurity company. Its disclosure was coordinated with the OpenSSL Foundation (standard responsible disclosure process), suggesting the discovery was legitimate and the system isn't being used offensively. The defensive framing matters for intent but not for capability: autonomous zero-day discovery is the same capability whether used offensively or defensively.

## Curator Notes

PRIMARY CONNECTION: AI lowers the expertise barrier for engineering biological weapons from PhD-level to amateur, which makes bioterrorism the most proximate AI-enabled existential risk.

WHY ARCHIVED: Real-world evidence that an autonomous dangerous capability (zero-day discovery in a maximally audited codebase) is deployed at scale while formal governance frameworks evaluate current frontier models as below catastrophic capability thresholds; the clearest instance of the governance-deployment gap.

EXTRACTION HINT: The 30-year-old bug finding is the narrative hook, but the substantive claim is about governance miscalibration: operational autonomous offensive capability is present and deployed while governance frameworks classify current models as far below concerning thresholds.