extract: 2026-02-01-seedance-2-ai-video-benchmark
parent ca9d08c42c
commit 0288c117fc
5 changed files with 56 additions and 1 deletion
@@ -39,6 +39,12 @@ The 60%→26% collapse in consumer enthusiasm for AI-generated creator content b

The binding constraint is specifically a moral disgust response in emotionally meaningful contexts, not merely general acceptance issues. The Journal of Business Research found that AI authorship triggers moral disgust even when the content is identical to human-written versions. This suggests the gate is values-based rejection, not quality assessment.

### Additional Evidence (confirm)

*Source: [[2026-02-01-seedance-2-ai-video-benchmark]] | Added: 2026-03-16*

The Sora standalone app achieved 12 million downloads but day-30 retention below 8% (vs. the 30%+ benchmark for successful apps), demonstrating that even among early adopters who actively sought out AI video tools, usage hasn't created a compelling habit. This empirically confirms that capability has outpaced demand-side acceptance.

---

Relevant Notes:
@@ -25,6 +25,12 @@ This is more dangerous for incumbents than simple cost competition because they

The 2026 emergence of 'human-made' as a premium market label provides concrete evidence that the definition of quality now explicitly includes provenance and human creation as consumer-valued attributes distinct from production value. WordStream reports that 'the human-made label will be a selling point that content marketers use to signal the quality of their creation.' EY notes consumers want 'human-led storytelling, emotional connection, and credible reporting,' indicating quality now encompasses verifiable human authorship. PrismHaus reports that brands using 'Human-Made' labels see higher conversion rates, demonstrating through revealed preference (higher engagement and purchase) that consumers value this new quality dimension. This extends the original claim by showing that the definition of quality has shifted to include verifiable human provenance as a dimension orthogonal to traditional production metrics (cinematography, sound design, editing, etc.).

### Additional Evidence (extend)

*Source: [[2026-02-01-seedance-2-ai-video-benchmark]] | Added: 2026-03-16*

The 2026 benchmark shows AI video quality (hand anatomy, lip-sync) has crossed the threshold where technical tells are no longer visible, yet consumer adoption remains low (Sora <8% D30 retention). This suggests that once quality becomes indistinguishable, the preference signal shifts to factors other than production value: likely authenticity, provenance, or use-case fit rather than visual fidelity.

---

Relevant Notes:
@@ -29,6 +29,12 @@ A concrete early signal: a 9-person team reportedly produced an animated film fo

McKinsey projects that $10B of US original content spend (approximately 20% of the total) will be addressable by AI by 2030, with single-digit productivity improvements already visible in some use cases. However, AI-generated output is not yet at the quality level needed to drive meaningful disruption in premium production.

### Additional Evidence (confirm)

*Source: [[2026-02-01-seedance-2-ai-video-benchmark]] | Added: 2026-03-16*

Seedance 2.0 benchmark data from 2026 shows near-perfect hand anatomy scores (complex finger movements with zero visible hallucinations), native 2K resolution, and 4-15 second dynamic duration. Hand anatomy was the most visible quality barrier in 2024; crossing this threshold, together with phoneme-level lip-sync across 8+ languages, indicates AI video has reached the technical capability for live-action substitution in many production contexts.

---

Relevant Notes:
@@ -0,0 +1,26 @@

{
  "rejected_claims": [
    {
      "filename": "ai-video-generation-adoption-is-demand-constrained-not-capability-constrained-as-evidenced-by-low-retention-despite-quality-threshold-crossing.md",
      "issues": [
        "missing_attribution_extractor"
      ]
    }
  ],
  "validation_stats": {
    "total": 1,
    "kept": 0,
    "fixed": 3,
    "rejected": 1,
    "fixes_applied": [
      "ai-video-generation-adoption-is-demand-constrained-not-capability-constrained-as-evidenced-by-low-retention-despite-quality-threshold-crossing.md:set_created:2026-03-16",
      "ai-video-generation-adoption-is-demand-constrained-not-capability-constrained-as-evidenced-by-low-retention-despite-quality-threshold-crossing.md:stripped_wiki_link:GenAI adoption in entertainment will be gated by consumer ac",
      "ai-video-generation-adoption-is-demand-constrained-not-capability-constrained-as-evidenced-by-low-retention-despite-quality-threshold-crossing.md:stripped_wiki_link:consumer definition of quality is fluid and revealed through"
    ],
    "rejections": [
      "ai-video-generation-adoption-is-demand-constrained-not-capability-constrained-as-evidenced-by-low-retention-despite-quality-threshold-crossing.md:missing_attribution_extractor"
    ]
  },
  "model": "anthropic/claude-sonnet-4.5",
  "date": "2026-03-16"
}
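The fix and rejection codes in this log imply a simple per-file validation pass. A minimal sketch of how such a validator might work; the function and field names here are assumptions, not the pipeline's actual code, and only the codes themselves (`set_created`, `stripped_wiki_link`, `missing_attribution_extractor`) come from the log above:

```python
import re

# Matches [[target]] and [[target|alias]] wiki links; keeps only the target.
WIKI_LINK = re.compile(r"\[\[([^\]|]+)(?:\|[^\]]+)?\]\]")

def validate_claim(filename, frontmatter, body, today="2026-03-16"):
    """Hypothetical validator sketch. Returns cleaned body, fix log entries,
    and rejection issues, mirroring the shape of the JSON log above."""
    fixes, issues = [], []
    # set_created: backfill a missing created date
    if "created" not in frontmatter:
        frontmatter["created"] = today
        fixes.append(f"{filename}:set_created:{today}")
    # stripped_wiki_link: log the first 60 chars of each link target,
    # matching the truncated entries seen in fixes_applied
    for target in WIKI_LINK.findall(body):
        fixes.append(f"{filename}:stripped_wiki_link:{target[:60]}")
    body = WIKI_LINK.sub(r"\1", body)
    # missing_attribution_extractor: reject claims with no extractor field
    if not frontmatter.get("extraction_model"):
        issues.append("missing_attribution_extractor")
    return body, fixes, issues
```

A claim with no frontmatter and one wiki link would produce one `set_created` fix, one `stripped_wiki_link` fix, and one rejection issue, which is consistent with the counts in `validation_stats`.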
@@ -7,9 +7,13 @@ date: 2026-02-01

domain: entertainment
secondary_domains: []
format: report
status: unprocessed
status: enrichment
priority: medium
tags: [ai-video-generation, seedance, production-costs, quality-threshold, capability]
processed_by: clay
processed_date: 2026-03-16
enrichments_applied: ["non-ATL production costs will converge with the cost of compute as AI replaces labor across the production chain.md", "GenAI adoption in entertainment will be gated by consumer acceptance not technology capability.md", "consumer definition of quality is fluid and revealed through preference not fixed by production value.md"]
extraction_model: "anthropic/claude-sonnet-4.5"
---

## Content
@@ -59,3 +63,10 @@ Aggregated benchmark data on the leading AI video generation models in 2026 (See

PRIMARY CONNECTION: `non-ATL production costs will converge with the cost of compute as AI replaces labor across the production chain`

WHY ARCHIVED: The hand anatomy benchmark crossing signals that the quality threshold for realistic video has been substantially cleared, which shifts the remaining barrier to consumer acceptance (demand-side) and creative direction (human judgment), not raw capability.

EXTRACTION HINT: The Sora retention data (supply without demand) is the most extractable insight. A claim that AI video tool adoption is demand-constrained despite supply capability would be new to the KB.

## Key Facts

- Seedance 2.0 technical specs: 2048x1080 landscape / 1080x2048 portrait native resolution, 4-15 second dynamic duration, 30% faster than 1.5 Pro
- Benchmark methodology: 50+ generations per model, identical 15-category prompt set, 4 seconds at 720p/24fps, rated 0-10 on 6 dimensions by 2 independent reviewers
- Kling 3.0 rated best for ease of use in straightforward video generation
- Seedance 2.0 rated best for precise creative control
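The benchmark methodology above (0-10 scores on 6 dimensions from 2 independent reviewers, 50+ generations per model) reduces to a single model score. A minimal sketch of one plausible aggregation, assuming a simple grand mean; the source does not specify the actual aggregation formula:

```python
from statistics import mean

def model_score(generations):
    """generations: one entry per generation; each entry holds the two
    reviewers' vectors of six dimension scores (0-10 each).
    Returns the mean over reviewers, dimensions, and generations."""
    per_generation = [
        mean(mean(reviewer) for reviewer in gen) for gen in generations
    ]
    return round(mean(per_generation), 2)
```

For example, a model scoring a flat 8 from one reviewer and a flat 6 from the other on one generation, and a flat 9 from both on a second generation, would average to 8.0 overall.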