# Seedance

**Type:** AI video generation model
**Developer:** ByteDance
**Domain:** Entertainment / AI production tools
**Status:** Active (deployed)

## Overview

Seedance is ByteDance's AI video generation model designed for narrative content production. Version 2.0, released in February 2026, represents a breakthrough in character consistency and temporal coherence for AI-generated video.

## Key Capabilities (v2.0, February 2026)

- **Character consistency across camera angles**: Maintains exact physical traits from any camera angle across shots, addressing the "AI morphing" problem
- **90-second video clips** with native audio synchronization and cross-scene continuity
- **Phoneme-level lip-sync** across 8+ languages
- **4K resolution** output
- Outperforms competitors (including Sora) specifically on character consistency

## Technical Limitations

- Cannot yet replicate the micro-expressions and performance nuance of human actors
- Long-form coherence is limited to 90-second clips; feature-length narrative still requires human direction and stitching
- Fine-grained creative direction beyond prompts remains limited

## Deployment

Seedance 2.0 capabilities were deployed via Wan 2.7 on the Mootion platform (April 15, 2026), making character-consistent AI video production accessible to independent creators.

## Timeline

- **2026-02** — Seedance 2.0 released by ByteDance with character consistency breakthrough
- **2026-04-15** — Wan 2.7 deployed on the Mootion platform, making Seedance 2.0 capabilities accessible to creators