72 Hours to Full Pipeline: Building the AniKuku AI Animation Engine
Recently, we conducted a 72-hour intensive sprint with one goal: to transform AniKuku from a "demo-ready" prototype into a "production-ready" engine capable of running full episodes. This article reviews how we connected script parsing, asset generation, timeline editing, and automated rendering into a reusable pipeline—and the product decisions that were validated along the way.
Why the 72-Hour Stress Test?
- Proving End-to-End Viability: Brands and studios are asking: "Can we produce a consistent 2-min episode quickly?" We needed to prove that the entire workflow—not just isolated demos—is feasible.
- Validating Our Tech Stack: We needed to confirm that our stack (Next.js 15, React 19, Drizzle/Postgres, OpenAI-compatible LLMs, and the grsai generation pipelines behind Nano Banana / Sora2) could work together seamlessly under pressure.
- Aligning the Team: By focusing on a single creation path, we unified product, design, engineering, and operations around a practical, shared workflow.
What We Accomplished in 72 Hours
- Script-to-Shot Analysis: The LLM now parses raw scripts into a detailed shot list, tagging characters, scenes, and props. These shots are automatically placed onto the timeline as a "to-do" list (the first sketch after this list shows what this step can look like).
- Consistent Asset Management: Character and scene definitions are stored in Cloudflare R2. Every generated prompt, version number, and preview image is traceable, ensuring that switching styles doesn't mean losing existing progress (the second sketch below shows one way to model that versioning).
- Automated Visuals & Voiceovers: Using grsai's Nano Banana model, we can batch-generate storyboards with customizable styles. Voiceovers are generated via Text-to-Speech (TTS) and automatically aligned with the shot duration (the third sketch below illustrates that alignment).
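To make the script-to-shot step concrete, here is a minimal TypeScript sketch against an OpenAI-compatible chat endpoint. The `Shot` fields, prompt wording, and model name are illustrative assumptions, not AniKuku's actual schema or configuration.

```typescript
import OpenAI from "openai";

// Illustrative shot shape; the real AniKuku schema may differ.
interface Shot {
  index: number;
  description: string;
  characters: string[];
  scene: string;
  props: string[];
  durationSec: number;
}

// Works with any OpenAI-compatible endpoint via OPENAI_API_KEY / OPENAI_BASE_URL.
const client = new OpenAI();

export async function parseScriptToShots(script: string): Promise<Shot[]> {
  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini", // placeholder; use whichever model the pipeline targets
    response_format: { type: "json_object" },
    messages: [
      {
        role: "system",
        content:
          'Split the script into shots. Return JSON of the form {"shots": [{"index", "description", "characters", "scene", "props", "durationSec"}]}.',
      },
      { role: "user", content: script },
    ],
  });

  const payload = JSON.parse(completion.choices[0].message.content ?? "{}");
  return (payload.shots ?? []) as Shot[];
}
```

Each returned shot can then be appended to the timeline as a pending item before any visuals exist.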
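Asset traceability can be modeled as a versioned table in Drizzle/Postgres, with the rendered files living in R2 and only their object keys stored in rows. The table and column names below are assumptions for illustration, not the production schema.

```typescript
import { pgTable, serial, text, integer, timestamp } from "drizzle-orm/pg-core";

// Hypothetical table: one row per generated version of a character, scene, or prop.
export const assetVersions = pgTable("asset_versions", {
  id: serial("id").primaryKey(),
  assetId: text("asset_id").notNull(),     // stable character/scene identifier
  kind: text("kind").notNull(),            // "character" | "scene" | "prop"
  version: integer("version").notNull(),   // increments per asset on each regeneration
  prompt: text("prompt").notNull(),        // exact prompt used for generation
  style: text("style").notNull(),          // style preset active at generation time
  r2Key: text("r2_key").notNull(),         // object key of the preview image in Cloudflare R2
  createdAt: timestamp("created_at").defaultNow().notNull(),
});
```

Because every row keeps its prompt and style, switching the project style only adds new versions; older previews remain addressable through their R2 keys.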
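Finally, the voiceover alignment reduces to comparing a TTS clip's length with the planned shot duration and extending the shot when the narration runs longer. This is a hypothetical sketch: `synthesizeVoiceover` stands in for whatever TTS integration the pipeline actually calls and is not a real grsai API.

```typescript
// Hypothetical TTS result; in practice this comes from the TTS provider's response.
interface VoiceoverClip {
  audioUrl: string;
  durationSec: number;
}

interface TimelineShot {
  index: number;
  durationSec: number;
  voiceover?: VoiceoverClip;
}

// Assumed signature; replace with the real TTS call.
declare function synthesizeVoiceover(text: string): Promise<VoiceoverClip>;

// A shot is never shorter than its narration, plus a small visual tail so cuts
// don't land the instant the voice stops.
export async function alignShotToVoiceover(
  shot: TimelineShot,
  narration: string,
  tailSec = 0.5
): Promise<TimelineShot> {
  const clip = await synthesizeVoiceover(narration);
  return {
    ...shot,
    voiceover: clip,
    durationSec: Math.max(shot.durationSec, clip.durationSec + tailSec),
  };
}
```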