AI writing room for Indian television
Indian TV productions run on scattered tools: Word for scripts, Excel for schedules, WhatsApp for call sheets, and no AI that understands Hindi, Navarasa theory, or the Day-out-of-Days (DOOD) production workflow. Western tools such as Final Draft and StudioBinder each cover their slice, but they speak no Hindi, know nothing of bhakti symbolism, and connect nothing between the creative and logistical layers of a production.
Sutradhaar is a full-stack AI writing room and production management suite built specifically for Indian television. It covers every pre-production phase: a professional TipTap screenplay editor with bilingual Hindi/English support and industry-standard auto-formatting; a RAG pipeline that generates scenes in the voice of legendary writers — Tulsidas, Shakespeare — by retrieving relevant passages from uploaded corpora; Navarasa-aware AI shot list generation; storyboard management; script breakdown with all 14 DOOD element categories; colour-coded stripboard scheduling; call sheet generation with PDF export; a budget tracker with variance highlighting; and a Recharts analytics dashboard with a 9-axis Navarasa radar chart.
The entire system was designed, specced, and shipped — all 25 commits — in a single six-hour session on 2026-03-18. It exists as both a portfolio anchor and a working proof-of-concept for a TV producer partner who needs exactly this system for her upcoming production.
The project required genuine domain knowledge that does not exist in generic tooling. Indian TV scripts mix Hindi and English within a single block. Shot suggestions must account for all 9 Navarasa emotional states — not generic Western cinematography heuristics. Breakdown categories follow the DOOD colour-coding system with 14 defined element types. All of this had to be encoded not as runtime string matching but as first-class schema — Postgres enums, typed columns, validated structures — so every layer of the system speaks the same language without a mapping layer.
The second constraint was time. The project had to be demo-able fast enough to serve as a live portfolio piece during an active job search, and functional enough to present to a real TV producer as a credible tool for her next production, not a mockup. A 675-line implementation plan, written before any code, is what made single-day execution possible.
The stack is a single Next.js 16 app with the App Router. tRPC v11 handles all CRUD with full TypeScript type safety co-located with the schema; streaming AI endpoints live in Next.js API Routes. PostgreSQL 16 with the pgvector extension is managed via Drizzle ORM. MinIO provides object storage for document uploads and storyboard frames; Redis with BullMQ handles background document processing.
The RAG pipeline lives in three focused files. A custom paragraph-boundary chunker greedily merges paragraphs up to 2000 characters with 200-character overlap, sub-splitting oversized blocks at sentence level, which preserves narrative context better than fixed-token chunking for literary Indian texts, where a Ramcharitmanas doha often carries its meaning at the paragraph boundary. An Ollama embedder returns a zero vector on failure rather than crashing; a pgvector retriever uses raw Drizzle SQL with the <=> cosine-distance operator, scoped per persona, with an automatic fallback to chronological fetch when Ollama is offline.
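The greedy paragraph-merge strategy described above can be sketched as a small pure function. This is a minimal illustration, not the project's actual code; function names are hypothetical, and the real pipeline would need more robust sentence handling:

```typescript
// Sketch of a paragraph-boundary chunker: greedily merge paragraphs up to
// maxChars, carry a character-level overlap between chunks, and fall back to
// sentence-level splitting for any single paragraph over the budget.

function splitSentences(text: string): string[] {
  // Naive sentence split; a Hindi-aware version would also treat the
  // Devanagari danda (।) as a terminator, as shown here.
  return text.split(/(?<=[.!?।])\s+/).filter((s) => s.length > 0);
}

function chunkByParagraph(
  text: string,
  maxChars = 2000,
  overlap = 200,
): string[] {
  const paragraphs = text
    .split(/\n\s*\n/)
    .map((p) => p.trim())
    .filter((p) => p.length > 0);

  const chunks: string[] = [];
  let current = "";

  const flush = () => {
    if (current.length > 0) {
      chunks.push(current);
      // Carry the tail of the finished chunk forward as overlap,
      // so retrieval never loses context at a chunk boundary.
      current = current.slice(-overlap);
    }
  };

  for (const para of paragraphs) {
    // Only sub-split when a single paragraph exceeds the budget.
    const pieces = para.length > maxChars ? splitSentences(para) : [para];
    for (const piece of pieces) {
      if (current.length + piece.length + 2 > maxChars) flush();
      current = current.length > 0 ? `${current}\n\n${piece}` : piece;
    }
  }
  if (current.length > 0) chunks.push(current);
  return chunks;
}
```

The key property is that chunk boundaries only ever fall between paragraphs (or between sentences, in the overflow case), never mid-doha.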
The 26-table schema encodes Indian production domain knowledge as Postgres enums: navarasaEnum with all 9 Sanskrit emotional states, intExtEnum, timeOfDayEnum, and sceneElementCategoryEnum with all 14 DOOD breakdown categories. The database speaks the language of Indian film production at the type-system level; every downstream layer gets that domain knowledge for free.
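A sketch of what that schema-level encoding looks like with Drizzle ORM (a fragment, not the project's actual schema; table and column names beyond the enum identifiers named above are assumed):

```typescript
// Illustrative Drizzle schema fragment: the 9 rasas become a Postgres enum,
// so invalid values are rejected by the database itself, not by app code.
import { pgEnum, pgTable, serial, text } from "drizzle-orm/pg-core";

export const navarasaEnum = pgEnum("navarasa", [
  "shringara", // love / beauty
  "hasya",     // laughter
  "karuna",    // compassion / sorrow
  "raudra",    // anger
  "veera",     // heroism
  "bhayanaka", // fear
  "bibhatsa",  // disgust
  "adbhuta",   // wonder
  "shanta",    // peace
]);

export const intExtEnum = pgEnum("int_ext", ["INT", "EXT", "INT/EXT"]);

// Hypothetical scene table showing how downstream layers inherit the enum.
export const scenes = pgTable("scenes", {
  id: serial("id").primaryKey(),
  heading: text("heading").notNull(),
  rasa: navarasaEnum("rasa"),
  intExt: intExtEnum("int_ext"),
});
```

Because Drizzle derives TypeScript types from these declarations, the tRPC routers and the frontend both see the same nine-value union without any mapping layer.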
No LangChain; direct pgvector SQL instead. LangChain adds roughly 100 MB of transitive dependencies and abstracts away exactly the control points that need custom logic here: zero-vector resilience, per-persona scoping, and paragraph-boundary chunking for literary texts. Three focused files are fully transparent and directly debuggable.
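The zero-vector resilience and offline fallback can be expressed as one small function with the storage calls injected. This is a sketch of the pattern only; the function names are hypothetical, and the real retriever issues raw Drizzle SQL using the <=> operator inside bySimilarity:

```typescript
// Resilience pattern: if the embedder throws or returns a zero vector
// (its failure signal), a cosine-distance search would be meaningless,
// so fall back to a chronological fetch instead of crashing.

type Chunk = { id: number; text: string };

async function retrieveForPersona(
  query: string,
  embed: (text: string) => Promise<number[]>,        // e.g. Ollama embedder
  bySimilarity: (vec: number[]) => Promise<Chunk[]>, // pgvector `<=>` query
  byRecency: () => Promise<Chunk[]>,                 // chronological fallback
): Promise<Chunk[]> {
  let vec: number[] = [];
  try {
    vec = await embed(query);
  } catch {
    // Embedder unreachable: treat as a zero vector.
  }
  const usable = vec.length > 0 && vec.some((v) => v !== 0);
  return usable ? bySimilarity(vec) : byRecency();
}
```

Keeping the branch in plain application code, rather than inside a framework abstraction, is what makes this failure mode directly debuggable.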
Custom paragraph-boundary chunker over off-the-shelf splitting. Fixed-token chunking breaks mid-doha. The custom chunker merges at paragraph boundaries, carries overlap, and sub-splits at sentence level only when a single paragraph exceeds the character budget, preserving context for the retrieval step.
generateObject with Zod schema for shot-list AI. The shot-list route uses the Vercel AI SDK's structured-output mode, so the response is structurally guaranteed: no parsing, no hallucinated shot shapes. Navarasa heuristics are embedded in the system prompt as natural-language instructions: veera → low angle wide + tracking, karuna → close-up + slow dolly, shringara → soft focus.
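A sketch of that structured-output call (the schema fields, enum values, and the model and sceneText variables are all assumed for illustration; only generateObject, the Zod schema, and the prompt-embedded heuristics come from the description above):

```typescript
// Illustrative shot-list route core: generateObject validates the model's
// output against a Zod schema, so a malformed shot never reaches the DB.
import { generateObject } from "ai";
import { z } from "zod";

const shotSchema = z.object({
  shots: z.array(
    z.object({
      shotType: z.enum(["WIDE", "MEDIUM", "CLOSE_UP", "EXTREME_CLOSE_UP"]),
      angle: z.enum(["EYE_LEVEL", "LOW", "HIGH", "DUTCH"]),
      movement: z.enum(["STATIC", "PAN", "TILT", "DOLLY", "TRACKING"]),
      description: z.string(),
    }),
  ),
});

const { object } = await generateObject({
  model, // assumed: an Ollama-backed model instance from a provider package
  schema: shotSchema,
  system:
    "You are a shot-list assistant for Indian television. " +
    "For veera scenes prefer low-angle wides with tracking moves; " +
    "for karuna, close-ups with a slow dolly; for shringara, soft focus.",
  prompt: sceneText, // assumed: the scene text being broken down
});
```

The Navarasa heuristics stay in natural language in the system prompt, while the shape of every shot is enforced by the schema rather than trusted to the model.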
Domain encoding in the schema, not in runtime logic. When navarasaEnum exists at the DB level, the frontend renders a 9-axis Recharts radar chart without any mapping layer. When sceneElementCategoryEnum has 14 DOOD values, the breakdown UI is correct by construction.
Spec-first single-session execution. A 675-line docs/IMPLEMENTATION_PLAN.md with exact component names, file paths, DB table names, and feature bullet points per phase was written before any code. Every architectural decision was resolved upfront; the coding session was pure execution.
At a glance: 25 commits in one day · 26-table PostgreSQL schema · all 9 Navarasa rasas encoded.
Sutradhaar is a demo-able full-stack application covering every pre-production phase from screenplay through call sheet, with AI assistance grounded in legendary Indian writer corpora and bilingual Hindi/English support throughout the UI. The 26-table schema, custom RAG pipeline, and all 10 tRPC routers shipped in a single day. The "Why This?" explanation engine — which cites specific dohas and corpus passages to justify any creative choice the AI makes, with source attribution — is a novel pattern for AI-assisted creative tooling that does not exist in any Western screenplay tool. The system self-hosts on any VPS via the standalone Docker Compose file, replacing a full StudioBinder subscription at zero marginal AI cost using local Ollama inference on an M-series machine.