India-first OTT platform, 9 native targets
Prachyam Sangam is a ground-up OTT streaming platform built for the Indian market — SVOD and AVOD content, live TV with EPG, creator channels, watch parties, and a full admin panel, all from a single TypeScript monorepo. The platform targets nine distinct surfaces: web (Next.js), iOS and Android (Expo 54), Apple TV and Android TV (Expo TV), Samsung Tizen, LG webOS, a Tauri admin desktop, and a Fumadocs developer-docs app — sharing 21 packages of business logic across every target.
The project replaced a legacy OTT platform carrying 18 developer-years of accumulated debt, compromised SSH keys, and a CMS that blocked new features. Three ThemeForest OTT products were evaluated, and all under-delivered on TV support and admin completeness, so the decision was made to build from scratch. What started as an employer initiative transitioned into an independent engagement with a TV-producer partner, reaching MVP across 104 numbered sprints in approximately five months of solo development.
The result is a Coolify-compatible, self-hostable stack — PostgreSQL, MinIO, Dragonfly, Typesense, Qdrant, Soketi, BullMQ, Temporal, Prometheus, and Varnish — deployable to a single VPS without a DevOps hire. Zero TypeScript errors have been enforced from sprint 75 onward, and 17 of 20 viewer flows and 8 of 14 admin flows are demo-clean.
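To make the self-hosting claim concrete, here is a minimal sketch of what a single-VPS compose file for part of this stack could look like. The service names, image tags, and settings below are illustrative assumptions, not the project's actual docker-compose.standalone.yml.

```yaml
# Illustrative fragment: three of the stack's services on one host.
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # placeholder secret
  dragonfly:
    # Redis-protocol-compatible cache, dropped in where Redis would sit
    image: docker.dragonflydb.io/dragonflydb/dragonfly
  minio:
    image: minio/minio
    command: server /data --console-address ":9001"
```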
The core constraint was hardware: a 16 GB / 512 GB M1 iMac as the sole development machine, insufficient to run seven Docker-backed apps concurrently alongside the monorepo build. Rather than cut scope or rent a beefier machine, three machines were meshed over a Tailscale network, with Docker services offloaded to two peer nodes while the iMac handled active development. Every service runs at a stable *.willmakeitsoon.com HTTPS subdomain via dnsmasq and Caddy, mimicking the target production VPS without production costs.
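The dnsmasq-plus-Caddy arrangement comes down to two small config fragments. The Tailscale addresses and port mappings below are hypothetical examples; only the *.willmakeitsoon.com domain comes from the setup described above.

```
# dnsmasq.conf: resolve every *.willmakeitsoon.com subdomain to the
# Tailscale address of the machine running Caddy (address is illustrative)
address=/willmakeitsoon.com/100.64.0.1

# Caddyfile: one HTTPS reverse-proxy block per service; upstream
# host:port pairs are hypothetical
minio.willmakeitsoon.com {
    reverse_proxy 100.64.0.2:9000
}
grafana.willmakeitsoon.com {
    reverse_proxy 100.64.0.3:3000
}
```

Caddy provisions certificates per hostname, so each service gets a stable HTTPS URL regardless of which mesh node actually runs it.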
TV platform fragmentation added significant complexity on top of this. Tizen 4.x, LG webOS 4.x, Apple TV, and Android TV each have distinct input models, video API surfaces, and debugging toolchains. Expo TV abstracts the React Native TV layer cleanly for Apple TV and Android TV, but the Smart TV web apps needed fully separate vanilla-JS builds: one using the webOSTVjs 1.2.10 SDK, the other the Tizen CLI toolchain, each with its own D-pad focus engine and HLS player integration.
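The heart of a D-pad focus engine is spatial navigation: given the on-screen rectangles of focusable elements, pick the nearest one in the pressed direction. The sketch below shows one minimal way to do that; it is an illustrative assumption, not the engine shipped in the Tizen or webOS builds.

```typescript
// Minimal spatial D-pad focus resolver. Given the bounding boxes of
// focusable elements, return the id of the nearest element in the
// pressed direction, or null if nothing lies that way.
type Rect = { id: string; x: number; y: number; w: number; h: number };
type Dir = "left" | "right" | "up" | "down";

function center(r: Rect) {
  return { cx: r.x + r.w / 2, cy: r.y + r.h / 2 };
}

function nextFocus(current: Rect, candidates: Rect[], dir: Dir): string | null {
  const { cx, cy } = center(current);
  let best: string | null = null;
  let bestDist = Infinity;
  for (const c of candidates) {
    if (c.id === current.id) continue;
    const { cx: tx, cy: ty } = center(c);
    const dx = tx - cx;
    const dy = ty - cy;
    // Keep only candidates that actually lie in the pressed direction.
    const inDir =
      dir === "left" ? dx < 0 :
      dir === "right" ? dx > 0 :
      dir === "up" ? dy < 0 : dy > 0;
    if (!inDir) continue;
    // Euclidean distance plus an off-axis penalty, so elements in the
    // same row/column win over diagonal neighbours.
    const offAxis = dir === "left" || dir === "right" ? Math.abs(dy) : Math.abs(dx);
    const dist = Math.hypot(dx, dy) + offAxis;
    if (dist < bestDist) {
      bestDist = dist;
      best = c.id;
    }
  }
  return best;
}
```

A real engine layers focus memory, scroll containers, and focus traps on top, but every variant reduces to a directional nearest-neighbour query like this one.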
The monorepo is managed by Nx 22.6.1 with Bun workspaces and contains seven apps: web (Next.js 16, main viewer plus admin route group), api (Elysia on Bun — REST API with auto-generated OpenAPI via @elysiajs/swagger), mobile (Expo 54 + React Navigation), tv (Expo TV, Tizen, webOS), desktop and admin (Tauri 2), and docs (Fumadocs). All apps draw from 21 shared packages covering auth (Better Auth), database (Drizzle + PostgreSQL), cache (Dragonfly), search (Typesense + Qdrant), storage (MinIO), payments (Razorpay), messaging (NATS + Soketi), jobs (BullMQ + Temporal), streaming (HLS transcode workers), AI, i18n, and UI component sets for both web and native surfaces.
The infrastructure layer runs entirely on Docker Compose, profiled per project. Varnish sits in front of MinIO and stream routes as an HTTP cache layer. Prometheus, Grafana, and Alertmanager provide observability. Flipt serves feature flags, togglable from both the admin panel and a dedicated MCP server. Semantic search combines Typesense for instant keyword facets with Qdrant vector embeddings generated locally via @xenova/transformers. Ten custom MCP servers bridge Claude Code to live Drizzle Studio, Prometheus metrics, Flipt, transcode job state, and a pattern-learning context store — giving every development session full project context in under 60 seconds.
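Combining Typesense keyword hits with Qdrant vector hits requires a fusion step that merges two differently-scored ranked lists. One common choice is reciprocal rank fusion; the sketch below assumes that approach, since the source does not state which fusion method the platform actually uses.

```typescript
// Reciprocal-rank fusion of a keyword result list (e.g. from Typesense)
// and a vector result list (e.g. from Qdrant). Each id earns
// 1 / (k + rank) from every list it appears in; ids ranked well in
// both lists float to the top. The k constant is the conventional 60.
function fuseResults(keyword: string[], vector: string[], k = 60): string[] {
  const scores = new Map<string, number>();
  const accumulate = (ids: string[]) =>
    ids.forEach((id, rank) => {
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + rank + 1));
    });
  accumulate(keyword);
  accumulate(vector);
  return [...scores.entries()]
    .sort((x, y) => y[1] - x[1]) // highest fused score first
    .map(([id]) => id);
}
```

Rank fusion sidesteps the need to normalise BM25-style keyword scores against cosine similarities, which live on incompatible scales.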
Nx over Turborepo. Nx's project-graph caching means only packages touched by a given change rebuild — critical when 21 packages would otherwise cascade. Turborepo was tested first, but Nx's executor model mapped more cleanly to the Tauri build targets and cross-platform run-many scenarios.
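The caching behaviour described above comes down to a few lines of Nx configuration. A minimal nx.json fragment, with illustrative values rather than the project's actual file:

```json
{
  "targetDefaults": {
    "build": {
      "dependsOn": ["^build"],
      "cache": true
    }
  }
}
```

`"dependsOn": ["^build"]` builds a project's dependencies first, and `"cache": true` lets Nx replay a cached output whenever a project's inputs are unchanged, which is what stops a one-package edit from cascading through all 21.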
Elysia over Express/Fastify. Bun's runtime throughput was the draw, and Elysia's validator plugin generates OpenAPI schemas from route definitions automatically. This feeds the Fumadocs swagger-docs app and reduces documentation drift to zero without a separate schema-maintenance step.
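The drift-free-docs idea is that the OpenAPI document is a pure function of the route table, so it can never disagree with the code. The self-contained sketch below illustrates that principle in plain TypeScript; it is not Elysia's or @elysiajs/swagger's actual implementation.

```typescript
// Derive an OpenAPI "paths" object directly from route definitions.
// Because the schema is computed from the same data the router uses,
// documentation cannot drift from behaviour.
type Route = { method: "get" | "post"; path: string; summary: string };

function toOpenApiPaths(routes: Route[]) {
  const paths: Record<string, Record<string, { summary: string }>> = {};
  for (const r of routes) {
    paths[r.path] ??= {};
    paths[r.path][r.method] = { summary: r.summary };
  }
  return paths;
}
```

A real generator also emits parameter and response schemas from the route validators; the single-source-of-truth shape is the same.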
Dragonfly over Redis. Redis-protocol compatible, so zero application code changed. Dragonfly's multi-threaded engine uses approximately 30% less memory at equivalent cache hit rates — material on a budget VPS with limited RAM.
10 custom MCP servers instead of context-switching. DB state, transcode jobs, Prometheus metrics, and feature flags all surface in the Claude Code conversation window rather than requiring browser or terminal switches. The ott-context-mcp server persists learned fix patterns per package across sessions, maintaining institutional knowledge across 104 sprints without a separate knowledge-base tool.
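A per-package pattern store of the kind ott-context-mcp maintains can be reduced to a keyed, deduplicating collection that round-trips through JSON between sessions. The class below is a hypothetical sketch of that idea, not the server's actual code.

```typescript
// Hypothetical per-package fix-pattern store: learned fixes are keyed
// by package name, deduplicated, and serialisable so a session can
// persist them to disk and the next session can restore them.
class PatternStore {
  private patterns = new Map<string, string[]>();

  learn(pkg: string, fix: string): void {
    const list = this.patterns.get(pkg) ?? [];
    if (!list.includes(fix)) list.push(fix);
    this.patterns.set(pkg, list);
  }

  recall(pkg: string): string[] {
    return this.patterns.get(pkg) ?? [];
  }

  // Serialise for writing to disk at session end…
  dump(): string {
    return JSON.stringify([...this.patterns.entries()]);
  }

  // …and restore at the start of the next session.
  static load(json: string): PatternStore {
    const store = new PatternStore();
    for (const [pkg, fixes] of JSON.parse(json) as [string, string[]][]) {
      for (const f of fixes) store.learn(pkg, f);
    }
    return store;
  }
}
```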
Sprint-numbered commits with priority tags. Every non-trivial change is tagged with a sprint number and a p0–p3 priority level in the commit subject. The DEMO-PLAN.md route-readiness audit was written in under an hour because every route's last relevant sprint was instantly traceable in git log --oneline.
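Machine-readable commit subjects are what make an audit like DEMO-PLAN.md a grep instead of an archaeology dig. The parser below assumes a subject shape like "sprint-87 p1: fix EPG timezone drift"; the exact format is a hypothetical illustration, since the source only states that a sprint number and a p0–p3 priority appear in the subject.

```typescript
// Parse a sprint/priority commit subject of the assumed form
// "sprint-<n> p<0-3>: <title>" into structured fields.
const SUBJECT = /^sprint-(\d+)\s+(p[0-3]):\s+(.+)$/;

function parseSubject(subject: string) {
  const m = SUBJECT.exec(subject);
  return m ? { sprint: Number(m[1]), priority: m[2], title: m[3] } : null;
}
```

Piping `git log --oneline` through a parser like this yields a per-route, per-sprint readiness table with no manual bookkeeping.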
9 native platform targets
21 shared TypeScript packages
152 sprint-tagged commits
~972,000 lines of code
Prachyam Sangam reached MVP across nine distinct platforms in approximately five months of solo development — 152 commits, 104 sprints, ~972,000 lines of TypeScript, TSX, and JS, with 21 shared packages and 10 custom MCP servers. The Tailscale mesh eliminated any hardware upgrade requirement during development. Dragonfly replaced Redis with a ~30% memory reduction at no code cost. The standalone docker-compose.standalone.yml makes the full stack deployable to a single VPS without a DevOps hire, and the monorepo's architecture positions it as a sellable ThemeForest starter kit once the partnership engagement wraps.