116 containers, one laptop, zero per-project setup
Six active projects, each maintaining its own Postgres, Redis, and MinIO containers in isolation, meant duplicate memory usage, credential sprawl, and a persistent HTTP-in-dev vs. HTTPS-in-prod gap that made OAuth redirects, secure cookies, and service workers impossible to test locally. The fix was a shared infrastructure mesh: a single docker-compose.yml (3,425 lines, 116 containers) with 19 named profiles, a Caddy wildcard TLS reverse proxy covering 97 named service routes, and dnsmasq resolving all *.dev.dharmic.cloud subdomains to localhost.
Every project gets HTTPS-secured local domains on day one. The full service catalogue covers databases, search and vectors, messaging, real-time streaming, workflow orchestration, observability (full LGTM stack), AI/LLM inference, auth (Authentik SSO), analytics, payments, billing, secrets management, CI/CD, and dev tooling — all accessible at production-quality HTTPS URLs rather than localhost:PORT. The entire repo was built in a single ~105-minute evening session: 8 commits between 21:09 and 22:51 on 2026-03-28.
The core problem was not hardware — a MacBook M1 Max with 64 GB has ample RAM for many concurrent containers. The constraint was cognitive: managing 6 separate docker-compose.yml files with mismatched credentials and port conflicts was becoming unmaintainable, and the HTTP/HTTPS dev-prod gap was causing bugs that could only reproduce in staging. The solution needed to work with all runtimes in use (Bun, Rust, React Native, Tauri) without IDE coupling.
The DNS-to-TLS-to-container flow has three layers. dnsmasq (Homebrew, symlinked to ~/infra/dnsmasq/dnsmasq.conf) resolves *.dev.dharmic.cloud to 127.0.0.1, *.co.dharmic.cloud to the OrbStack Coolify VM, and *.do.dharmic.cloud to the OrbStack Dokploy VM via three wildcard address rules — no per-service DNS entries. Caddy (Homebrew, symlinked to ~/infra/caddy/Caddyfile) obtains a single *.dev.dharmic.cloud wildcard certificate via Cloudflare DNS-01 ACME challenge and reverse-proxies all 97 named hosts to Docker container ports. Docker Compose provides the 116 container definitions grouped into 19 profiles (core, search, messaging, workflows, media, ai, monitoring, streaming, analytics, automation, docs, customer, email, auth, notifications, platform, devtools, payments, internal-tools). A ports.yml registry (254 lines) tracks every assigned port. A MIGRATE.md runbook, written as an AI-executable instruction set, automates onboarding any project from isolated Docker to the shared mesh.
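The dnsmasq layer amounts to three wildcard address rules, roughly like the sketch below; the staging VM IPs are placeholders for whatever OrbStack assigns the Coolify and Dokploy VMs.

```
# ~/infra/dnsmasq/dnsmasq.conf (sketch) — one wildcard rule per zone, no per-service entries
address=/dev.dharmic.cloud/127.0.0.1        # local dev: every subdomain resolves to the laptop
address=/co.dharmic.cloud/198.51.100.10     # Coolify VM (placeholder IP)
address=/do.dharmic.cloud/198.51.100.11     # Dokploy VM (placeholder IP)
```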
DNS-01 challenge for wildcard TLS. HTTP-01 requires public internet reachability, which is impossible for localhost. DNS-01 via Cloudflare API proves domain ownership through a TXT record — one cert, all 97+ subdomains, no browser warnings, no --allow-insecure-localhost anywhere.
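A minimal Caddyfile sketch of that arrangement, assuming Caddy is built with the caddy-dns/cloudflare module and a scoped API token in the environment; the hostnames and container ports shown are illustrative, not taken from the real config.

```
*.dev.dharmic.cloud {
	tls {
		dns cloudflare {env.CLOUDFLARE_API_TOKEN}   # DNS-01: Caddy creates the TXT record via the Cloudflare API
	}

	@grafana host grafana.dev.dharmic.cloud
	handle @grafana {
		reverse_proxy 127.0.0.1:3000
	}

	@minio host minio.dev.dharmic.cloud
	handle @minio {
		reverse_proxy 127.0.0.1:9001
	}

	# ...one matcher + handle pair per named service route
}
```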
Docker Compose profiles as the service selector. Rather than separate compose files per project (which duplicate shared services), profiles turn service inclusion into a CLI flag. docker compose --profile prachyam-sangam up -d starts exactly the subset that project needs, keeping RAM proportional to active work.
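A hypothetical excerpt of how profiles might be attached; whether shared services carry only a category profile or are also tagged per project is an assumption here, only the profiles mechanism itself is Compose-native.

```yaml
services:
  postgres:
    image: postgres:16
    profiles: [core, prachyam-sangam]   # shared service, also tagged by the projects that need it
    ports: ["5432:5432"]

  meilisearch:
    image: getmeili/meilisearch:v1.10
    profiles: [search]                  # category profile, started only when explicitly requested
```

Services with no profiles key start on every docker compose up; everything else waits for an explicit --profile flag.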
ports.yml as machine-readable port registry. The file tracks every reserved infra port and per-project frontend/API/worker ports in one place. Claude Code reads it during migration to find the next free port and register the new project — fully automated, zero manual port hunting.
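The registry's exact schema isn't shown in the post; a shape along these lines (field names and port values are assumptions) is enough for an agent to parse and extend.

```yaml
# ports.yml (hypothetical shape) — single source of truth for every assigned port
infra:
  postgres: 5432
  dragonfly: 6379
  minio-api: 9000
  minio-console: 9001
projects:
  prachyam-sangam:
    frontend: 3100
    api: 3101
    worker: 3102
```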
MIGRATE.md as AI-executable runbook. The file begins with a direct instruction for Claude Code to read and follow. The full migration procedure — identify shared services, reconcile credentials, restructure compose, update .env, register profiles — is written for an AI agent, not just a human reader. New project onboarding takes one ~20-minute session.
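A sketch of what that runbook could look like; the step wording below paraphrases the migration procedure described above rather than quoting the real file.

```markdown
> Claude: read this file top to bottom and execute each step for the project you were asked to migrate.

1. Inventory the project's existing docker-compose.yml and mark services already present in the shared mesh.
2. Reconcile credentials: point the project at the shared Postgres/Dragonfly/MinIO, with its own database and bucket.
3. Move any project-specific services into the shared compose file under a new profile.
4. Claim the next free ports in ports.yml and register the project there.
5. Rewrite the project's .env to use *.dev.dharmic.cloud URLs, then delete its local compose file.
```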
All 6 active projects share one Postgres instance, one Dragonfly cache, one MinIO with per-project buckets, and the full observability and tooling stack. HTTPS in dev is now structurally identical to HTTPS in prod — OAuth redirects, secure cookies, and service workers all work without a staging deploy. The four-tier topology (local → Coolify staging → Dokploy staging → production) uses the same docker-compose.yml across all tiers, with environment differences expressed only through .env values. The repo replaced roughly 12 categories of paid SaaS development tooling with self-hosted equivalents running locally at zero ongoing cost.
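One way to picture the "same compose, different .env" point, with hypothetical variable names and a stand-in production domain:

```env
# local .env
S3_ENDPOINT=https://minio.dev.dharmic.cloud
OAUTH_REDIRECT_URL=https://app.dev.dharmic.cloud/auth/callback

# production .env — identical keys, different values; the compose file never changes
S3_ENDPOINT=https://s3.example.com
OAUTH_REDIRECT_URL=https://app.example.com/auth/callback
```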