Offline generative-UI tool with Ollama streaming, sandboxed iframe preview, and a QLoRA fine-tuning pipeline
TinkerUI lets you describe a UI component in natural language and see it rendered in a live sandboxed iframe as tokens stream from a local Ollama model — no cloud API, no rate limits. A streaming-aware HTML extractor handles four distinct partial-output shapes so the preview updates token-by-token without broken renders. The companion Python pipeline fine-tunes Qwen 2.5 Coder 1.5B with QLoRA on a 340-example fuzzy-deduplicated dataset and exports to GGUF for Ollama.
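The streaming-aware extractor can be sketched as a single function that copes with whatever shape the partial model output is currently in. The four shapes below (closed code fence, still-open fence, bare HTML, prose preamble before the first tag) are illustrative assumptions, not the project's confirmed list, and the project implements this logic in its TypeScript frontend; this is a minimal Python sketch of the idea:

```python
import re

def extract_html(partial: str) -> str:
    """Pull renderable HTML out of a partially streamed model response.

    The four shapes handled here are assumptions for illustration:
    the real extractor's cases may differ.
    """
    # Shape 1: a closed ```html fence -- return its contents.
    closed = re.search(r"```(?:html)?\s*\n(.*?)```", partial, re.S)
    if closed:
        return closed.group(1)
    # Shape 2: an opened-but-unclosed fence -- return everything after it,
    # so the preview can render while the fence is still streaming in.
    opened = re.search(r"```(?:html)?\s*\n(.*)\Z", partial, re.S)
    if opened:
        return opened.group(1)
    # Shape 3: bare HTML with no fence -- pass it through as-is.
    if partial.lstrip().startswith("<"):
        return partial
    # Shape 4: prose preamble before the markup -- drop text before the first tag.
    idx = partial.find("<")
    return partial[idx:] if idx >= 0 else ""
```

Because every branch returns something renderable (or an empty string), the preview iframe can be re-fed on every token without ever receiving a half-fenced blob.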
The frontend is built with Vite 7, React 19, and Tailwind CSS v4, with token-by-token generation delivered over Ollama's streaming NDJSON API. Untrusted model-generated HTML is isolated from the app DOM in a srcdoc iframe that loads Tailwind from a CDN, and Tauri 2 wraps the app as a native desktop binary. The Python training stack uses Unsloth, PEFT, and TRL, with float32 training for MPS compatibility on Apple Silicon.
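Ollama's streaming endpoint returns newline-delimited JSON, one object per token, so the consumer has to buffer partial lines across network chunks. A minimal sketch, assuming a local Ollama server and the `requests` package (the model tag and helper names here are illustrative):

```python
import json

def split_ndjson(buffer: str) -> tuple[list[str], str]:
    """Split a streaming buffer into the 'response' tokens from every
    complete NDJSON line, plus the trailing incomplete fragment."""
    *lines, rest = buffer.split("\n")  # last piece may be a partial line
    tokens = []
    for line in lines:
        if not line.strip():
            continue
        obj = json.loads(line)
        if obj.get("response"):
            tokens.append(obj["response"])
    return tokens, rest

def generate(prompt: str, on_token) -> None:
    """Stream tokens from a local Ollama server, invoking on_token per token."""
    import requests  # third-party; assumed available

    buffer = ""
    with requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "qwen2.5-coder:1.5b", "prompt": prompt, "stream": True},
        stream=True,
    ) as res:
        for chunk in res.iter_content(chunk_size=None, decode_unicode=True):
            buffer += chunk
            tokens, buffer = split_ndjson(buffer)
            for t in tokens:
                on_token(t)
```

Keeping the line-splitting separate from the transport makes the partial-line handling unit-testable without a running model.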