What 'vibe coding' actually means in practice — my AI toolkit, when it helps, when it hurts, and the meta angle of building this portfolio with Claude.
"Vibe coding" gets thrown around a lot, usually by people who mean "I paste stuff into ChatGPT." That's not what I'm talking about.
Vibe coding, the way I practice it, is a flow state where AI is embedded deeply enough in your workflow that the boundary between thinking and implementing dissolves. You're not context-switching between "figure out the approach" and "write the code." You're doing both simultaneously, with an AI that has your codebase loaded and can execute on partial instructions.
It's pair programming where your partner never gets tired, never judges your half-formed ideas, and types at the speed of inference.
Here's the actual stack I use daily:
Claude Code is my main development environment for anything non-trivial. Terminal-based, full codebase awareness, tool use (file reads, edits, shell commands). When I say "refactor this module to use the repository pattern," it reads the existing code, understands the dependencies, makes the changes across files, and runs the tests.
The key insight: Claude Code isn't autocomplete. It's an agent that can reason about architecture. The difference matters.
At Prachyam, I built 23 Model Context Protocol servers that give AI assistants access to specific capabilities: semantic code search, automated workflows, and direct integration into daily development.
The meta point: building these MCP servers was itself proof of AI engineering skill. The tools I built to accelerate my own work double as portfolio pieces.
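Under the hood, MCP traffic is JSON-RPC 2.0, so a tool call is just a structured message. Here's a minimal sketch of the request and response shapes one of these servers handles; the tool name `search_code` and its result are hypothetical illustrations, not taken from any of my actual servers:

```python
import json

# MCP tool calls travel as JSON-RPC 2.0 messages. This sketch builds a
# "tools/call" request and parses a matching response. The tool name
# "search_code" and its arguments are made up for illustration.

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialize a JSON-RPC request asking the server to run a tool."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

def parse_tool_result(raw: str) -> list[dict]:
    """Extract the content blocks from a tool-call response."""
    msg = json.loads(raw)
    if "error" in msg:
        raise RuntimeError(msg["error"]["message"])
    return msg["result"]["content"]

request = make_tool_call(1, "search_code", {"query": "repository pattern"})

# A server would answer with something shaped like this:
response = json.dumps({
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "src/db/repo.ts:42"}]},
})
print(parse_tool_result(response)[0]["text"])
```

The interesting engineering isn't the message plumbing; it's deciding which capabilities are worth exposing as tools in the first place.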
Editor: NeoVim (with Copilot for inline suggestions)
Agent: Claude Code (for complex multi-file tasks)
Search: Qdrant-backed semantic search via MCP
Terminal: Kitty + Tmux (multiplexed sessions)
Shortcuts: Custom keybinds for common AI operations

Boilerplate and patterns. Setting up a new API route, creating a React component with the right hooks, writing Tailwind classes for a layout I can describe but don't want to type — AI handles this instantly and correctly 95% of the time.
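That Qdrant-backed search is, at its core, nearest-neighbor lookup over embeddings. A toy version with hand-written vectors; in the real setup the vectors come from an embedding model and live in Qdrant, both of which are out of scope for this sketch:

```python
import math

# Toy semantic search: rank code snippets by cosine similarity to a query
# vector. In the real workflow these vectors come from an embedding model
# and are stored in Qdrant; here they are tiny hand-written stand-ins.

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# (snippet, fake embedding) pairs standing in for an indexed codebase
index = [
    ("def connect_db(url): ...",  [0.9, 0.1, 0.0]),
    ("class UserRepository: ...", [0.7, 0.6, 0.1]),
    ("def render_sidebar(): ...", [0.0, 0.2, 0.9]),
]

def search(query_vec: list[float], top_k: int = 2) -> list[str]:
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [snippet for snippet, _ in ranked[:top_k]]

# A query vector that "means" something database-flavored
print(search([0.8, 0.5, 0.0]))
```

The payoff of wiring this into MCP is that the agent can call it mid-task instead of grepping blindly.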
Cross-file refactors. Renaming a concept across 20 files, migrating from one pattern to another, updating imports after a restructure. This is where AI agents shine — they understand the graph of dependencies and can apply changes consistently.
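The mechanical half of a cross-file rename is simple text surgery; what the agent adds is judgment about which occurrences really are the same concept. A deliberately naive, purely textual sketch (file names and identifiers are made up):

```python
import re
import tempfile
from pathlib import Path

# Naive cross-file rename: replace whole-word occurrences of an identifier
# across every file in a tree. An agent doing this for real also reasons
# about scopes, strings, and comments; this version is purely textual.

def rename_identifier(root: Path, old: str, new: str) -> int:
    pattern = re.compile(rf"\b{re.escape(old)}\b")
    changed = 0
    for path in root.rglob("*.py"):  # illustrative: limit to Python files
        text = path.read_text()
        updated, count = pattern.subn(new, text)
        if count:
            path.write_text(updated)
            changed += 1
    return changed

# Demo in a throwaway directory with two files referencing the same name
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "models.py").write_text("class UserStore: ...\n")
    (root / "app.py").write_text("store = UserStore()\nbackup = UserStoreBackup()\n")
    n = rename_identifier(root, "UserStore", "UserRepository")
    app_after = (root / "app.py").read_text()

print(n, app_after)
```

Note the word boundary: `UserStoreBackup` survives untouched, which is exactly the kind of detail that goes wrong when a rename is done with plain find-and-replace.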
Learning new libraries. Instead of reading documentation page by page, I describe what I want to build and let AI write the initial implementation. Then I read that code and learn the library through a working example tailored to my use case.
The "I know what I want but can't articulate the code" moment. Sometimes you have a clear mental model but the translation to syntax is friction. AI bridges that gap. "Make the sidebar collapse on mobile with a slide animation and persist the state in localStorage" — done in seconds.
Subtle architectural mistakes. AI will happily build you a solution that works today and creates tech debt tomorrow. It doesn't know your team's conventions, your scaling requirements, or the tradeoffs you've already considered and rejected. I've had Claude generate perfectly working code that violated an architectural decision I'd made for good reasons.
False confidence. AI-generated code passes linting, often passes tests, and looks professional. This makes it easy to merge without the deep scrutiny you'd give your own code. I've shipped AI-generated code that had edge cases I would have caught if I'd written it by hand — because writing it forces you to think through every branch.
The Copilot reflex. After months of heavy AI use, I noticed something uncomfortable: I was reaching for AI assistance before I'd even attempted the problem myself. Tab-completing solutions I could have written in the same time. The muscle that turns "I need to figure this out" into "let me think for five minutes" was atrophying.
I've since adopted a rule: if I can solve it in under 10 minutes, I solve it myself. AI is for leverage on hard problems, not a crutch for easy ones.
This portfolio was built with heavy AI assistance. Every phase — from the design system to the 3D timeline to the terminal easter egg — involved Claude Code as a co-pilot. Here's what that actually looked like:
What AI did well: Generating Tailwind component structures, writing animation variants, setting up MDX pipelines, creating the initial data schemas, handling responsive breakpoints. Probably 70% of the raw code output came from AI.
What I did: Architecture decisions (microservices vs monolith, which animation library, how to structure the data layer), design direction (the "Digital Kshatriya" aesthetic, cultural motifs, color palette), content (every word of the case studies, blog posts, and bio), and quality control (reviewing every line of AI output, catching subtle bugs, ensuring consistency).
The ratio that matters: AI wrote most of the code, but I made all the decisions. The difference between "AI built this portfolio" and "I built this portfolio with AI" is the decision-making layer. The code is the easy part. Knowing what to build and why — that's the engineering.
Does AI-assisted coding make you a better or worse engineer?
My honest answer: both, simultaneously. Better because you ship faster, explore more ideas, and tackle problems you wouldn't have attempted alone. Worse because the struggle of implementation is where deep understanding lives, and skipping that struggle means shallower knowledge.
The engineers who will thrive are the ones who use AI to amplify their existing skills, not replace the learning process. If you can't build it without AI, you can't debug it when AI gets it wrong. And AI will get it wrong — in production, at 2am, on the one edge case it didn't consider.
My approach: use AI aggressively for known patterns, use it cautiously for novel architecture, and never use it as a substitute for understanding.
If you want to set up a similar workflow, start with the pieces above: an agentic coding tool with full codebase access, inline completions in your editor, and semantic search over your own code. Add custom MCP servers as you notice repetitive tasks worth automating.
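As one concrete starting point: Claude Code can load project-scoped MCP servers from a `.mcp.json` file at the repository root. The server name, command, and port below are placeholders rather than my actual config:

```json
{
  "mcpServers": {
    "code-search": {
      "command": "node",
      "args": ["./tools/search-server.js"],
      "env": { "QDRANT_URL": "http://localhost:6333" }
    }
  }
}
```

Checking a file like this into the repo means every collaborator's agent gets the same tools without per-machine setup.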
The best AI-assisted developers aren't the ones who use AI the most. They're the ones who know exactly when to use it and when to put it down.