Vibe coding does not mean throwing a prompt at an AI and hoping the output is shippable. The term — coined by Andrej Karpathy in February 2025 — originally described "fully giving in to the vibes" and forgetting the code exists. That works for throwaway prototypes. It does not work for production software. The version of vibe coding that actually ships products looks nothing like the viral demos: it is disciplined engineering where the human makes every decision and the AI executes at speed.

We build software this way every day. And the gap between "prompt and pray" and what we actually do is enormous — not in the tools used, but in the judgment applied before a single line of code gets written.

Why Most Vibe Coding Sessions Produce Garbage

The default vibe coding workflow looks like this: describe what you want in one big prompt, let the AI generate a wall of code, run it, hit errors, paste the errors back, repeat. It is a slot machine. Sometimes you get lucky. Usually you get a tangled mess that sort of works in the demo but falls apart under any real constraint.

The problem is not the AI. The problem is the absence of decisions. When you skip the design phase and jump straight to generation, you are outsourcing judgment to a model that has none. The AI does not know your brand guidelines. It does not know which codec settings will keep your page load under two seconds. It does not know that your favicon needs to be legible on both Chrome's light tab bar and Safari's dark one. It just generates the most statistically average answer to your prompt.

That is why the output looks generic — because it is.

What Disciplined AI-Assisted Development Actually Looks Like

We recently built the complete brand asset pipeline for the marketing site of a SaaS product — a tool that generates cinematic short-form video content from music. Before the AI wrote a single line of code or generated a single asset, we spent time making deliberate decisions about every detail.

The session started with reading and internalizing a brand brief: color palette, typography system, personality descriptors, visual principles. Not skimming — actually mapping those constraints into the decisions that would follow. The AI had context because we gave it context. Every generation prompt that followed was scoped by those constraints.

This is the part that "prompt and pray" practitioners skip entirely. They describe the end state. We describe the constraints that make the end state correct.

Why Foreground Color Choices Reveal Engineering Depth

Here is a decision that no AI would make on its own: the SVG logo needed different foreground treatments depending on where it would render. On the marketing site — a dark background — the logo needed to be white or light. But the same SVG gets used as a browser favicon, where it sits on a tab bar that could be white, gray, or dark depending on the OS theme and browser.

We chose #C87941 — a warm copper accent from the brand palette — specifically for the favicon. Why? Because it has enough contrast to remain legible on both light and dark browser chrome, without requiring separate light-mode and dark-mode favicon files. One asset, universal legibility. That is a design engineering decision, not a prompt.
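That "enough contrast on both" claim is checkable, not just eyeballable. A minimal sketch using the WCAG relative-luminance contrast formula — the tab-bar hex values stand in for light and dark browser chrome and are assumptions, not measured values:

```python
def srgb_luminance(hex_color: str) -> float:
    """WCAG 2.x relative luminance of an sRGB hex color."""
    def channel(v: int) -> float:
        c = v / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)

def contrast_ratio(a: str, b: str) -> float:
    """WCAG contrast ratio between two colors, from 1:1 up to 21:1."""
    la, lb = sorted((srgb_luminance(a), srgb_luminance(b)), reverse=True)
    return (la + 0.05) / (lb + 0.05)

COPPER = "#C87941"
LIGHT_TAB = "#FFFFFF"  # assumed stand-in for light browser chrome
DARK_TAB = "#202124"   # assumed stand-in for dark browser chrome

for bg in (LIGHT_TAB, DARK_TAB):
    print(f"{COPPER} on {bg}: {contrast_ratio(COPPER, bg):.2f}:1")
```

Under these assumptions the copper clears the ~3:1 threshold commonly applied to graphical elements against both backgrounds — which is exactly the property a single universal favicon needs.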

The AI generated the favicon files. But the decision about which color to use, and why, came from a human who understood the deployment context.

How We Compressed 125MB of Video to 6.8MB Without Losing Audio

The marketing site needed showcase videos — short cinematic clips demonstrating what the product generates. The raw source files totaled 125MB. For a landing page, that is a non-starter. We needed those videos to load fast on mobile without sacrificing the cinematic quality that is the entire selling point.

We chose H.264 with a CRF of 28, capped the bitrate, stripped unnecessary metadata, and — critically — preserved the audio tracks because the product is about music-driven video. The result: 6.8MB total across six videos. A 94.5% reduction in file size with no perceptible quality loss on web-sized players.
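Those constraints translate directly into an ffmpeg invocation. A sketch of the command builder — the CRF of 28 and the preserved audio come from the decisions above, while the specific bitrate cap, audio bitrate, and faststart flag are illustrative assumptions you would tune per clip:

```python
from pathlib import Path

def compress_cmd(src: Path, dst: Path,
                 crf: int = 28,
                 maxrate: str = "2M",       # assumed cap; tune per clip
                 audio_bitrate: str = "128k") -> list[str]:
    """Build an ffmpeg argv for the constraints described above:
    H.264 at a fixed CRF, capped bitrate, metadata stripped, audio kept."""
    return [
        "ffmpeg", "-y", "-i", str(src),
        "-c:v", "libx264", "-crf", str(crf),
        "-maxrate", maxrate, "-bufsize", "4M",  # cap bitrate peaks for mobile
        "-pix_fmt", "yuv420p",                  # broadest browser support
        "-map_metadata", "-1",                  # strip container metadata
        "-c:a", "aac", "-b:a", audio_bitrate,   # keep the audio track
        "-movflags", "+faststart",              # moov atom first: streams immediately
        str(dst),
    ]

# Example (requires ffmpeg on PATH):
# subprocess.run(compress_cmd(Path("raw/clip1.mov"), Path("web/clip1.mp4")), check=True)
```

Note what is conspicuously absent: `-an`, the flag that drops audio. Encoding it as an explicit command builder rather than an ad-hoc prompt is what keeps that constraint from silently disappearing.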

A naive vibe coding prompt like "compress these videos for web" would have produced wildly different results. Maybe it strips the audio (a common default). Maybe it re-encodes at a bitrate that blows up on mobile. Maybe it picks a codec with poor browser support. Every one of those failures traces back to the same root cause: no human specified the constraints that matter.

Why We Generated Three OG Images and Visually Reviewed Each One

Open Graph images are the first impression your site makes on social media, in Slack previews, and in search results. We generated three variants with different compositions and visual emphasis, then reviewed each one at actual render size before selecting the winner.

This is the "taste" part of vibe coding that rarely gets discussed. The AI can generate unlimited options. The human has to know which one is right — and "right" means it works at 1200x630 pixels, communicates the brand personality in a thumbnail, and does not get cropped awkwardly by LinkedIn's preview card. Those are not things you can specify in a prompt. You have to look.

What a Complete Favicon Set Actually Requires

Most developers know they need a favicon.ico. Fewer realize that a production-grade favicon implementation includes separate assets for Apple touch icons (180x180, with platform-specific background padding), Android PWA icons at two resolutions (192x192 and 512x512), a high-resolution PNG fallback (512x512), and the classic .ico file. Each platform has different expectations for background color, padding, and transparency.

We generated SVGs traced from bitmap artwork, with proper tight viewBoxes — not the default potrace output that leaves excessive whitespace around the artwork. Then we produced every platform-specific variant from those clean source files, with background colors chosen per platform: transparent for web SVGs, the brand's dark background for Apple touch icons where iOS renders them on a home screen, the accent copper for the .ico that lives in browser tabs.
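The full asset matrix is small enough to capture as data, so no platform variant gets forgotten. A sketch under stated assumptions — the filenames follow common conventions and the brand-dark hex value is hypothetical; the sizes and per-platform background rules are the ones described above:

```python
ACCENT_COPPER = "#C87941"
BRAND_DARK = "#1A1512"  # hypothetical brand dark background

FAVICON_SPEC = [
    # (filename, size, background) — background None means transparent
    ("favicon.svg",                None,       None),           # tight-viewBox SVG
    ("favicon.ico",                (32, 32),   ACCENT_COPPER),  # browser tab
    ("apple-touch-icon.png",       (180, 180), BRAND_DARK),     # iOS home screen
    ("android-chrome-192x192.png", (192, 192), None),           # PWA manifest icon
    ("android-chrome-512x512.png", (512, 512), None),           # PWA splash icon
    ("icon-512.png",               (512, 512), None),           # high-res PNG fallback
]

def manifest_icons(spec=FAVICON_SPEC) -> list[dict]:
    """Emit the `icons` entries a web app manifest needs
    for the Android PWA sizes in the spec."""
    return [
        {"src": name, "sizes": f"{w}x{h}", "type": "image/png"}
        for name, size, _bg in spec
        if size and name.startswith("android-chrome")
        for w, h in [size]
    ]
```

Driving the generation step from one spec like this means a change to the palette or the icon set happens in exactly one place — the same discipline the rest of the pipeline relies on.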

This is the kind of asset pipeline that takes a human team days of back-and-forth with a designer. We completed it in a single session — but only because every decision was made deliberately before the generation step.

The Real Skill in Vibe Coding Is Not Prompting

The discourse around vibe coding focuses almost entirely on prompting technique. Write better prompts, get better output. That is true at the margins, but it misses the actual leverage point: the quality of vibe coding output is determined by the quality of decisions made before the prompt is written.

What codec settings preserve audio while hitting a target file size? Which accent color maintains contrast across both light and dark browser chrome? Should the SVG viewBox be tight-cropped or padded? These are engineering decisions that require domain knowledge, visual judgment, and an understanding of deployment contexts. No prompt template gives you that. No AI model has that taste.

We have written about how our specialized AI sub-agent architecture powers this kind of disciplined workflow — each agent scoped to a specific domain, each operating under human-defined constraints rather than freeform generation. The architecture matters because it encodes the discipline that "prompt and pray" lacks.

Vibe Coding Works When the Human Is the Architect

The session that produced a complete SaaS brand asset pipeline — logos, favicons, compressed videos, OG images, platform-specific icon sets — took a fraction of the time it would have taken with traditional tools and workflows. That is the real promise of AI-accelerated engineering. Not that the AI replaces judgment, but that it executes at machine speed once the judgment is applied.

Vibe coding done right is not about vibes at all. It is about taste, constraints, and deliberate decisions — with AI as the hands, not the brain. The engineers who treat it that way are shipping production-grade work at speeds that look impossible from the outside. The ones who treat it like a slot machine are generating Medium posts about why AI-generated code is always broken.

The difference was never the tools. It was always the engineering.