# The Vibe Coding Masterclass

> **The Complete Framework for Building Production Apps with AI**
>
> This guide is broken into 3 phases. Phase 1 (free) teaches the foundation.
> Phases 2-3 are included in the Complete Vault.

---

## Phase 1: The Foundation

### The Core Mindset Shift

Stop asking the AI to "build a feature."

LLMs excel at writing isolated functions with perfect syntax. But they lose context across large architectures. When you prompt "build me a dashboard with auth," you're asking the model to simultaneously solve authentication, database access, component architecture, error handling, and UI — across 8+ files. That's where hallucinations happen.

**Start asking the AI to "plan, then implement in batches, then self-audit."**

### The 3-Stage Agentic Loop

Every feature you build should follow this exact sequence:

#### Stage 1: PLAN (The Architect Prompt)

Before the AI writes a single line of code, force it to think architecturally:

> *"I need to add Stripe subscription management to this app. Before writing any code, create an Architectural Plan that includes:*
> 1. *Which files you'll create or modify*
> 2. *Which are Server Components vs Client Components (justify each)*
> 3. *What database changes are needed (tables, RLS policies)*
> 4. *What environment variables are required*
> 5. *What edge cases or security concerns exist*
>
> *Wait for my approval before proceeding."*

This single prompt change is worth more than any boilerplate. By forcing the AI to plan, you catch bad decisions before they become 500 lines of bad code.

**Why this works:** LLMs are "eager to please" — they'll immediately start writing code to show you progress. But the first implementation they generate is usually based on the simplest (and often wrong) approach. Making them plan forces a deeper reasoning pass.

#### Stage 2: IMPLEMENT (Constrained Batches)

Once you approve the plan, tell the AI to implement in small, focused chunks:

> *"Execute Phase 1 only — create the database schema and RLS policies. Stop when Phase 1 is complete. Do not start the UI."*

Then review. Then:

> *"Phase 1 looks good. Execute Phase 2 — create the server actions for subscription management. Stop when done."*

**Why batches?** AI output quality degrades as context grows. A model generating 200 lines of code in one shot tends to make far more errors than one generating 50 lines four times. Small batches mean higher quality.

#### Stage 3: VERIFY (Self-Audit)

After each batch, ask the AI to review its own work:

> *"Review the server action you just wrote. Check:*
> 1. *Did you use getUser(), not getSession()?*
> 2. *Did you validate input with Zod?*
> 3. *Is the error handling consistent with our ActionResponse type?*
> 4. *Did you add rate limiting for this sensitive endpoint?*
>
> *Fix any issues you find."*

It's common for the AI to catch two or three issues in its own code when asked to self-audit. Those are bugs you don't ship to production.

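The audit checklist above leans on a shared `ActionResponse` type. Here is a minimal sketch of that pattern under stated assumptions: the type shape is inferred from the text, the validator is a hand-rolled stand-in for a Zod `safeParse`, and the auth/database calls are stubbed so the snippet runs on its own.

```typescript
// Shared result type every server action returns (name taken from the checklist above).
type ActionResponse<T> =
  | { success: true; data: T }
  | { success: false; error: string };

// Hypothetical stand-in for a Zod schema; a real project would use
// z.string().email().safeParse(input) instead.
function parseEmail(input: unknown): string | null {
  return typeof input === "string" && input.includes("@") ? input : null;
}

// Sketch of a server action that follows the audit checklist: validate input,
// return structured errors, never swallow exceptions silently.
async function updateEmailAction(
  input: unknown
): Promise<ActionResponse<{ email: string }>> {
  const email = parseEmail(input);
  if (email === null) {
    return { success: false, error: "Invalid email address" };
  }
  try {
    // Real code would call supabase.auth.getUser() here and update the row.
    return { success: true, data: { email } };
  } catch {
    // Structured error instead of console.log-and-continue.
    return { success: false, error: "Update failed. Please try again." };
  }
}
```

Because every action returns the same discriminated union, the UI can branch on `success` instead of guessing what went wrong.
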
---

### Why .mdc Rules Are Non-Negotiable

AI models are trained on internet data from 2-3 years ago. When you ask an LLM to build a Next.js page, it generates Next.js 13 patterns. It uses:

- `getSession()` instead of `getUser()` (deprecated, insecure)
- Synchronous `params` (crashes in Next.js 15)
- `@supabase/auth-helpers-nextjs` (deprecated in favor of `@supabase/ssr`)
- `'use client'` on everything (ships unnecessary JavaScript)

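The synchronous `params` item is the one that bites most often. Below is a minimal sketch of the Next.js 15 pattern the rules enforce; the props type is inlined and a plain string stands in for JSX so the snippet is self-contained.

```typescript
// Next.js 15: `params` is a Promise and must be awaited.
// Mirrors a hypothetical app/posts/[id]/page.tsx; JSX is replaced by a
// plain string here so this compiles without React.
type PageProps = { params: Promise<{ id: string }> };

export default async function Page({ params }: PageProps) {
  // Next.js 13/14 tutorials read `params.id` synchronously; in 15 that breaks.
  const { id } = await params;
  return `Post ${id}`;
}
```

In a real page you would return JSX; the awaited destructure is the only change that matters here.
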
The `.cursor/rules/` directory overrides these defaults. Each `.mdc` file is read automatically by Cursor based on which files you're editing (via `globs` in the YAML frontmatter).

You don't need to remember to mention security in your prompts. The rules inject that context automatically.

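For reference, a rule file could look like the sketch below. The frontmatter fields (`description`, `globs`, `alwaysApply`) follow Cursor's rule format, but the filename, the exact globs syntax, and the rule text are illustrative, so check Cursor's current docs before copying.

```
---
description: Supabase data-access rules
globs: app/**/*.ts,app/**/*.tsx
alwaysApply: false
---

- Use `@supabase/ssr` clients only; never import `@supabase/auth-helpers-nextjs`.
- Authenticate on the server with `supabase.auth.getUser()`, never `getSession()`.
- Every new table gets RLS enabled and policies written before any client code reads it.
```
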
### The Code Review Checklist

Never blind-commit AI code. Your role has shifted from Writer to **Senior Reviewer**.

After every AI-generated batch, check for:

1. **The "Everything is Client" Problem** — Did it add `'use client'` to a file that only fetches data? If yes, remove it — Server Components are faster and more secure.

2. **The "Silent Catch" Problem** — Check error handlers. AI loves `catch (error) { console.log(error) }`, leaving users with broken UIs and zero feedback. Force it to return structured errors.

3. **The "N+1 Query" Problem** — Did the AI put a database query inside a `.map()` loop? That fires 100 queries instead of 1. Use `.in()` for batch fetching.

4. **The "Hardcoded Secret" Problem** — Did it hardcode an API key or URL instead of using environment variables?

5. **The "Missing Validation" Problem** — Is user input flowing directly into a database query without Zod validation?

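The N+1 item is the most common performance bug in generated code, and it is easy to make the difference countable. In the toy sketch below the `fetch*` functions are hypothetical stand-ins for database calls; with Supabase the batched version would be a single `.select().in("id", ids)` query.

```typescript
// Counts simulated database round trips so the two shapes can be compared.
let queryCount = 0;

// Hypothetical one-row query: one round trip per call.
async function fetchTitleById(id: number): Promise<string> {
  queryCount++;
  return `post-${id}`;
}

// Hypothetical batched query: one round trip for all ids.
async function fetchTitlesByIds(ids: number[]): Promise<string[]> {
  queryCount++;
  return ids.map((id) => `post-${id}`);
}

// Anti-pattern the AI loves: a query inside .map() — N round trips.
async function titlesNPlusOne(ids: number[]): Promise<string[]> {
  return Promise.all(ids.map((id) => fetchTitleById(id)));
}

// The fix: collect the ids first, then fetch once.
async function titlesBatched(ids: number[]): Promise<string[]> {
  return fetchTitlesByIds(ids);
}
```

With 100 ids, the first shape costs 100 round trips and the second costs 1, which is the whole argument.
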
---

## Phase 2: Advanced Workflows (Complete Vault)

### Debugging AI-Generated Code

When AI code breaks, debugging follows a different process than debugging human-written code.

#### The 3-Layer Debug Protocol

**Layer 1: Trace the Error to a Rule Violation**

Most AI bugs are caused by violating one of your `.mdc` rules. Before digging into the code, ask:

> *"This error is occurring in `/app/dashboard/page.tsx`. Check our project rules and tell me which rules apply to this file. Then review the file against each applicable rule and identify any violations."*

The AI will cross-reference the file against your rules and often immediately identify the bug.

**Layer 2: Version Mismatch Detection**

If the error isn't a rule violation, it's likely a version mismatch. Ask:

> *"Check `package.json` for the versions of Next.js, React, and Supabase. Then review the code in this file and tell me if any patterns are incompatible with these specific versions."*

Common finds: Next.js 14 patterns in a Next.js 15 project, React 18 patterns in React 19.

**Layer 3: Isolation**

If the first two layers don't find it:

> *"Create a minimal reproduction of this bug in a single file. Strip out everything except the broken functionality. Show me the simplest code that reproduces the error."*

Isolation forces the AI to separate the bug from the complexity of your codebase.

### Multi-File Orchestration

For features spanning 5+ files, use the **Dependency Map** technique:

> *"Before implementing, draw me a dependency map of this feature. Show which files import from which other files, and the order they should be created to avoid import errors."*

Then implement bottom-up: utilities first, then types, then server logic, then UI.

### The Context Window Budget

Every AI model has a context window limit. When you exceed it, the model starts "forgetting" earlier instructions — including your rules.

**Symptoms of context overflow:**
- The AI starts using `getSession()` again (it forgot the rule)
- Generated code contradicts instructions from earlier in the conversation
- The AI starts repeating itself or generating generic code

**The fix:** Start a new conversation. Your `.mdc` rules survive conversation resets because they're loaded from disk, not from chat history. This is the entire architectural advantage of rule files over prompt engineering.

---

## Phase 3: Agentic Power Tools (Complete Vault)

### MCP: Giving Your AI Superpowers

MCP (Model Context Protocol) lets your AI connect to external tools directly from your IDE.

With the 4 MCP servers in Vibe Stack, your AI can:

1. **Read your GitHub PRs** — "Review the latest PR and tell me if it violates any of our architecture rules"
2. **Inspect your Supabase schema** — "Check if all tables have RLS enabled and flag any that don't"
3. **Navigate your file system** — "Find all files that import from the deprecated auth-helpers package"
4. **Browse your running app** — "Take a screenshot of the /dashboard page and tell me if the layout matches the mockup"

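Wiring up a server is a small JSON file. The sketch below shows what a `.cursor/mcp.json` entry for a GitHub server might look like; the package name is the reference MCP GitHub server, and the exact keys are an assumption to verify against Cursor's MCP documentation.

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<your token>" }
    }
  }
}
```
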
### n8n: Automating the Business Layer

Code is only half of a SaaS. The other half is distribution, lead capture, and customer management.

The 3 n8n workflows included in Vibe Stack automate:

1. **Lead Capture** — Every GitHub star triggers an automated welcome email with a link to your paid product
2. **Content Distribution** — New Dev.to articles are automatically promoted on social media
3. **Customer Onboarding** — Every Gumroad sale creates a CRM entry and sends a branded onboarding email

These workflows replace tools like Mailchimp ($50/mo), Zapier ($30/mo), and a CRM ($30/mo) — saving $110/month on a self-hosted n8n instance.

### The Complete Feature Prompt Template

For any new feature, use this exact prompt template:

```
I need to build [FEATURE NAME].

Context:
- This is a [Next.js 15 / React 19 / Supabase] project
- The user is authenticated via cookie-based Supabase SSR
- We use Server Components by default, Client Components only when needed
- All server actions use Zod validation and return ActionResponse<T>

Requirements:
[List specific requirements]

Before writing code:
1. Create an Architectural Plan listing all files to create/modify
2. Specify Server vs Client for each component (justify)
3. List database changes (tables, columns, RLS policies)
4. Identify security considerations
5. Estimate the number of implementation phases

Wait for my approval before implementing.
```

This template encodes all of your architectural constraints into a single reusable block, ensuring consistent, high-quality output regardless of which model you're using.

---

*Happy Vibe Coding. Build fast, ship secure, and let the AI do the heavy lifting — within the guardrails you've set.*