Quick solutions for common AI-assisted development problems
- Quick Symptom Lookup
- Emergency Flowcharts
- AI Behavior Issues
- Code Quality Problems
- Workflow Bottlenecks
- Tool-Specific Problems
- Common Error Patterns
- Debugging Checklist
- When to Ask for Help
Find your problem → Get quick fix → See detailed solution
AI Behavior Issues

| Symptom | Likely Cause | Quick Fix | Details |
|---|---|---|---|
| AI forgets previous decisions | Context overflow >85% | Externalize to .md files | Context Management |
| Generates code that doesn't compile | Missing context or incorrect syntax | Provide compiler error + context | Error Recovery |
| Suggests outdated patterns | No project context loaded | Create .cursorrules file | Context-Aware Prompting |
| Makes up APIs/functions that don't exist | Hallucination | Be specific, reference real code | Specificity Problem |
| Contradictory suggestions | Context confusion | Start fresh session with summary | Session Reset |
| Asks too many clarifying questions | Vague prompt | Use prompt templates | Prompt Templates |
| Implements wrong feature | Misunderstood requirements | Add concrete examples | Example-Driven Prompts |
| Ignores constraints | Constraints not explicit | Use numbered list with emphasis | Constraint Communication |
| Over-engineers solution | No simplicity guidance | Add "Keep it simple" + constraints | Over-Engineering |
| Generates insecure code | Security not mentioned | Use security checklist | Security Review |
| Slow responses | Context too large | Remove irrelevant files | Context Optimization |
| Incomplete responses (cuts off) | Context limit reached | Break into smaller prompts | Response Truncation |
| Different style than project | No style guide referenced | Add style examples | Code Style Inconsistency |
| Can't debug complex issues | Insufficient data | Provide logs + stack traces | Debugging Guide |
| Suggests wrong libraries/tools | Lacks project knowledge | Specify tech stack upfront | Tool Selection |
Code Quality Problems

| Symptom | Likely Cause | Quick Fix | Details |
|---|---|---|---|
| Buggy code on first try | Vague requirements | Use detailed prompts | Code Quality Section |
| Missing error handling | Not specified in prompt | Add error handling requirement | Error Handling |
| No input validation | Security not mentioned | Specify validation rules | Input Validation |
| Poor variable naming | No style guide | Provide naming examples | Naming Conventions |
| Duplicated code | No DRY instruction | Request abstraction | Code Duplication |
| Hard-coded values | No configuration guidance | Ask for config extraction | Hard-Coded Values |
| Missing edge case handling | Cases not mentioned | List edge cases explicitly | Edge Cases |
| Performance issues | Efficiency not prioritized | Add performance requirements | Performance Problems |
| No tests generated | Tests not requested | Use test templates | Testing Templates |
| SQL injection vulnerability | Security not reviewed | Add security audit | SQL Injection |
| XSS vulnerability | Input not sanitized | Specify sanitization | XSS Prevention |
| Memory leaks | Cleanup not specified | Add cleanup requirements | Memory Leaks |
| Race conditions | Concurrency not addressed | Specify sync requirements | Race Conditions |
| Tight coupling | Architecture not defined | Request loose coupling | Tight Coupling |
| Missing documentation | Docs not requested | Use doc templates | Documentation |
Workflow Bottlenecks

| Symptom | Likely Cause | Quick Fix | Details |
|---|---|---|---|
| Too many iterations | Poor initial prompts | Study prompt foundations | Prompting Guide |
| Context constantly full | Not using .md files | Externalize plans to docs/ | MD-Based Workflow |
| Losing work between sessions | No session handoffs | Write handoff docs | Session Handoffs |
| Slow development pace | Sequential instead of parallel | Use parallel prompting | Parallel Prompting |
| Repeating same explanations | No persistent instructions | Create .cursorrules | Project Context |
| Can't resume work easily | No documentation | Write current-state.md | Resumability |
| Debugging takes too long | Insufficient diagnostic info | Use debug templates | Debug Templates |
| Breaking existing features | No integration tests | Add test requirements | Integration Testing |
| Unclear next steps | No task breakdown | Use TodoWrite/Task Manager MCP | Task Management |
| Overwhelmed by complexity | Large tasks not broken down | Use multi-step workflows | Multi-Step Workflows |
| Inconsistent code style | No conventions documented | Write style guide | Style Guide Creation |
| Forgetting project details | No knowledge base | Create project-knowledge.md | Project Memory |
| MCP servers not helping | Wrong MCP for task | Check MCP decision tree | MCP Usage |
| High AI costs | Inefficient prompting | Optimize prompts | Cost Optimization |
| Team inconsistency | No shared practices | Create team templates | Team Coordination |
Tool-Specific Problems

| Tool | Symptom | Quick Fix | Details |
|---|---|---|---|
| Claude Code CLI | Can't find files | Use Glob tool | File Search |
| Claude Code CLI | Slow performance | Check context usage | Performance |
| Claude Code CLI | Task agent fails | Check agent type | Agent Selection |
| Cursor | Composer fails | Reduce file count | Composer Issues |
| Cursor | Wrong file changes | Be more specific | File Selection |
| Windsurf | Cascade confusion | Clear context | Cascade Reset |
| Zed | Workflow not working | Check workflow syntax | Workflow Debug |
| Droid CLI | Plan mode stuck | Break into smaller plan | Plan Issues |
| Droid CLI | Act mode fails | Review plan clarity | Act Issues |
| MCP DevTools | Can't connect | Check browser state | DevTools Connection |
| MCP Context7 | Wrong docs returned | Specify query better | Context7 Queries |
| Clavix | Poor spec quality | Provide more detail | Spec Generation |
| Git Integration | Commit failures | Check permissions | Git Problems |
AI generates unusable code?
│
├─ Check 1: Is context > 85% full?
│ ├─ YES → Externalize to .md files
│ │ Start new session with handoff
│ │ ✓ Problem solved
│ │
│ └─ NO → Continue to Check 2
│
├─ Check 2: Is your prompt vague?
│ ├─ YES → Use 4-component framework
│ │ - Clarity: What you want
│ │ - Context: What AI needs
│ │ - Constraints: Boundaries
│ │ - Criteria: Success metrics
│ │ ✓ Try again with better prompt
│ │
│ └─ NO → Continue to Check 3
│
├─ Check 3: Does AI have project context?
│ ├─ NO → Load architecture docs
│ │ Reference existing patterns
│ │ Create .cursorrules
│ │ ✓ Retry with context
│ │
│ └─ YES → Continue to Check 4
│
├─ Check 4: Is this the right model for the task?
│ ├─ NO → Switch to better model
│ │ (Claude Sonnet/Opus for complex)
│ │ (GLM/Qwen for standard tasks)
│ │ ✓ Retry with better model
│ │
│ └─ YES → Continue to Check 5
│
└─ Check 5: Is the task too complex for one prompt?
├─ YES → Break into multi-step workflow
│ Verify each step
│ ✓ Success!
│
└─ NO → Problem is elsewhere
→ Check detailed sections below
Stuck debugging for >30 minutes?
│
├─ Have you provided error message + stack trace?
│ ├─ NO → Gather diagnostics:
│ │ - Full error message
│ │ - Stack trace
│ │ - Relevant code (file:line)
│ │ - What you were doing
│ │ ✓ Provide to AI, retry
│ │
│ └─ YES → Continue
│
├─ Have you tried minimal reproduction?
│ ├─ NO → Create minimal test case
│ │ Remove unrelated code
│ │ Isolate the problem
│ │ ✓ Debug minimal case
│ │
│ └─ YES → Continue
│
├─ Have you checked logs/console?
│ ├─ NO → Check:
│ │ - Browser console (F12)
│ │ - Server logs
│ │ - Network tab
│ │ - Database logs
│ │ ✓ Gather all error info
│ │
│ └─ YES → Continue
│
├─ Have you searched for similar issues?
│ ├─ NO → Search:
│ │ - Error message in Google
│ │ - GitHub issues
│ │ - Stack Overflow
│ │ ✓ Try suggested solutions
│ │
│ └─ YES → Continue
│
├─ Have you asked AI with full context?
│ ├─ NO → Use debug template:
│ │ - Error + stack trace
│ │ - Relevant code
│ │ - What you tried
│ │ - Expected vs actual
│ │ ✓ AI can help now
│ │
│ └─ YES → Time to ask humans
│ → See "When to Ask for Help" section
Most debugging failures aren't AI failures—they're context failures.
If you're being forced to "start over" on a project, one of two things is true:
- 5% of the time: you're building something genuinely complex at enterprise scale
- 95% of the time: you're forcing progress instead of instructing the AI properly
The real issue: lack of context knowledge in the HUMAN, not in the AI's context window.
- ❌ Console errors not copied
- ❌ Network requests not checked
- ❌ Reproduction steps not documented
- ❌ Expected vs actual behavior not explained
- ❌ Environment details not provided
Result: AI guesses blindly. You waste hours. Nothing works.
Before asking AI to debug:
- Open DevTools (F12)
- Check Console - Copy ALL errors
- Check Network - Find failed requests
- Reproduce - Write exact steps
- Gather Context - Error + behavior + environment
Then provide AI with complete picture:
"Getting this error: [paste exact error]
Console: [error message + stack trace]
Network: [failed request + response]
Reproduction: [exact steps]
Expected: [what should happen]
Actual: [what happens]
Code: [file:line]
Can you identify the root cause?"
Without context:
- AI makes 20 guesses
- You try 20 "fixes"
- Still broken
- You blame AI
With context:
- AI identifies root cause immediately
- One targeted fix
- Problem solved
- Move forward
You wouldn't throw away a 20-hour project in traditional development.
Don't do it in AI-assisted development either.
- ✅ New chat window (fresh context) - Good
- ❌ New project (throw away work) - Bad
Fix the process (context gathering), not the project.
📖 Full Guide: The Human Context Problem in AI Debugging
Learn:
- Real-world examples (before/after)
- Complete context gathering checklist
- How to lead AI through complex debugging
- When tools like DevTools MCP help
- Common scenarios and solutions
5 minutes reading = Hours saved debugging
Token costs too high?
│
├─ Are you repeating context every prompt?
│ ├─ YES → Externalize to .md files
│ │ Reference instead of repeating
│ │ Use .cursorrules for conventions
│ │ ✓ 70-90% token savings
│ │
│ └─ NO → Continue
│
├─ Are you loading too many files?
│ ├─ YES → Progressive context loading
│ │ Include only what's needed
│ │ Add more files as needed
│ │ ✓ 50-80% token savings
│ │
│ └─ NO → Continue
│
├─ Are prompts overly verbose?
│ ├─ YES → Use templates
│ │ Reference documentation
│ │ Be concise but specific
│ │ ✓ 40-60% token savings
│ │
│ └─ NO → Continue
│
├─ Are you using multi-step when single-shot works?
│ ├─ YES → Combine related tasks
│ │ Use single comprehensive prompt
│ │ ✓ Reduce prompt count
│ │
│ └─ NO → Continue
│
└─ Using expensive model for simple tasks?
├─ YES → Use cheaper models:
│ - GLM for standard coding
│ - Qwen for boilerplate
│ - Claude only for complex
│ ✓ 80-95% cost savings
│
└─ NO → Costs are probably justified
Track ROI: time saved vs cost
New code breaks existing features?
│
├─ Do you have tests?
│ ├─ NO → Write tests ASAP
│ │ Run before and after changes
│ │ ✓ Prevent future breakage
│ │
│ └─ YES → Continue
│
├─ Did you run tests before merging?
│ ├─ NO → Always run tests!
│ │ git checkout previous commit
│ │ ✓ Revert and test
│ │
│ └─ YES → Tests might be incomplete
│
├─ Did you specify "don't break existing features"?
│ ├─ NO → Always specify:
│ │ "Maintain backward compatibility"
│ │ "Don't change existing API"
│ │ ✓ AI will be more careful
│ │
│ └─ YES → Continue
│
├─ Did you review changes before committing?
│ ├─ NO → Always review diffs
│ │ Check for unexpected changes
│ │ ✓ Catch issues early
│ │
│ └─ YES → Continue
│
└─ Was the scope too broad?
├─ YES → Break into smaller changes
│ Test incrementally
│ ✓ Easier to isolate breakage
│
└─ NO → Need better integration tests
Add test for broken scenario
✓ Prevent regression
Symptom: AI implements something completely different from what you wanted.
Root causes:
- Ambiguous language in prompt
- Missing examples
- No success criteria defined
- Technical terms used differently than AI expects
Solutions:
Solution 1: Use Examples
Bad: “Create a form with validation.”
Good: “Create a form with these exact behaviors:
- Email field: validate format on blur, show error ‘Invalid email’ below field.
- Submit disabled until all fields valid.
- Example: user types ‘test@’ → blur → red border + error shown.”
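To make the difference concrete, here is a sketch of the validation logic a correct response to the "Good" prompt might contain. The function names, the regex, and the error copy are illustrative assumptions, not part of the original prompt:

```typescript
// Illustrative blur-validation sketch; EMAIL_RE is an assumed simple format check.
const EMAIL_RE = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

export function validateEmailOnBlur(value: string): { valid: boolean; error?: string } {
  if (!EMAIL_RE.test(value)) {
    // Matches the prompt's example: 'test@' → blur → error shown.
    return { valid: false, error: 'Invalid email' };
  }
  return { valid: true };
}

export function isSubmitEnabled(fields: { valid: boolean }[]): boolean {
  // Submit stays disabled until every field reports valid.
  return fields.every((f) => f.valid);
}
```

Because the prompt named exact behaviors, the generated logic can be checked against them directly instead of argued about.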
Solution 2: Define Success Criteria
Add to every prompt:
“Success means:
- [Specific measurable outcome 1]
- [Specific measurable outcome 2]
- [Specific measurable outcome 3]”
Solution 3: Show, Don't Just Tell
Instead of: “Make it look professional.”
Show: “Use this design: [link to screenshot or design].”
Or: “Match the style of [existing component].”
Prevention:
- Read Prompt Foundations
- Use Prompt Templates
- Always include concrete examples
Symptom: AI generates code using functions/APIs that don't exist in your project or libraries.
Example:
```typescript
// AI generates:
import { magicValidator } from 'my-app/validators'; // DOESN'T EXIST!
user.validateMagically(); // DOESN'T EXIST!
```

Root causes:
- AI guessing based on patterns
- Confusing your project with similar projects
- Outdated training data
Solutions:
Solution 1: Reference Real Code
Use the existing validation pattern from `src/utils/validation.ts`:

```typescript
export function validateEmail(email: string): boolean {
  return EMAIL_REGEX.test(email);
}
```

Don’t make up new functions. Use this exact pattern.
Solution 2: Be Explicit About What Exists
Available utilities in `src/utils/`:
- `validation.ts`: `validateEmail`, `validatePassword`
- `crypto.ts`: `hash`, `verify`
- `strings.ts`: `slugify`, `truncate`

Use only these. Don’t create new utilities.
Solution 3: Verify After Generation
After generating code, verify that all imported functions exist.
If you’re unsure, ask before including them in your response.
Prevention:
- Provide explicit file references
- Use "Follow pattern in [existing file]"
- Add verification step to prompts
Symptom: AI generates code that violates specified constraints (file size limits, no external dependencies, etc.)
Example:
You: "Don't use any external libraries"
AI: *Installs 5 npm packages*
Root causes:
- Constraints buried in long prompt
- Constraints stated weakly ("try to avoid..." vs "MUST NOT")
- AI prioritizes functionality over constraints
Solutions:
Solution 1: Emphasize Constraints
CRITICAL CONSTRAINTS (must follow):
- No external dependencies (use only Node.js built-ins).
- File size max 100 lines.
- No database queries (work with in-memory data).
If any constraint is impossible, ask before generating code.
Solution 2: Front-Load Constraints
Structure your prompt:
- Constraints first.
- Requirements second.
- Implementation details last.
This ensures the AI processes constraints before the solution.
Solution 3: Use Self-Verifying Prompts
After generating code, verify:
- No external dependencies used.
- File is under 100 lines.
- No database queries.
If any check fails, revise the code before responding.
Prevention:
- Always list constraints first
- Use strong language ("MUST", "REQUIRED", "NEVER")
- Add verification step
Symptom: AI generates overly complex solutions with unnecessary abstractions, design patterns, or features.
Example:
You: "Create a function to add two numbers"
AI: *Creates abstract factory pattern with interfaces, dependency injection, strategy pattern...*
Root causes:
- No simplicity guidance
- AI trained on enterprise codebases
- No scope boundaries
Solutions:
Solution 1: YAGNI Principle
“Create a simple function to add two numbers.
Requirements:
- Just add two numbers, nothing more.
- No abstractions, interfaces, or design patterns.
- Keep it simple (KISS principle).
- YAGNI: You Aren’t Gonna Need It.
- Maximum 5 lines of code.”
Solution 2: Specify Scope Explicitly
“Implement only what’s needed for the current use case. Don’t add:
- Abstract base classes.
- Dependency injection.
- Strategy patterns.
- Configuration systems.
- Logging frameworks.
Just solve the immediate problem.”
Solution 3: Reference Simple Examples
“Follow the simplicity of this example: [paste simple, direct code]
Keep the same level of simplicity—no additional complexity.”
Prevention:
- Always add "Keep it simple"
- Specify maximum lines of code
- Reference simple examples
- Use YAGNI/KISS principles explicitly
Symptom: AI forgets previous decisions, contradicts earlier responses, or "doesn't remember" what you discussed.
Root causes:
- Context window >85% full
- Too many messages in conversation
- Important info early in conversation (recency bias)
Solutions:
Solution 1: Session Handoff
Write a summary of your work to `docs/context/session-handoff.md`:
- What was implemented.
- Decisions made (with reasoning).
- Current state.
- Next steps.
Be comprehensive—you’ll start a new session after this.
New session prompt: “Read `docs/context/session-handoff.md` and continue from where we left off.”
Solution 2: Externalize Decisions
When the AI makes an important decision, document it in `docs/decisions/[number]-[topic].md`:
- What you decided.
- Why you decided it.
- Alternatives considered.
- Implications.
Future prompts: “Per `docs/decisions/003-auth-approach.md`, implement …”
Solution 3: Context Refresh
If context usage exceeds 70%:
“Summarize our key decisions and current progress. Then we’ll start fresh with that summary.”
Prevention:
- Monitor context usage (60-85% rule)
- Externalize to .md files early
- Use Context Management strategies
Symptom: AI takes >30 seconds to respond, or responses are delayed.
Root causes:
- Context too large (>100k tokens)
- Too many files included
- Complex request requiring lots of processing
- API/model provider issues
Solutions:
Solution 1: Reduce Context
“Remove all files from context except:
- `src/api/users.ts` (relevant to the current task)
- `docs/architecture/api-design.md`

Now implement [task].”
Solution 2: Progressive Loading
Don’t load all files at once.
- Start with minimal context plus an architecture overview.
- Add files only as needed for the current step.
- Remove files when a task is complete.
Solution 3: Use Faster Model
For simple tasks:
- Use GLM (fast, cheap).
- Use Gemini Flash.
- Use Qwen Coder.
For complex tasks:
- Use Claude Sonnet.
- Use Claude Opus (only when necessary).
Prevention:
- Keep context under 50k tokens when possible
- Use Progressive Loading
- Choose right model for task complexity
Symptom: AI response cuts off mid-sentence or mid-code block.
Root causes:
- Response length limit reached
- Context window completely full
- Output token limit hit
Solutions:
Solution 1: Ask to Continue
“Continue from where you left off.”
Or be more specific: “Continue the function implementation from line 45.”
Solution 2: Request Smaller Chunks
Instead of: “Implement entire authentication system.”
Do: “Implement just the login function (Step 1 of 5).”
Complete each step before moving to the next.
Solution 3: Reduce Prompt Size
Shorten your prompts:
- Use template references instead of full requirements.
- Reference docs instead of pasting content.
- Focus on the current subtask only.
Prevention:
- Break large tasks into steps
- Keep prompts concise
- Monitor context usage
Symptom: AI generates code in different style (tabs vs spaces, naming conventions, patterns) than your project.
Root causes:
- No style guide provided
- No example code referenced
- AI using generic style
Solutions:
Solution 1: Create .cursorrules
Create `.cursorrules` in the project root:

```markdown
# Code Style Guide

## Formatting
- Indentation: 2 spaces (not tabs)
- Quotes: single quotes for strings
- Semicolons: required
- Line length: max 100 characters

## Naming Conventions
- Files: kebab-case (user-service.ts)
- Components: PascalCase (UserCard)
- Functions: camelCase (getUserById)
- Constants: UPPER_SNAKE_CASE (API_URL)

## Patterns
- React: functional components only (no classes)
- Async: use async/await (no .then())
- Errors: custom error classes (not generic Error)
- Imports: group by type (external, internal, relative)

## Examples
[Include code snippets showing style]
```

Now the AI automatically follows these conventions.
Solution 2: Reference Style Examples
“Follow the exact code style from `src/components/UserCard.tsx`:
- Same formatting.
- Same naming patterns.
- Same import structure.
- Same comment style.”
Solution 3: Request Style Match
“Generate code matching the style of [existing file]. Specifically match:
- Indentation.
- Naming conventions.
- Function structure.
- Comment style.”
Prevention:
- Always create .cursorrules early
- Reference existing files for patterns
- Include style requirements in prompt templates
Symptom: AI-generated code has obvious bugs, doesn't compile, or fails immediately when run.
Root causes:
- Ambiguous requirements
- Missing edge cases
- Incomplete context
- Model limitations
Solutions:
Solution 1: Add Test Cases to Prompt
“Create a function to calculate factorial.
Must handle:
- `factorial(0) = 1`
- `factorial(1) = 1`
- `factorial(5) = 120`
- `factorial(-1)` throws `Error('Negative numbers not allowed')`
- `factorial(100)` uses `BigInt` for large numbers

Include test cases demonstrating all scenarios work correctly.”
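A prompt that precise has an equally precise target. As a hedged sketch (the exact signature is an assumption), an implementation satisfying every listed case might look like:

```typescript
// Sketch of the factorial described in the prompt above.
export function factorial(n: number): bigint {
  if (!Number.isInteger(n)) throw new Error('Integer required');
  if (n < 0) throw new Error('Negative numbers not allowed');
  let result = BigInt(1); // covers factorial(0) = 1 and factorial(1) = 1
  for (let i = 2; i <= n; i++) {
    result *= BigInt(i); // BigInt keeps factorial(100) exact
  }
  return result;
}
```

Listing the test cases in the prompt means each one can be run verbatim against the generated code.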
Solution 2: Request Self-Verification
“After generating code, verify:
- Code compiles without errors.
- All test cases pass.
- Edge cases are handled.
- No obvious bugs remain.
If verification fails, fix the code before responding.”
Solution 3: Provide More Context
“Implement user login.
Context:
- Uses JWT authentication (`src/utils/jwt.ts`)
- Password hashing with bcrypt (`src/utils/crypto.ts`)
- User model in `src/models/User.ts`
- Error handling pattern from `src/api/posts.ts`

Reference these files for patterns and interfaces.”
Prevention:
- Use detailed prompts with examples
- Include test scenarios
- Reference existing working code
- Add self-verification step
Symptom: AI generates happy-path code with no try-catch, no validation, no error responses.
Root causes:
- Error handling not mentioned in prompt
- AI prioritizes functionality over robustness
Solutions:
Solution 1: Explicit Error Requirements
“Implement a file upload endpoint.
Error handling requirements:
- File too large (>10MB) → `400` with ‘File exceeds maximum size’
- Invalid file type → `400` with ‘Only PDF, PNG, JPG allowed’
- Network error during upload → `500` with ‘Upload failed, please retry’
- Missing file in request → `400` with ‘No file provided’
- S3 upload fails → `500` with ‘Storage error’ and log the error

Use try-catch blocks. Return consistent error format: `{ error: { code, message } }`.”
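The consistent error shape the prompt asks for can be enforced with a tiny helper. This is a minimal sketch; the helper name and the error codes are assumptions, not part of the original prompt:

```typescript
// The consistent error format from the prompt: { error: { code, message } }.
export interface ErrorBody {
  error: { code: string; message: string };
}

export function errorResponse(code: string, message: string): ErrorBody {
  return { error: { code, message } };
}

// Example: the "file too large" case mapped to a 400 body.
export function fileTooLargeBody(): ErrorBody {
  return errorResponse('FILE_TOO_LARGE', 'File exceeds maximum size');
}
```

Routing every failure through one helper makes it hard for the AI (or a human) to drift into ad-hoc error shapes.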
Solution 2: Reference Error Patterns
“Follow the error handling pattern from `src/api/users.ts`:
- Try-catch around async operations.
- Specific error messages for different failures.
- Proper HTTP status codes.
- Error logging.
- User-friendly error messages.”
Solution 3: Add to .cursorrules
Add this to `.cursorrules`:

```markdown
## Error Handling
- ALWAYS use try-catch for async operations.
- ALWAYS validate input.
- ALWAYS return user-friendly error messages.
- ALWAYS log errors with context.
- Use custom error classes (`ValidationError`, `NotFoundError`, etc.).
```
Prevention:
- Always mention error handling in prompts
- Create error handling template
- Reference error patterns
- Add to project conventions
Symptom: AI generates code that accepts any input without validation, leading to crashes or security vulnerabilities.
Root causes:
- Validation not specified
- Security not prioritized
Solutions:
Solution 1: Specify Validation Rules
“Create a user registration endpoint.
Validation requirements:
- Email: required, valid format (RFC 5322), max 255 characters, unique (check database).
- Password: required, minimum 8 characters, must include uppercase, lowercase, number, special character.
- Name: required, 2–100 characters, letters and spaces only (no special characters).
Return `400` with field-specific errors for validation failures.”
Solution 2: Use Validation Library
“Use Zod for validation (already installed):
```typescript
const registerSchema = z.object({
  email: z.string().email().max(255),
  password: z.string().min(8).regex(/[A-Z]/).regex(/[a-z]/).regex(/[0-9]/),
  name: z.string().min(2).max(100).regex(/^[a-zA-Z ]+$/)
});
```

Validate the request body against this schema before processing.”
Solution 3: Security-First Prompt
“Implement [feature] with security as the top priority.
Input validation:
- Sanitize all user input.
- Validate types and formats.
- Check length limits.
- Prevent SQL injection.
- Prevent XSS.
- Use a whitelist approach (accept only known-good data).
Fail securely: reject invalid input rather than trying to fix it.”
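The whitelist approach from the prompt above can be sketched in a few lines. The field names here are hypothetical, chosen only to illustrate the pattern:

```typescript
// Whitelist approach: accept only known-good values, reject everything else.
const ALLOWED_SORT_FIELDS = new Set(['name', 'email', 'createdAt']);

export function parseSortField(input: string): string {
  // Fail securely: reject invalid input rather than trying to sanitize it.
  if (!ALLOWED_SORT_FIELDS.has(input)) {
    throw new Error('Invalid sort field');
  }
  return input;
}
```

A value like `"email; DROP TABLE users"` is simply rejected: the whitelist never has to anticipate specific attack strings.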
Prevention:
- Always specify validation in prompts
- Use validation libraries
- Create validation template
- Security-first mindset
Symptom: AI-generated code works but is slow (N+1 queries, inefficient algorithms, etc.)
Root causes:
- Performance not mentioned
- AI prioritizes simplicity/readability over efficiency
Solutions:
Solution 1: Specify Performance Requirements
“Implement user search with performance requirements.
Current: 5 seconds for 10,000 users. Target: under 500 ms for 10,000 users.
Optimization requirements:
- Use database indexes (email, name columns).
- Pagination (20 results per page).
- Debounce search input (500 ms).
- Cache results (5 minutes).
- Avoid N+1 queries.
Measure and verify the performance meets the target.”
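One of the items above, "cache results (5 minutes)", can be illustrated with a small in-memory TTL cache. This is a sketch; the class and method names are assumptions, not a required design:

```typescript
// Minimal in-memory cache with a time-to-live, e.g. new TtlCache<Result>(5 * 60 * 1000).
export class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();

  constructor(private ttlMs: number) {}

  set(key: string, value: V): void {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key); // evict stale entries lazily on read
      return undefined;
    }
    return entry.value;
  }
}
```

Wrapping the search query in a cache like this lets repeated identical searches skip the database entirely, which is often the cheapest part of hitting a sub-500 ms target.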
Solution 2: Review and Optimize
“Review this code for performance issues: [paste code]
Identify:
- N+1 query problems.
- Inefficient algorithms (O(n²) or worse).
- Missing indexes.
- Unnecessary computations.
- Memory leaks.
Then optimize to meet the <500 ms target.”
Solution 3: Request Complexity Analysis
“Implement [algorithm] with:
- Time complexity: `O(n log n)` or better.
- Space complexity: `O(n)` or better.

Include Big-O analysis in comments. If the complexity target can’t be met, explain why.”
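As one example of code that meets both stated bounds, here is a merge sort sketch. It is only an illustration of the complexity targets, not a prescribed algorithm:

```typescript
// Merge sort: O(n log n) time, O(n) extra space.
export function mergeSort(items: number[]): number[] {
  if (items.length <= 1) return items;
  const mid = Math.floor(items.length / 2);
  const left = mergeSort(items.slice(0, mid));   // log n levels of recursion
  const right = mergeSort(items.slice(mid));
  // Merge two sorted halves in O(n).
  const merged: number[] = [];
  let i = 0;
  let j = 0;
  while (i < left.length && j < right.length) {
    merged.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return merged.concat(left.slice(i), right.slice(j));
}
```

Asking the AI to annotate each step this way makes it easy to spot when it silently hands back an O(n²) solution instead.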
Prevention:
- Always mention performance requirements
- Specify acceptable complexity
- Request performance analysis
- Profile before and after
Symptom: AI generates code with security vulnerabilities (SQL injection, XSS, insecure authentication, etc.)
Root causes:
- Security not mentioned
- Convenience prioritized over security
- Model trained on insecure examples
Solutions:
Solution 1: Security Audit Prompt
“Security audit this code: [paste code]
Check for:
- SQL injection vulnerabilities.
- XSS vulnerabilities.
- CSRF vulnerabilities.
- Insecure authentication/authorization.
- Sensitive data exposure.
- Insecure cryptography.
- Injection flaws.
- Broken access control.
For each vulnerability found, provide:
- Severity (Critical/High/Medium/Low).
- Exploitation scenario.
- Fix with secure code example.”
Solution 2: Security Requirements Upfront
“Implement user authentication with these security requirements:
Authentication:
- Passwords hashed with argon2 (not bcrypt or plain).
- Salt unique per user.
- JWT tokens with expiration.
- Secure random token generation.
- HTTPS only (no HTTP).
Authorization:
- User can only access their own resources.
- Admin role checked server-side.
- No client-side auth checks.
Input validation:
- SQL parameterized queries (prevent injection).
- HTML sanitization (prevent XSS).
- CSRF tokens for state-changing operations.
Follow OWASP Top 10 best practices.”
Solution 3: Use Security Checklist
“After generating code, verify security:
- No SQL injection (parameterized queries).
- No XSS (input sanitization).
- No CSRF (tokens for POST/PUT/DELETE).
- Passwords hashed (argon2/bcrypt).
- JWT secure (secret in env, expiration set).
- Authorization checked server-side.
- Sensitive data not logged.
- HTTPS enforced.
If any check fails, fix the code before responding.”
Prevention:
- Security-first prompting
- Use security templates
- Regular security audits
- Follow OWASP guidelines
Symptom: Takes 5-10+ iterations to get working code.
Root causes:
- Vague initial prompts
- No examples provided
- Missing context
Solutions:
Solution 1: Study Prompt Foundations
Take 30 minutes to read the Prompt Foundations Guide.
Learn the four-component framework:
- Clarity: What you want (be specific).
- Context: What the AI needs (files, patterns).
- Constraints: Boundaries (limits, requirements).
- Criteria: Success metrics (how to verify).
Apply it to the next prompt.
Solution 2: Use Templates
Don’t write prompts from scratch. Use the Template Library:
- Find a template matching your task.
- Fill in the `{{PLACEHOLDERS}}`.
- Send it to the AI.
First-try success rate: 70–90%.
Solution 3: Add Examples
Every prompt should include examples:
- Example 1: [Input] → [Expected output].
- Example 2: [Input] → [Expected output].
- Example 3: [Edge case] → [Expected handling].
Examples eliminate ambiguity.
Prevention:
- Always use detailed prompts
- Include examples
- Define success criteria
- Use templates
Symptom: You find yourself explaining the same conventions, patterns, or requirements in every prompt.
Root causes:
- No persistent project configuration
- Each session starts fresh
Solutions:
Solution 1: Create .cursorrules
Create `.cursorrules` in the project root with all conventions:

```markdown
# Project Conventions

## Tech Stack
- Frontend: React + TypeScript + Tailwind + Vite
- Backend: Node.js + Express + TypeScript
- Database: PostgreSQL + Prisma ORM
- Testing: Jest + React Testing Library

## Code Style
[Your style guide]

## Patterns
[Your common patterns]

## Don’t Repeat
- Always use argon2 for passwords (not bcrypt).
- Always use Prisma (not raw SQL).
- Always use functional components (not classes).
- [Other frequent instructions]
```

Now the AI automatically knows these without you repeating them.
Solution 2: Create docs/conventions.md
Create a detailed conventions directory:
```
docs/conventions/
├── code-style.md
├── api-design.md
├── database-patterns.md
├── testing-guidelines.md
└── security-requirements.md
```

Reference it in prompts: “Follow conventions in `docs/conventions/api-design.md`.”
Solution 3: Session Setup Prompt
At the start of each session:
“Project setup:
- Read `.cursorrules`.
- Read `docs/architecture/overview.md`.
- Follow patterns from `src/api/users.ts`.

Confirm you understand the project structure.”
Then proceed with the specific task.
Prevention:
- Create .cursorrules early
- Document conventions
- Reference docs in prompts
Symptom: When you come back to a project after days/weeks, it takes a long time to figure out where you left off.
Root causes:
- No session documentation
- No current-state tracking
- Relying on memory
Solutions:
Solution 1: Session Handoff Pattern
At the end of each work session:
“Create a session handoff document at `docs/context/session-[date].md`:
- [What was implemented]
- [What was fixed]
- [What was decided]
- [Feature X]: 80% complete (needs task A, task B)
- [Feature Y]: Not started
- [Bug Z]: Fixed and tested
- [Architectural decisions made]
- [Technical challenges encountered]
- [Workarounds implemented]
- [First task to tackle]
- [Second task]
- [Third task]
- [List of files changed] ”
Solution 2: Current State Document
Maintain `docs/context/current-state.md`. After major changes:

“Update `docs/context/current-state.md` with:
- Recently completed features.
- In-progress features.
- Known issues.
- Next priorities.”
Solution 3: Git Commit History
Write detailed commit messages, e.g.:
```
feat(auth): implement JWT token refresh

- Added refresh token to user model
- Created /auth/refresh endpoint
- Tokens expire after 24h, refresh extends
- Tests added for token refresh flow

Related to feature spec: docs/plans/auth-feature.md
Next: Implement logout (invalidate tokens)
```
Prevention:
- End-of-session handoffs
- Maintain current-state.md
- Detailed commit messages
- Use TodoWrite/Task Manager MCP
Symptom: Monthly AI bills are surprisingly high.
Root causes:
- Inefficient prompting (repeating context)
- Using expensive models for simple tasks
- Not optimizing context
Solutions:
Solution 1: Track Usage
Monitor:
- Tokens per feature (prompt + response).
- Model used for each task.
- Context size per session.
Baseline: “Feature X took 50k tokens with Claude Sonnet = $1.50.” Target: “Feature X should take 10k tokens = $0.30.”
Solution 2: Use Right Model for Task
Task complexity vs. model cost:
Simple (boilerplate, templates):
→ GLM (super cheap, 5h coding time limit)
→ Qwen Coder (cheap)
→ Gemini Flash (cheap)

Standard (most features):
→ Claude Sonnet (balanced)
→ GLM (great price/performance)

Complex (architecture, debugging):
→ Claude Sonnet (reliable)
→ Claude Opus (only when necessary)
Cost difference: 10–50× between models.
Solution 3: Optimize Prompts
Apply token optimization:
Before (50k tokens): “[Paste entire architecture doc] [Paste 10 code files] [Detailed explanation] Now implement feature X.”
After (5k tokens): “Reference docs/architecture/overview.md (read earlier). Implement feature X following the pattern in src/api/users.ts. Requirements: [concise list].”
90% token reduction!
Prevention:
- Use Prompt Optimization
- Choose appropriate models
- Externalize context to .md files
- Use templates
Symptom: Claude Code CLI can't find files you know exist.
Solutions:
- Use the Glob tool (not bash find): “Find files matching src/components/*.tsx.”
- Be specific with patterns: “Find files matching src/api/**/*.ts.”
- Check the current directory: “What is the current working directory? List files in this directory.”
Symptom: Task/Explore agents fail or return poor results.
Solutions:
- Choose the right agent:
- Explore agent: “Quick search for authentication files.”
- Plan agent: “Plan multi-step feature implementation.”
- General: “Complex research requiring multiple tools.”
- Specify thoroughness: “Use Explore agent with ‘very thorough’ mode to find all test files.”
- Break down tasks if too complex:
- Instead of: “Agent: research entire codebase.”
- Do: “Agent: find authentication-related files in
src/.” - Then: “Agent: analyze authentication flow in [found files].”
Symptom: Cursor Composer times out or produces poor results.
Solutions:
- Reduce file count in Composer:
- Include a maximum of 5–10 files.
- Use @codebase sparingly (very expensive).
- Reference specific files with @file.
- Clear context: press Cmd+K → Clear context → start fresh.
- Use Chat instead for complex tasks:
- Composer: quick edits, small changes.
- Chat: research, planning, complex features.
Symptom: Droid's plan mode gets stuck or generates poor plans.
Solutions:
- Be specific with planning requests: droid plan "Implement JWT authentication with refresh tokens, including user registration, login, token refresh endpoints".
- Break large plans into phases:
  - droid plan "Phase 1: User registration and login".
  - After completing Phase 1, run droid plan "Phase 2: JWT refresh token mechanism".
- Review and edit the generated plan before executing:
  - Plans are stored in the .tasks/ directory.
  - Edit them manually if needed.
Symptom: DevTools MCP can't connect to browser.
Solutions:
- Check that Chrome or Edge is running.
- Ensure the DevTools protocol is enabled (--remote-debugging-port=9222).
- Restart the MCP server.
- Verify MCP configuration in settings.
- Confirm no firewall is blocking localhost:9222.
What it means: Accessing property on null/undefined object.
Common causes:
- API returned null but not checked
- Array/object destructuring on undefined
- Async timing issue (accessing before loaded)
AI prompting fix:
“Fix null pointer error: [paste error + code]
Add null checks:
- Check if the object exists before accessing.
- Use optional chaining (?.).
- Provide default values.
- Handle loading/error states.”
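The checks in that prompt translate directly into code. A small sketch; the `User` shape and field names are made up for illustration:

```typescript
// Hypothetical API response shape — every level may be missing.
interface User {
  profile?: { displayName?: string };
}

// Unsafe: throws "Cannot read properties of undefined" if profile is missing.
// const name = user.profile.displayName;

// Safe: optional chaining plus a nullish-coalescing default.
function displayName(user: User | null | undefined): string {
  return user?.profile?.displayName ?? "Anonymous";
}

console.log(displayName({ profile: { displayName: "Ada" } })); // "Ada"
console.log(displayName(null));                                // "Anonymous"
console.log(displayName({}));                                  // "Anonymous"
```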
What it means: Import can't find the file/package.
Common causes:
- Typo in import path
- File doesn't exist
- Package not installed
- Incorrect relative path
AI prompting fix:
“Fix module not found error:
Error: Cannot find module './components/UserCard'
Context:
- File structure: [describe].
- Trying to import from: [file].
- Target file location: [path].
Verify the file exists and fix the import path.”
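One way to sanity-check a relative import yourself is to derive it from the two absolute paths with Node's path module. A sketch; the file paths are hypothetical:

```typescript
import * as path from "node:path";

// Given the importing file and the target file, derive the import specifier.
function importSpecifier(fromFile: string, targetFile: string): string {
  let rel = path.relative(path.dirname(fromFile), targetFile);
  rel = rel.replace(/\.(tsx?|jsx?)$/, "");    // drop the source extension
  if (!rel.startsWith(".")) rel = "./" + rel; // same-dir imports need "./"
  return rel.split(path.sep).join("/");       // imports use POSIX separators
}

console.log(
  importSpecifier("/app/src/pages/Home.tsx", "/app/src/components/UserCard.tsx")
); // "../components/UserCard"
```

If the computed specifier differs from the one in your import statement, you have found the typo.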
What it means: Infinite recursion.
Common causes:
- Recursive function with no base case
- Circular dependency
- useEffect with missing dependency array
AI prompting fix:
“Fix infinite recursion: [paste error + code]
Find and fix:
- Missing base case in recursion.
- Circular dependencies.
- useEffect causing an infinite loop.”
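The missing base case is the most common culprit. A minimal before/after sketch:

```typescript
// Broken: no base case — recurses until the call stack overflows.
// function countdown(n: number): number[] {
//   return [n, ...countdown(n - 1)];
// }

// Fixed: a base case stops the recursion.
function countdown(n: number): number[] {
  if (n < 0) return []; // base case: stop once n goes below zero
  return [n, ...countdown(n - 1)];
}

console.log(countdown(3)); // [ 3, 2, 1, 0 ]
```

The same pattern applies to `useEffect`: an effect that sets state it also depends on (or that has no dependency array) re-triggers itself forever; the "base case" there is a correct dependency array.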
When stuck, go through this systematically:
- Read the error message completely
- Check the stack trace for file:line
- Look at the code at that line
- Verify file exists and is where expected
- Check for typos in variable/function names
- Reproduce the error reliably
- Note what triggers it (specific action/input)
- Check browser console (F12)
- Check server logs
- Check network tab for failed requests
- Check database logs if applicable
- Create minimal reproduction
- Remove unrelated code
- Test with simple input
- Check if works in isolation
- Prepare complete error info
- Gather relevant code
- Note what you've tried
- Use Debug Template
Ask for help if:
- Stuck on same issue for >2 hours
- Tried 5+ different solutions, none worked
- Error makes no sense despite research
Ask for help if:
- Bug only in production, not reproducible locally
- Involves complex system interactions
- Related to infrastructure/deployment (outside codebase)
- Performance issue despite profiling
Ask for help if:
- Completely new technology/framework
- Security-critical implementation
- Database migration/architecture decision
- Legal/compliance concerns
- Project Documentation
- Check docs/ for answers.
- Search commit history.
- Review architecture decisions.
- AI Assistance
- Use comprehensive debug prompts.
- Provide all context.
- Try different models.
- Search Online
- Google the error message.
- Check Stack Overflow.
- Review GitHub issues for libraries.
- Read official documentation.
- Community Help
- Project Discord/Slack.
- Stack Overflow (with MCVE).
- Reddit communities.
- Framework-specific forums.
- Professional Help
- Hire an expert for consultation.
- Use a code review service.
- Ask a mentor/senior developer.
"My code doesn't work"
- Do I have the error message? → Use Debug Template
- Is the prompt vague? → Use detailed template
- Is context too full? → Externalize to .md
- Am I using right model? → Check task complexity
"It's taking too long"
- Too many iterations? → Improve prompts
- Context management issues? → Use .md files
- Wrong workflow? → Use multi-step approach
- High costs? → Optimize tokens, use cheaper models
"AI behaves weirdly"
- Forgets things? → Check context usage
- Ignores constraints? → Emphasize constraints
- Over-engineers? → Add simplicity requirements
- Hallucinates? → Reference real code
Related Documentation:
- Prompting Guide → Improve prompt quality
- Context Management → Manage AI context
- Development Workflow → Integrated workflow
- Development Tools → Tool setup and usage
Back to: Main Guide