Instead of showing raw metrics like this:

```
AI-Assisted Commits: 234
Solo Commits: 156
Velocity with AI: 2.34 commits/hour
Velocity solo: 1.89 commits/hour
```

we now show actionable insights like this:
```
═══════════════════════════════════════════════════
📊 YOUR PRODUCTIVITY ANALYSIS
═══════════════════════════════════════════════════

🎯 Overall Productivity Score: 67/100 (Grade: C+)

Breakdown:
• AI Effectiveness: 78/100 (40% weight)
• Shell Efficiency: 58/100 (30% weight)
• Workflow Quality: 64/100 (30% weight)

🤖 AI Impact on Productivity
• AI-Assisted Commits: 234 (60.0%)
• Velocity Improvement: +23.8%
• Code Volume: 12,456 lines with AI (67.3%)
• Copy-Paste Incidents: 12

🐚 Shell Command Analysis
• Total Commands: 8,945
• Failure Rate: 26.8%
• Time Wasted: 42.3 hours
• Struggle Sessions: 89
• Productivity Score: 58/100

🔗 Workflow Correlation Analysis
• Full Cycle Workflows: 67 (Struggle → AI → Commit)
• AI Helpfulness Rate: 75.3%
• Shell → AI: 89 instances
• AI → Commit: 145 instances

═══════════════════════════════════════════════════
🎯 ACTIONABLE RECOMMENDATIONS
═══════════════════════════════════════════════════

🟠 HIGH - Shell Efficiency
Issue: High command failure rate: 26.8%
Action: Use Ctrl+R for history, create aliases, ask AI earlier
Impact: Save ~42.3 hours/month

🟠 HIGH - Code Quality
Issue: Detected 12 copy-paste incidents from Claude
Action: Take time to understand code before committing. Ask Claude to explain complex parts.
Impact: Reduce bugs by 27%, improve code understanding

🟡 MEDIUM - Workflow
Issue: Detected 89 struggle sessions (multiple retries)
Action: Ask Claude earlier when stuck. Average 4+ retries before AI help - ask sooner!
Impact: Reduce frustration, solve problems 3x faster

🟡 MEDIUM - AI Effectiveness
Issue: AI only resolves 75.3% of struggles
Action: Provide more context when asking Claude. Include error messages, relevant code, and what you've tried.
Impact: Increase AI success rate to 90%+

═══════════════════════════════════════════════════
✓ Full analysis saved: /tmp/comprehensive-analytics.json
```
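The velocity-improvement figure follows directly from the two raw commit rates: (2.34 - 1.89) / 1.89 ≈ +23.8%. As a one-line sketch (the function name is illustrative, not the tool's API):

```rust
/// Relative change of the AI-assisted commit rate over the solo rate,
/// as a percentage.
fn velocity_improvement(with_ai: f64, solo: f64) -> f64 {
    (with_ai - solo) / solo * 100.0
}

fn main() {
    // Rates from the raw metrics: 2.34 vs 1.89 commits/hour.
    let pct = velocity_improvement(2.34, 1.89);
    println!("Velocity Improvement: {:+.1}%", pct); // +23.8%
}
```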
What the shell analyzer analyzes:
- Command success/failure rates
- Time wasted on failed commands
- Struggle sessions (3+ retries on related commands)
- Common error patterns
- Productivity scoring (0-100)
Key insights:
- "You have a 26.8% command failure rate - that's wasting 42 hours!"
- "You struggle with build failures (89 sessions detected)"
- "Most failed command: npm install (234 failures)"
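Struggle-session detection (3+ retries on related commands) can be approximated by scanning the history for runs of the same base command. A simplified sketch; the real analyzer also weighs error patterns, and `struggle_sessions` is an illustrative name:

```rust
/// Count "struggle sessions": runs of 3+ consecutive invocations of the
/// same base command (a simplified retry heuristic).
fn struggle_sessions(history: &[&str]) -> usize {
    // Base command = first whitespace-separated token ("npm install" -> "npm").
    let base = |cmd: &str| cmd.split_whitespace().next().unwrap_or("").to_string();
    let mut sessions = 0;
    let mut run = 1;
    for pair in history.windows(2) {
        if base(pair[0]) == base(pair[1]) {
            run += 1;
        } else {
            if run >= 3 {
                sessions += 1;
            }
            run = 1;
        }
    }
    if run >= 3 {
        sessions += 1;
    }
    sessions
}

fn main() {
    let history = [
        "npm install", "npm install --force", "npm install --legacy-peer-deps",
        "git status",
        "cargo build", "cargo build", "cargo build --release",
    ];
    println!("{}", struggle_sessions(&history)); // 2 (npm run and cargo run)
}
```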
What the workflow correlator tracks:
- Shell Error → Claude Help → Commit patterns
- Time to resolution with/without AI
- AI helpfulness rate (% of struggles resolved)
- 6 workflow pattern types
Key insights:
- "You ask Claude after 4 failed attempts on average - ask sooner!"
- "AI solves 75% of your struggle sessions"
- "Git conflicts take 12 minutes with AI vs 45 minutes without"
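The Shell Error → Claude Help → Commit correlation amounts to matching timestamped events from the three sources within a time window. A minimal sketch with hypothetical event types (the real correlator tracks more pattern types):

```rust
/// Timestamped events merged from shell history, Claude logs, and git log
/// (epoch seconds). Hypothetical types for illustration.
#[derive(Clone, Copy)]
enum Event {
    ShellError(u64),
    ClaudeChat(u64),
    Commit(u64),
}

/// Count full-cycle workflows: a shell error, then a Claude conversation,
/// then a commit, each within `window` seconds of the previous step.
fn full_cycles(events: &[Event], window: u64) -> usize {
    let mut cycles = 0;
    let mut last_error: Option<u64> = None;
    let mut last_chat: Option<u64> = None;
    for &e in events {
        match e {
            Event::ShellError(t) => last_error = Some(t),
            Event::ClaudeChat(t) => {
                if last_error.map_or(false, |t0| t >= t0 && t - t0 <= window) {
                    last_chat = Some(t);
                }
            }
            Event::Commit(t) => {
                if last_chat.map_or(false, |t0| t >= t0 && t - t0 <= window) {
                    cycles += 1;
                    last_error = None; // cycle consumed
                    last_chat = None;
                }
            }
        }
    }
    cycles
}

fn main() {
    let events = [
        Event::ShellError(100),
        Event::ClaudeChat(160), // asked Claude 60s after the failure
        Event::Commit(400),     // committed the fix 4 minutes later
        Event::Commit(5_000),   // unrelated commit, no preceding struggle
    ];
    println!("{}", full_cycles(&events, 1_800)); // 1 (30-minute window)
}
```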
What the comprehensive engine does:
- Integrates AI Impact + Shell Analytics + Workflow Correlation
- Generates actionable recommendations with priority
- Calculates overall productivity score (0-100 with A+ to F grades)
Scoring breakdown:
- 40% AI Effectiveness - How well you use AI (velocity + code quality)
- 30% Shell Efficiency - How productive you are in the terminal
- 30% Workflow Quality - How well your tools work together
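This weighting reproduces the sample score: 0.4 × 78 + 0.3 × 58 + 0.3 × 64 = 67.8, reported as 67/100. A sketch (truncating to an integer is an assumption consistent with the sample; the letter-grade boundaries are the tool's own):

```rust
/// Overall score = 40% AI effectiveness + 30% shell efficiency
/// + 30% workflow quality, matching the split above.
fn overall_score(ai: f64, shell: f64, workflow: f64) -> u32 {
    (0.40 * ai + 0.30 * shell + 0.30 * workflow) as u32 // truncates
}

fn main() {
    // Component scores from the sample report.
    let score = overall_score(78.0, 58.0, 64.0);
    println!("Overall Productivity Score: {}/100", score); // 67/100
}
```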
```bash
# Standard backup with comprehensive analytics
./target/release/vibedev backup \
  --output ~/backups \
  --include-git \
  --include-history \
  --analyze-impact

# This will:
# 1. Back up AI logs (Claude, Cursor, etc.)
# 2. Back up git commit history from all repos
# 3. Back up shell history (API keys sanitized)
# 4. Generate the comprehensive productivity analysis
# 5. Show human-readable recommendations
```
```bash
# Just the analytics (no backup)
./target/release/vibedev backup \
  --analyze-impact \
  --include-git \
  --include-history \
  --output /tmp
```

- ✅ Correlates Claude conversations with git commits
- ✅ Measures velocity improvement (commits/hour with vs without AI)
- ✅ Detects collaboration patterns
- ✅ Identifies copy-paste incidents (suspiciously fast commits)
- ✅ Tracks learning curve over time
- ✅ Parses shell history (.bash_history, .zsh_history)
- ✅ Detects failed commands using error patterns
- ✅ Identifies struggle sessions (multiple retries)
- ✅ Calculates time wasted on errors
- ✅ Productivity scoring
- ✅ Detects Shell → AI → Commit patterns
- ✅ Measures AI helpfulness (% of struggles resolved)
- ✅ Tracks time to resolution
- ✅ Identifies workflow anti-patterns
Recommendations are priority-ranked and actionable:
| Priority | Color | When Used |
|---|---|---|
| 🔴 CRITICAL | Red | Productivity score < 60, severe issues |
| 🟠 HIGH | Orange | Failure rate > 20%, copy-paste > 20 incidents |
| 🟡 MEDIUM | Yellow | Struggle sessions > 50, AI helpfulness < 50% |
| 🟢 LOW | Green | Positive reinforcement, velocity > 30% |
Each recommendation includes:
- **Issue**: what's wrong (with metrics)
- **Action**: specific steps to improve
- **Impact**: expected improvement (hours saved, % increase)
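The "HIGH - Shell Efficiency" recommendation from the sample output can be sketched as one such rule: the struct fields mirror the three parts above, while the function name and threshold handling are illustrative, not the tool's API:

```rust
#[derive(Debug, PartialEq)]
enum Priority {
    Low,
    Medium,
    High,
    Critical,
}

/// One recommendation, carrying the three fields listed above.
struct Recommendation {
    priority: Priority,
    issue: String,
    action: String,
    impact: String,
}

/// One plausible rule from the table: failure rate > 20% ranks HIGH.
fn shell_efficiency_rec(failure_rate: f64, hours_wasted: f64) -> Option<Recommendation> {
    if failure_rate <= 0.20 {
        return None; // below the HIGH threshold, no recommendation emitted
    }
    Some(Recommendation {
        priority: Priority::High,
        issue: format!("High command failure rate: {:.1}%", failure_rate * 100.0),
        action: "Use Ctrl+R for history, create aliases, ask AI earlier".into(),
        impact: format!("Save ~{:.1} hours/month", hours_wasted),
    })
}

fn main() {
    if let Some(rec) = shell_efficiency_rec(0.268, 42.3) {
        println!("{:?} - Issue: {}", rec.priority, rec.issue);
        println!("Action: {}", rec.action);
        println!("Impact: {}", rec.impact);
    }
}
```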
After running with `--analyze-impact`, you get:

```
~/backups/
  comprehensive-analytics-20260105-085500.json   # Full JSON report
  ai-logs-20260105-085500.tar.gz                 # AI tool logs
  git-logs-20260105-085500.tar.gz                # Git commit history
  shell-history-20260105-085500.tar.gz           # Sanitized shell history
```
✅ Module imports added to `src/main.rs`
✅ Comprehensive analytics integrated in backup command
✅ Human-readable output with emojis and formatting
✅ Actionable recommendations with priority ranking
✅ Productivity scoring with A-F grades
✅ Compilation successful (all errors fixed)
⏳ Testing - Ready for real-world use
Data sources:
- Claude conversations (`~/.claude/projects/*/history.jsonl`)
- Git commits (`git log --numstat` from all repos in the home directory)
- Shell history (`~/.bash_history`, `~/.zsh_history`, etc.)
Analysis pipeline:
1. `AIImpactAnalyzer` loads Claude + Git data
2. `ShellAnalyzer` parses shell history
3. `WorkflowAnalyzer` correlates across tools
4. `ComprehensiveAnalyticsEngine` generates recommendations
5. Output displayed with colored, formatted text
Privacy:
- Shell history is sanitized (20+ regex patterns for API keys, passwords, etc.)
- All data stays local (no network calls)
- JSON reports can be reviewed before sharing
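The sanitization step can be sketched as a pass that keeps variable names but drops values for common secret-bearing assignments. A std-only simplification: the real tool uses 20+ regex patterns, and the prefixes below are just examples:

```rust
/// Redact likely secrets from one shell-history line, keeping the
/// variable name but replacing the value.
fn sanitize(line: &str) -> String {
    // Illustrative subset of sensitive `KEY=value` prefixes.
    const SENSITIVE: [&str; 4] = ["API_KEY=", "TOKEN=", "PASSWORD=", "SECRET="];
    line.split_whitespace()
        .map(|word| {
            let upper = word.to_uppercase();
            if SENSITIVE.iter().any(|p| upper.contains(p)) {
                let key = word.split('=').next().unwrap_or(word);
                format!("{}=<REDACTED>", key)
            } else {
                word.to_string()
            }
        })
        .collect::<Vec<_>>()
        .join(" ")
}

fn main() {
    let line = "export OPENAI_API_KEY=sk-abc123 && ./deploy.sh";
    println!("{}", sanitize(line));
    // export OPENAI_API_KEY=<REDACTED> && ./deploy.sh
}
```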