docs: address 'why not just use llama.cpp?' feedback#38

Merged
unamedkr merged 1 commit into main from docs/why-not-llama-cpp
Apr 10, 2026

Conversation

@unamedkr
Collaborator

Reddit feedback from u/Eyelbee (5 upvotes): "It is very easy to ship embedded apps with llama.cpp. Don't really understand the point of yours."

Feature tables alone don't convince. Added scenario-based differentiation:

  • Side-by-side build commands (cc app.c -lm vs cmake+link)
  • Concrete use cases (WASM 192KB, microcontrollers, game engines, teaching)
  • Explicit llama.cpp recommendation for GPU speed + model coverage
  • Applied to both EN and KO READMEs

🤖 Generated with Claude Code

Reddit feedback (Eyelbee, 5 upvotes): feature tables don't convince
users who know llama.cpp. Added scenario-based comparison showing
where the single-header approach matters in practice:

- WASM: 192 KB vs GGML tensor graph too large
- Microcontrollers: #include is the only option (no filesystem, no linker)
- Game engines: one .h vs 250K LOC build integration
- Teaching: readable in an afternoon

Includes side-by-side build commands (cc app.c -lm vs cmake + link).
Explicitly recommends llama.cpp for GPU speed and model coverage.
Applied to both EN and KO READMEs.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@unamedkr unamedkr merged commit e0ae945 into main Apr 10, 2026
@unamedkr unamedkr deleted the docs/why-not-llama-cpp branch April 10, 2026 14:15