See also: docs/INSTALLATION.md for a concise quick-start guide.
CLIO runs completely on your local machine. You can:
- [OK] Use CLIO entirely offline with local AI models (llama.cpp, LM Studio, SAM)
- [OK] Optionally connect to cloud AI providers (GitHub Copilot, OpenAI, etc.)
- [OK] Switch between local and cloud providers at any time
You do NOT need the internet to use CLIO. But if you want cloud AI, CLIO makes it easy to connect.
```bash
# 1. Install
cd CLIO
sudo ./install.sh

# 2. Start CLIO
clio --new

# 3. Discover available AI providers
: /api providers

# 4. Pick a provider and follow its setup instructions
: /api providers openrouter   # (or minimax, openai, github_copilot, etc.)

# 5. Start using CLIO!
: explain how to use CLIO
```

That's it! See the sections below for detailed setup of each provider.
```bash
cd CLIO
sudo ./install.sh
```

This installs CLIO to /opt/clio with a symlink at /usr/local/bin/clio.
```bash
cd CLIO
./install.sh --user
```

This installs to ~/.local/clio with a symlink at ~/.local/bin/clio.
```bash
# Install to a custom directory
sudo ./install.sh /usr/local/clio

# Install without creating a symlink
sudo ./install.sh --no-symlink

# Create the symlink at a custom location
sudo ./install.sh --symlink /usr/bin/clio

# Show help
./install.sh --help
```

If the automatic installer doesn't work:
1. Check the Perl version (5.32+ required):

   ```bash
   perl -v
   ```

2. Create the config directory:

   ```bash
   mkdir -p ~/.clio
   ```

3. Set executable permissions:

   ```bash
   chmod +x clio
   ```

4. Test CLIO:

   ```bash
   ./clio --help
   ```
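For scripted setups, the Perl version check above can be made non-interactive. This one-liner is a small convenience sketch (not part of the installer) that relies on Perl's `use VERSION` pragma failing at compile time on older interpreters:

```bash
# "use v5.32" dies at compile time on any Perl older than 5.32
if perl -e 'use v5.32;' 2>/dev/null; then
  perl_ok="yes"
else
  perl_ok="no"
fi
echo "Perl 5.32+ available: $perl_ok"
```

This is handy in provisioning scripts, where a non-zero exit from the `perl -e` check can abort the install early.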
Run this command to see all available providers:

```bash
clio --new
: /api providers
```

| Provider | Setup | Notes |
|---|---|---|
| llama.cpp | Run the llama.cpp server, then `/api set provider llama.cpp` | Popular, many models available |
| LM Studio | Run the LM Studio app, then `/api set provider lmstudio` | GUI-based, easy model management |
| SAM | Run a SAM server locally, then `/api set provider sam` | Fast inference |
| Provider | Setup | Notes |
|---|---|---|
| GitHub Copilot | `/api login`, then authorize in browser | Recommended, integrated OAuth |
| OpenAI | `/api set provider openai`, then `/api set key <key>` | Popular, many models |
| Google Gemini | `/api set provider google`, then `/api set key <key>` | Large context models |
| DeepSeek | `/api set provider deepseek`, then `/api set key <key>` | Cost-effective |
| OpenRouter | `/api set provider openrouter`, then `/api set key <key>` | Access to many models |
1. Install and run the llama.cpp server:

   ```bash
   # Clone the llama.cpp repo
   git clone https://github.com/ggerganov/llama.cpp.git
   cd llama.cpp

   # Build it
   make

   # Download a model (e.g., Mistral 7B)
   # See: https://huggingface.co/models?search=gguf

   # Run the server (default port 8080)
   ./server -m your-model.gguf
   ```

2. Configure CLIO:

   ```bash
   clio --new
   : /api set provider llama.cpp
   : /api show
   ```

Done! CLIO now uses your local llama.cpp model.
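If CLIO can't reach the model, it helps to test the server directly. The llama.cpp HTTP server exposes a `/health` endpoint; this probe assumes the default port 8080 used above:

```bash
# Probe the llama.cpp server's health endpoint; -f makes curl fail on HTTP errors
if curl -sf http://localhost:8080/health >/dev/null; then
  llama_status="up"
else
  llama_status="down"
fi
echo "llama.cpp server is $llama_status"
```

If the probe reports "down", fix the server first (wrong port, model still loading, or the process exited) before debugging CLIO's configuration.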
1. Install and run LM Studio:
- Download from https://lmstudio.ai
- Launch the app
- Load a model (it will download automatically)
- Start the local server (default port 1234)
2. Configure CLIO:

   ```bash
   clio --new
   : /api set provider lmstudio
   : /api show
   ```

Done! CLIO now uses LM Studio.
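LM Studio's local server speaks an OpenAI-compatible API, so you can confirm it is reachable outside CLIO. This check assumes the default port 1234 mentioned above:

```bash
# Query LM Studio's OpenAI-compatible model list; fails cleanly if the server is down
if curl -sf http://localhost:1234/v1/models >/dev/null; then
  lmstudio_status="reachable"
else
  lmstudio_status="not reachable"
fi
echo "LM Studio server is $lmstudio_status"
```

If the server is not reachable, make sure the local server is actually started inside the LM Studio app (loading a model alone is not enough).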
1. Get a GitHub Copilot subscription:
- Visit https://github.com/copilot
- Subscribe ($10/month or $100/year individual, $19/month business)
2. Configure CLIO:

   ```bash
   clio --new
   : /api login
   # Browser opens -> Authorize -> Done!
   : /api show
   ```

Done! CLIO now uses GitHub Copilot.
1. Get an OpenAI API key:
- Visit https://platform.openai.com/account/api-keys
- Create new secret key
- Copy the key
2. Configure CLIO:

   ```bash
   clio --new
   : /api set provider openai
   : /api set key sk-...   # paste your key
   : /config save
   : /api show
   ```

Done! CLIO now uses OpenAI.
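To sanity-check the key outside CLIO, you can query OpenAI's `/v1/models` endpoint directly. A 200 status means the key works, 401 means it was rejected, and `000` indicates no network connection (the key below is a placeholder):

```bash
OPENAI_API_KEY="sk-..."   # placeholder; substitute your real key
# -w '%{http_code}' prints only the HTTP status code
status=$(curl -s -o /dev/null -w '%{http_code}' \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  https://api.openai.com/v1/models)
echo "HTTP status: $status"
```

This separates key problems from CLIO configuration problems: if curl returns 200 here but CLIO still fails, the issue is in the CLIO setup, not the key.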
After setup, verify everything works:
```bash
clio --new
: /api show

# Test a simple question
: what is 2+2?

# If you get an AI response, you're all set!
: /exit
```

CLIO makes it easy to switch providers:
```bash
clio --new
: /api set provider llama.cpp   # Switch to local
: /api set provider openai      # Switch to cloud
: /api show                     # Verify the current provider
```

Each provider keeps its own configuration (API keys, models, settings).
Cause: Perl can't find the library modules.
Solution:
```bash
# Option 1: Run from the CLIO directory
cd CLIO && ./clio --new

# Option 2: Set PERL5LIB
export PERL5LIB=/path/to/CLIO/lib:$PERL5LIB
clio --new
```

Cause: sudo is needed to write to system directories.
Solution:
```bash
sudo ./install.sh    # System-wide install
# OR
./install.sh --user  # User install (no sudo)
```

Cause: Provider not properly configured, or a network issue.
Solution:
```bash
clio --new
: /api show                      # Check current config
: /api providers github_copilot  # Get setup instructions
```

Cause: Server not running or wrong port.
Solution:
- Verify the llama.cpp server is running:

  ```bash
  curl http://localhost:8080/health
  ```

- Check the port number in CLIO with `/api show`
- Restart the llama.cpp server if needed
Cause: UTF-8 not enabled.
Solution:
```bash
export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8

# Add these to ~/.bashrc or ~/.zshrc to make them permanent
```

Works out of the box. Perl 5.32+ is pre-installed.
Install Perl if needed:

```bash
# Ubuntu/Debian
sudo apt-get install perl

# Fedora/RHEL
sudo dnf install perl

# Arch
sudo pacman -S perl
```

Use Windows Subsystem for Linux (WSL):
- Install WSL2
- Install Ubuntu 20.04 or later
- Follow Linux instructions above
Run CLIO in a container:

```bash
docker run -it --rm \
  -v "$(pwd)":/workspace \
  -v clio-auth:/root/.clio \
  -w /workspace \
  ghcr.io/syntheticautonomicmind/clio:latest \
  --new
```

After installation:
- Read the User Guide: see docs/USER_GUIDE.md for full feature documentation
- Try example commands: start with `/api show` and explore the available commands
- Customize appearance: try `/style list` and `/theme list` to personalize CLIO
- Learn about tools: ask CLIO "What tools do you have available?"

Issues during installation?

- Check the troubleshooting section above
- Run with debug mode: `clio --debug --new`
- Search GitHub Issues
- Create a new issue with:
  - Your OS and version
  - Perl version (`perl --version`)
  - Error messages
  - Output of `clio --debug`
Welcome to CLIO! Start creating with AI today.