Prompt Optimizer now supports configuring an unlimited number of custom models, allowing you to use multiple local models or self-hosted API services simultaneously.
- ✅ Support for an unlimited number of custom models
- ✅ Automatic discovery and registration via environment variables
- ✅ Friendly model name display
- ✅ Fully backward compatible with existing configurations
- ✅ Support for all deployment methods (Web, Desktop, Docker, MCP)
Use the following format to configure multiple custom models:

```bash
VITE_CUSTOM_API_KEY_<suffix>=your-api-key          # Required
VITE_CUSTOM_API_BASE_URL_<suffix>=your-base-url    # Required
VITE_CUSTOM_API_MODEL_<suffix>=your-model-name     # Required
VITE_CUSTOM_API_PARAMS_<suffix>=json-object-string # Optional extra request parameters
```

- Suffix: only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-); maximum 50 characters
- API_KEY: Required, used for API authentication
- BASE_URL: Required, API service base URL
- MODEL: Required, specific model name
- PARAMS: Optional, JSON object string injected into the final request body
```bash
# Qwen 2.5 Model
VITE_CUSTOM_API_KEY_qwen25=ollama-dummy-key
VITE_CUSTOM_API_BASE_URL_qwen25=http://localhost:11434/v1
VITE_CUSTOM_API_MODEL_qwen25=qwen2.5:7b

# Qwen 3 Model
VITE_CUSTOM_API_KEY_qwen3=ollama-dummy-key
VITE_CUSTOM_API_BASE_URL_qwen3=http://localhost:11434/v1
VITE_CUSTOM_API_MODEL_qwen3=qwen3:8b

# Claude API
VITE_CUSTOM_API_KEY_claude=sk-ant-your-claude-key
VITE_CUSTOM_API_BASE_URL_claude=https://api.anthropic.com/v1
VITE_CUSTOM_API_MODEL_claude=claude-3-sonnet-20240229
VITE_CUSTOM_API_PARAMS_claude={"temperature":0.3,"top_p":0.8}

# Custom OpenAI-compatible service
VITE_CUSTOM_API_KEY_custom=your-custom-api-key
VITE_CUSTOM_API_BASE_URL_custom=https://api.example.com/v1
VITE_CUSTOM_API_MODEL_custom=custom-model-name
VITE_CUSTOM_API_PARAMS_custom={"temperature":0.7,"top_p":0.9,"max_tokens":4096}

# Local model
VITE_CUSTOM_API_KEY_local=dummy-key
VITE_CUSTOM_API_BASE_URL_local=http://localhost:11434/v1
VITE_CUSTOM_API_MODEL_local=llama2:7b

# Cloud service
VITE_CUSTOM_API_KEY_cloud=real-api-key
VITE_CUSTOM_API_BASE_URL_cloud=https://api.service.com/v1
VITE_CUSTOM_API_MODEL_cloud=gpt-4-turbo

# Development environment
VITE_CUSTOM_API_KEY_dev=dev-api-key
VITE_CUSTOM_API_BASE_URL_dev=https://dev-api.example.com/v1
VITE_CUSTOM_API_MODEL_dev=dev-model
VITE_CUSTOM_API_PARAMS_dev={"temperature":0.4}
```

`VITE_CUSTOM_API_PARAMS_<suffix>` is useful when you need to:
- set standard OpenAI-compatible fields such as `temperature`, `top_p`, or `max_tokens`
- pass vendor-specific payload fields such as NVIDIA NIM's `chat_template_kwargs`
- define stable defaults in Docker runtime configuration instead of re-entering them in the UI
Example JSON payload:

```json
{
  "chat_template_kwargs": {
    "enable_thinking": true
  },
  "temperature": 0.6,
  "top_p": 0.95,
  "max_tokens": 16384
}
```

Notes:

- `PARAMS` must be a JSON object string
- `model`, `messages`, and `stream` are reserved and will be ignored automatically
- `timeout` is allowed and can be used to override request timeout behavior
- for complex Docker Compose values, wrap the entire JSON string in single quotes
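As an illustration of the merge behavior described in these notes, the following Python sketch parses a PARAMS string, drops the reserved keys, and merges the rest into a request body. The function and variable names are hypothetical, not the project's actual API; only the rules themselves (JSON object required, reserved keys ignored) come from the notes above.

```python
import json

# Keys the notes above describe as reserved: user-supplied values
# for these are ignored so they cannot override the real request.
RESERVED_KEYS = {"model", "messages", "stream"}

def merge_custom_params(body: dict, params_raw: str) -> dict:
    """Merge a PARAMS JSON string into a request body.

    Invalid JSON or non-object values are silently ignored, and
    reserved keys are stripped before merging.
    """
    try:
        params = json.loads(params_raw)
    except json.JSONDecodeError:
        return dict(body)  # invalid JSON: extra parameters are ignored
    if not isinstance(params, dict):
        return dict(body)  # PARAMS must be a JSON object
    extra = {k: v for k, v in params.items() if k not in RESERVED_KEYS}
    return {**body, **extra}

request = {"model": "qwen3:8b", "messages": [], "stream": True}
merged = merge_custom_params(request, '{"temperature":0.6,"model":"other"}')
# "temperature" is merged in, while the reserved "model" override is dropped
```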
Create a .env.local file in the project root:

```bash
# Basic models
VITE_OPENAI_API_KEY=your-openai-key
VITE_GEMINI_API_KEY=your-gemini-key

# Custom models
VITE_CUSTOM_API_KEY_ollama=dummy-key
VITE_CUSTOM_API_BASE_URL_ollama=http://localhost:11434/v1
VITE_CUSTOM_API_MODEL_ollama=qwen2.5:7b
VITE_CUSTOM_API_PARAMS_ollama={"temperature":0.7}
```

```bash
docker run -d -p 8081:80 \
  -e VITE_OPENAI_API_KEY=your-openai-key \
  -e VITE_CUSTOM_API_KEY_ollama=dummy-key \
  -e VITE_CUSTOM_API_BASE_URL_ollama=http://host.docker.internal:11434/v1 \
  -e VITE_CUSTOM_API_MODEL_ollama=qwen2.5:7b \
  -e 'VITE_CUSTOM_API_PARAMS_ollama={"temperature":0.7}' \
  -e VITE_CUSTOM_API_KEY_claude=your-claude-key \
  -e VITE_CUSTOM_API_BASE_URL_claude=https://api.anthropic.com/v1 \
  -e VITE_CUSTOM_API_MODEL_claude=claude-3-sonnet \
  -e 'VITE_CUSTOM_API_PARAMS_claude={"temperature":0.3,"top_p":0.8}' \
  --restart unless-stopped \
  --name prompt-optimizer \
  linshen/prompt-optimizer
```

Create a .env file:
```bash
VITE_OPENAI_API_KEY=your-openai-key

VITE_CUSTOM_API_KEY_ollama=dummy-key
VITE_CUSTOM_API_BASE_URL_ollama=http://host.docker.internal:11434/v1
VITE_CUSTOM_API_MODEL_ollama=qwen2.5:7b
VITE_CUSTOM_API_PARAMS_ollama={"temperature":0.7}

VITE_CUSTOM_API_KEY_qwen3=your-qwen3-key
VITE_CUSTOM_API_BASE_URL_qwen3=http://host.docker.internal:11434/v1
VITE_CUSTOM_API_MODEL_qwen3=qwen3:8b
VITE_CUSTOM_API_PARAMS_qwen3={"temperature":0.6,"top_p":0.95}
```

Run with the environment file:

```bash
docker run -d -p 8081:80 --env-file .env \
  --restart unless-stopped \
  --name prompt-optimizer \
  linshen/prompt-optimizer
```

Modify docker-compose.yml to add an env_file configuration:
```yaml
services:
  prompt-optimizer:
    image: linshen/prompt-optimizer:latest
    env_file:
      - .env  # Read environment variables from .env file
    ports:
      - "8081:80"
    restart: unless-stopped
```

Then configure variables in the .env file (same as Method 2).
The desktop version automatically reads environment variables from the system or from a .env.local file.
The MCP server supports all custom model configurations and automatically maps environment variables.
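The actual mapping logic is internal to the project; as a loose sketch only, the idea of normalizing variable names to one canonical, `VITE_`-prefixed form while keeping the suffix's original case (the behavior the v1.2.6 case-conversion fix relates to) could look like this. The pattern and function name are assumptions for illustration:

```python
import re

# Hypothetical: accept both prefixed and unprefixed custom-model variables.
PATTERN = re.compile(r"^(?:VITE_)?CUSTOM_API_(KEY|BASE_URL|MODEL|PARAMS)_(.+)$")

def map_env_name(name: str):
    """Map a custom-model variable to its canonical VITE_-prefixed form.

    Returns None for variables that are not custom-model configuration.
    The suffix is kept exactly as written, so its case is never altered.
    """
    m = PATTERN.match(name)
    if not m:
        return None
    field, suffix = m.groups()
    return f"VITE_CUSTOM_API_{field}_{suffix}"
```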
The system automatically converts suffix names to friendly display names:
| Suffix | Display Name |
|---|---|
| `qwen25` | Qwen25 |
| `claude_local` | Claude Local |
| `my_model_v2` | My Model V2 |
| `test123` | Test123 |
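The conversion rule implied by the table (split the suffix into parts and capitalize each one) can be sketched as follows. The function name is hypothetical, and treating hyphens like underscores is an assumption, since the table only shows underscore examples:

```python
def to_display_name(suffix: str) -> str:
    """Convert a suffix like 'my_model_v2' to a display name like 'My Model V2'."""
    # Assumption: hyphens separate words the same way underscores do.
    parts = suffix.replace("-", "_").split("_")
    # Capitalize only the first letter of each part, leaving the rest intact.
    return " ".join(p[:1].upper() + p[1:] for p in parts if p)
```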
Recommended:
- `ollama` - Local Ollama service
- `claude` - Claude API
- `qwen25` - Qwen 2.5 model
- `local_llama` - Local Llama model
- `dev_model` - Development environment model
Not Recommended:
- `model.v1` - Contains dots
- `my model` - Contains spaces
- `test@api` - Contains special characters
The system automatically validates configurations:
- Suffix Format Check: Only allows valid characters
- Required Fields Check: Ensures all three environment variables are present
- URL Format Check: Validates BASE_URL format
- Conflict Detection: Prevents conflicts with built-in model names
- PARAMS Shape Check: Accepts only JSON objects for extra request parameters
Invalid or partial configurations are handled gracefully:

- Incomplete Configuration: Automatically ignored, doesn't affect other models
- Invalid Suffix: Configuration skipped with warning log
- Duplicate Suffix: Later configuration overwrites earlier one
- Network Issues: Individual model failures don't affect system stability
- Invalid PARAMS JSON: Extra parameters are ignored, but the model remains available
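The suffix, required-field, and URL checks listed above can be sketched in a few lines of Python. This is a hedged illustration, not the project's validator; the field names (`key`, `base_url`, `model`) and function name are assumptions:

```python
import re

# Suffix rule from this document: letters, digits, underscores,
# hyphens only, at most 50 characters.
SUFFIX_RE = re.compile(r"^[A-Za-z0-9_-]{1,50}$")

def validate_custom_model(suffix: str, config: dict) -> list:
    """Return a list of validation errors; an empty list means the config is valid."""
    errors = []
    if not SUFFIX_RE.match(suffix):
        errors.append("invalid suffix")
    for field in ("key", "base_url", "model"):  # all three are required
        if not config.get(field):
            errors.append(f"missing {field}")
    base_url = config.get("base_url", "")
    if base_url and not base_url.startswith(("http://", "https://")):
        errors.append("invalid base_url")
    return errors
```

A model whose config returns a non-empty list would simply be skipped, mirroring the "individual errors don't affect other models" behavior described above.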
A: Check the following:
- All three environment variables configured correctly
- Suffix name follows naming rules
- No conflicts with built-in model names
- Application restarted after configuration changes
A: Verify:
- BASE_URL is accessible
- API_KEY is valid
- MODEL name exists in the service
- Network connectivity is normal
A: Check the browser console or application logs for lines like:

```
[scanCustomModelEnvVars] Found X valid custom models: [model1, model2, ...]
[generateDynamicModels] Generated model: custom_modelname (Display Name)
```

If you configured PARAMS, inspect the outgoing request payload in browser DevTools to verify that the extra fields are present.
- Configuration Caching: Configurations are cached at startup, restart required for changes
- Validation Optimization: Single-point validation reduces redundant checks by 66%
- Dynamic Loading: Models are loaded on-demand to improve startup performance
A: Theoretically unlimited, but configure only the models you actually need to avoid UI clutter.
A: Remove corresponding environment variables and restart the application.
A: Yes, custom models support all features including prompt optimization, comparison testing, etc.
A: Use different suffixes for different environments:

```bash
# Production
VITE_CUSTOM_API_KEY_prod=prod-key
VITE_CUSTOM_API_BASE_URL_prod=https://prod-api.com/v1
VITE_CUSTOM_API_MODEL_prod=prod-model

# Development
VITE_CUSTOM_API_KEY_dev=dev-key
VITE_CUSTOM_API_BASE_URL_dev=https://dev-api.com/v1
VITE_CUSTOM_API_MODEL_dev=dev-model
```

- Model key format: `custom_<suffix>`
- Configuration validation: Automatic checks for suffix format, API key, baseURL, etc.
- Error tolerance: Individual configuration errors don't affect other models
- Default values: Reasonable defaults ensure system stability
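Putting the pieces together, the discovery step (grouping `VITE_CUSTOM_API_*` variables by suffix, dropping incomplete configurations, and emitting `custom_<suffix>` model keys) can be sketched as below. This is an illustrative reconstruction from this document's description, not the project's actual implementation:

```python
import re

# One env var per field; the suffix follows the naming rules above.
ENV_VAR_RE = re.compile(
    r"^VITE_CUSTOM_API_(KEY|BASE_URL|MODEL|PARAMS)_([A-Za-z0-9_-]{1,50})$"
)

def scan_custom_model_env_vars(env: dict) -> dict:
    """Group custom-model env vars by suffix and keep complete configs only."""
    grouped = {}
    for name, value in env.items():
        m = ENV_VAR_RE.match(name)
        if m:
            field, suffix = m.groups()
            grouped.setdefault(suffix, {})[field] = value
    # KEY, BASE_URL, and MODEL are all required; PARAMS stays optional.
    return {
        f"custom_{suffix}": cfg
        for suffix, cfg in grouped.items()
        if {"KEY", "BASE_URL", "MODEL"} <= cfg.keys()
    }

env = {
    "VITE_CUSTOM_API_KEY_qwen25": "ollama-dummy-key",
    "VITE_CUSTOM_API_BASE_URL_qwen25": "http://localhost:11434/v1",
    "VITE_CUSTOM_API_MODEL_qwen25": "qwen2.5:7b",
    "VITE_CUSTOM_API_KEY_partial": "key-only",  # incomplete: ignored
}
models = scan_custom_model_env_vars(env)
```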
- v1.2.6: Code quality fixes and performance optimization
  - Fixed an MCP Server case conversion bug for more accurate environment variable mapping
  - Optimized configuration validation logic with a 66% performance improvement
  - Resolved ValidationResult interface conflicts, improving type safety
  - Implemented dynamic static model key retrieval with automatic synchronization
  - All fixes thoroughly tested to ensure cross-environment consistency
- v1.4.0: Added support for multiple custom models
  - Fully backward compatible with existing configurations
  - Supports all deployment methods
  - Added configuration validation and error handling