fix: handle /v1-suffixed BASE_URL and improve error reporting for OpenAI-compat providers (#181)
Open
octo-patch wants to merge 1 commit into The-Pocket:main
…nAI-compat providers (fixes The-Pocket#170)

Two bugs in `_call_llm_provider`:

1. URL double-`/v1`: when `XAI_BASE_URL` (or any provider's `BASE_URL`) already ends with `/v1` (e.g. `https://openrouter.ai/api/v1`), the code appended another `/v1/chat/completions`, producing an invalid URL. The fix checks for a trailing `/v1` and omits the extra prefix.

2. JSON-before-`raise_for_status`: `response.json()` was called before `raise_for_status()`, so an HTTP error with a non-JSON (e.g. empty) body caused a confusing `JSONDecodeError` instead of a clear HTTP error message. The fix parses JSON first (best-effort, for logging), then calls `raise_for_status()`, and surfaces the raw response text when JSON is absent.

Also corrects the README env var name from `XAI_URL` to `XAI_BASE_URL` and adds examples showing that both `https://api.x.ai` and `https://api.x.ai/v1` are accepted as `BASE_URL` values.

Co-Authored-By: Octopus <liyuan851277048@icloud.com>
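The second fix in the commit message (parse JSON best-effort, then `raise_for_status()`, then surface raw text) can be sketched roughly like this. The helper name `parse_provider_response` and the exact error wording are illustrative, not the actual diff; the env var named in the message follows the PR description:

```python
import requests


def parse_provider_response(response: requests.Response,
                            env_var: str = "XAI_BASE_URL") -> dict:
    """Illustrative sketch of the fixed ordering, not the shipped code."""
    # Best-effort JSON parse first, so a JSON error body can still be reported.
    try:
        data = response.json()
    except ValueError:  # requests' JSONDecodeError subclasses ValueError
        data = None

    # Raise the clear HTTP error before relying on the (possibly absent) JSON.
    try:
        response.raise_for_status()
    except requests.HTTPError as exc:
        detail = data if data is not None else response.text
        raise RuntimeError(
            f"Provider returned HTTP {response.status_code} "
            f"(check {env_var}): {detail!r}"
        ) from exc

    if data is None:
        # 2xx status but non-JSON body: surface the raw text instead of
        # a cryptic JSONDecodeError.
        raise RuntimeError(
            f"Provider returned non-JSON body (check {env_var}): {response.text!r}"
        )
    return data
```

With this ordering, a 404 with an empty body produces an HTTP-error message naming the env var, rather than a `JSONDecodeError` swallowed as a generic `RequestException`.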
Fixes #170
Problem
Two bugs in `_call_llm_provider` affect users who configure non-Gemini providers (xAI, Ollama, OpenRouter):

1. Double `/v1/` in the constructed URL

When `XAI_BASE_URL` (or any provider's `BASE_URL`) already ends with `/v1` — as OpenRouter and the xAI API both recommend — the code appended another `/v1/chat/completions`, producing an invalid URL like `https://openrouter.ai/api/v1/v1/chat/completions`. This caused the API to return an empty or non-JSON response body, which then triggered the second bug.
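The fix for this first bug amounts to checking for a trailing `/v1` before appending the path. A minimal sketch (the helper name `build_chat_url` is hypothetical; in the PR the logic lives inline in `_call_llm_provider`):

```python
def build_chat_url(base_url: str) -> str:
    """Append the OpenAI-compatible chat path without doubling /v1.

    Hypothetical helper illustrating the fix, not the actual diff.
    """
    base = base_url.rstrip("/")
    # If BASE_URL already ends with /v1 (as https://openrouter.ai/api/v1
    # does), append only the endpoint path; otherwise add the /v1 prefix.
    if base.endswith("/v1"):
        return f"{base}/chat/completions"
    return f"{base}/v1/chat/completions"
```

Under this rule both `build_chat_url("https://api.x.ai")` and `build_chat_url("https://api.x.ai/v1")` yield `https://api.x.ai/v1/chat/completions`.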
2. `response.json()` called before `raise_for_status()`

Because `response.json()` was called before `raise_for_status()`, an HTTP error whose body happened to be empty (or plain-text HTML) raised a cryptic `JSONDecodeError` (caught as `RequestException`) instead of a clear HTTP error message.

3. README documents wrong env var name
The README instructed users to set `XAI_URL`, but the code reads `XAI_BASE_URL`, so users following the README could never configure a non-Gemini provider.

Solution
- Detect when `BASE_URL` already ends with `/v1` and append only `/chat/completions` in that case, so both `https://api.x.ai` and `https://api.x.ai/v1` work as expected.
- Parse the JSON body best-effort, then call `raise_for_status()`. When the body is non-JSON, log the raw text and raise a descriptive exception that mentions the relevant env var.
- Rename `XAI_URL` → `XAI_BASE_URL` in the README, clarify `/v1` handling, and add an OpenRouter example.

Testing
Manually verified the URL-construction logic for all four input patterns:
| `BASE_URL` value | Constructed URL | Result |
| --- | --- | --- |
| `http://localhost:11434` | `http://localhost:11434/v1/chat/completions` | ✅ |
| `http://localhost:11434/v1` | `http://localhost:11434/v1/chat/completions` | ✅ |
| `https://api.x.ai` | `https://api.x.ai/v1/chat/completions` | ✅ |
| `https://openrouter.ai/api/v1` | `https://openrouter.ai/api/v1/chat/completions` | ✅ |
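The four patterns in the table can also be exercised with a small standalone check; the construction rule is inlined here as a sketch of the fix, not the shipped code:

```python
# Expected URL for each BASE_URL pattern from the testing table.
cases = {
    "http://localhost:11434": "http://localhost:11434/v1/chat/completions",
    "http://localhost:11434/v1": "http://localhost:11434/v1/chat/completions",
    "https://api.x.ai": "https://api.x.ai/v1/chat/completions",
    "https://openrouter.ai/api/v1": "https://openrouter.ai/api/v1/chat/completions",
}

for base, expected in cases.items():
    stripped = base.rstrip("/")
    # Append only the endpoint path when /v1 is already present.
    suffix = "/chat/completions" if stripped.endswith("/v1") else "/v1/chat/completions"
    assert stripped + suffix == expected, (base, stripped + suffix)
```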