
fix: handle /v1-suffixed BASE_URL and improve error reporting for OpenAI-compat providers#181

Open
octo-patch wants to merge 1 commit into The-Pocket:main from octo-patch:fix/issue-170-openai-provider-url-and-error-handling

Conversation

@octo-patch

Fixes #170

Problem

Two bugs in _call_llm_provider, plus a README documentation error, affect users who configure non-Gemini providers (xAI, Ollama, OpenRouter):

1. Double /v1/ in the constructed URL

When XAI_BASE_URL (or any provider's BASE_URL) already ends with /v1 — as OpenRouter and the xAI API both recommend — the code appended another /v1/chat/completions, producing an invalid URL like:

https://openrouter.ai/api/v1/v1/chat/completions  ❌

This caused the API to return an empty or non-JSON response body, which then triggered the second bug.
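The /v1-aware join can be sketched as follows. This is a minimal illustration, not the project's actual code; the helper name build_chat_url is hypothetical.

```python
def build_chat_url(base_url: str) -> str:
    """Append /chat/completions, adding /v1 only when BASE_URL lacks it.

    Hypothetical helper illustrating the fix: both https://api.x.ai and
    https://api.x.ai/v1 must yield the same final endpoint.
    """
    base = base_url.rstrip("/")          # tolerate a trailing slash
    if base.endswith("/v1"):
        return base + "/chat/completions"
    return base + "/v1/chat/completions"
```

With this check, https://openrouter.ai/api/v1 resolves to https://openrouter.ai/api/v1/chat/completions instead of the broken double-/v1/ URL.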

2. response.json() called before raise_for_status()

Because response.json() was called before raise_for_status(), an HTTP error whose body happened to be empty (or plain-text/HTML) raised a cryptic JSONDecodeError (caught as RequestException) instead of a clear HTTP error message:

Exception: An error occurred while making the request to XAI: Expecting value: line 1 column 1 (char 0)

3. README documents wrong env var name

The README instructed users to set XAI_URL, but the code reads XAI_BASE_URL, so users following the README could never configure a non-Gemini provider.

Solution

  • URL construction: detect when BASE_URL already ends with /v1 and append only /chat/completions in that case, so both https://api.x.ai and https://api.x.ai/v1 work as expected.
  • Error handling: parse JSON best-effort first (for logging), then call raise_for_status(). When the body is non-JSON, log the raw text and raise a descriptive exception that mentions the relevant env var.
  • README: rename XAI_URL → XAI_BASE_URL, clarify /v1 handling, and add an OpenRouter example.

Testing

Manually verified the URL-construction logic for all four input patterns:

| BASE_URL value | Resulting URL |
| --- | --- |
| http://localhost:11434 | http://localhost:11434/v1/chat/completions |
| http://localhost:11434/v1 | http://localhost:11434/v1/chat/completions |
| https://api.x.ai | https://api.x.ai/v1/chat/completions |
| https://openrouter.ai/api/v1 | https://openrouter.ai/api/v1/chat/completions |

fix: handle /v1-suffixed BASE_URL and improve error reporting for OpenAI-compat providers (fixes The-Pocket#170)

Two bugs in _call_llm_provider:

1. URL double-/v1: when XAI_BASE_URL (or any provider's BASE_URL) already
   ends with /v1 (e.g. https://openrouter.ai/api/v1), the code appended
   another /v1/chat/completions, producing an invalid URL. The fix checks
   for a trailing /v1 and omits the extra prefix.

2. JSON-before-raise_for_status: response.json() was called before
   raise_for_status(), so an HTTP error with a non-JSON (e.g. empty) body
   caused a confusing JSONDecodeError instead of a clear HTTP error message.
   The fix parses JSON first (best-effort, for logging), then calls
   raise_for_status(), and surfaces the raw response text when JSON is absent.

Also corrects the README env var name from XAI_URL to XAI_BASE_URL and
adds examples showing that both https://api.x.ai and https://api.x.ai/v1
are accepted as BASE_URL values.

Co-Authored-By: Octopus <liyuan851277048@icloud.com>
