- Recorder: crash hardening (headersSent guards, clientDisconnected tracking), preserve content alongside toolCalls, Cohere v2 native detection, tool-call ID extraction from 5 providers, reasoning/thinking extraction from 4 providers, multi-block text join (filter+join instead of find), thinking-only and empty-content response handling, Ollama /api/generate format detection, streaming collapse reasoning propagation.
- Bedrock/Converse: ContentWithToolCallsResponse support, ResponseOverrides wired into all non-streaming and streaming builders, Converse-wrapped stream event format, text_delta type field on text deltas, proper error envelope on Converse errors, webSearches warnings.
- Cohere v2: reasoning in all builders + streaming, webSearches warnings, response_format forwarding, assistant tool_calls preservation, full ResponseOverrides (finish_reason, usage, id) in non-streaming and streaming paths.
- Server: readBody 10MB size limit, control API error detail, one-shot error fixture race fix, normalizeCompatPath clarity, fixtures_loaded gauge updates on mutations.
- Competitive matrix: HTML pipeline fixed (computeChanges, applyChanges, updateProviderCounts, extractFeatures all aligned with actual DOM structure).
- CI workflows: --auto merge (respects branch protection), Slack secrets via env vars, script injection prevention in notify-pr.yml, portable grep.
- Router: RegExp g-flag lastIndex reset prevents alternating match/no-match.
- Jest/Vitest: save/restore pre-existing env vars in afterAll, loadFixtures console.warn on failure.
- Gemini: tool_call_id collision fix (shared callCounter), thought-part filtering.
- Ollama: ContentWithToolCallsResponse support, default stream:true, field validation.
- Removed the `preinstall: npx only-allow pnpm` script from the published package. That hook was intended to guide contributors cloning the monorepo toward pnpm, but it was bundled into the published tarball and fired during `npm install -g @copilotkit/aimock`, aborting the install with `sh: only-allow: not found` before npm could even resolve the binary. The guard belongs on the monorepo root, not inside the shipped package. Unblocks CopilotKit's docs CI (and any other consumer installing aimock via npm).
- `--proxy-only` mode now accepts URL-only `--fixtures` sources without requiring a local filesystem path. Previously the first `--fixtures` value was always checked as a record-destination base path, which rejected all-URL invocations even though proxy-only mode doesn't write recordings to disk. The check now fires only in `--record` mode, where a writable destination is actually required. The same fix applies to the parallel `--agui-proxy-only` CLI path. Unblocks the showcase-aimock Railway service, which runs aimock in proxy-only mode with remote GitHub raw fixture URLs and no local fallback.
- `--fixtures` now accepts `https://` and `http://` URLs to JSON fixture files in addition to filesystem paths. Fetches at boot, parses, and registers the remote fixture as if it were loaded from disk. An on-disk cache at `~/.cache/aimock/fixtures/<sha256-of-url>/` (honoring `$XDG_CACHE_HOME`) provides resilience against transient upstream failures: with `--validate-on-load`, a fetch failure with a valid cached copy logs a warning and continues; without a cache, the process exits non-zero. The HTTP fetch has a hard-coded 10s timeout and a 50 MB body size cap (enforced incrementally, so a lying `Content-Length` cannot bypass it). Only `https://` and `http://` schemes are accepted; `file://`, `ftp://`, etc. are rejected with a clear error. The flag is now repeatable; multiple sources are loaded and concatenated. Tarball (`.tar.gz`) and zip URL support is intentionally deferred to a future release.
- Private-address denylist for remote `--fixtures` URLs: fetches to loopback (`127.0.0.0/8`, `::1`), link-local (`169.254.0.0/16`, `fe80::/10`), RFC 1918 (`10/8`, `172.16/12`, `192.168/16`), CGNAT (`100.64/10`), cloud metadata (`169.254.169.254`), ULA (`fc00::/7`), multicast, and other reserved ranges are rejected with a clear fail-loud error. Hostnames are resolved and every returned address is checked. Set `AIMOCK_ALLOW_PRIVATE_URLS=1` to opt out (required for local dev / tests that target `127.0.0.1`).
- HTTP redirects are rejected (fail-loud) for remote `--fixtures` URLs to prevent scheme bypass (a 3xx `Location:` pointing at `file://` or `javascript:` would otherwise sidestep the scheme gate and SSRF denylist). Configure the upstream to serve the final URL directly; GitHub raw content URLs already do this.
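The IPv4 portion of the denylist above can be sketched as a plain CIDR-prefix check. This is a simplified illustration, not the actual aimock implementation; real code must also resolve hostnames first and cover the IPv6 ranges (`::1`, `fe80::/10`, `fc00::/7`):

```typescript
// Sketch: reject fetches to private/reserved IPv4 ranges (simplified).
// Converts a dotted-quad to a 32-bit integer, then tests CIDR prefixes.
function ipv4ToInt(ip: string): number {
  const parts = ip.split(".").map(Number);
  if (parts.length !== 4 || parts.some((p) => Number.isNaN(p) || p < 0 || p > 255)) {
    throw new Error(`invalid IPv4 address: ${ip}`);
  }
  return ((parts[0] << 24) | (parts[1] << 16) | (parts[2] << 8) | parts[3]) >>> 0;
}

const DENIED: Array<[string, number]> = [
  ["127.0.0.0", 8],    // loopback
  ["10.0.0.0", 8],     // RFC 1918
  ["172.16.0.0", 12],  // RFC 1918
  ["192.168.0.0", 16], // RFC 1918
  ["169.254.0.0", 16], // link-local (covers cloud metadata 169.254.169.254)
  ["100.64.0.0", 10],  // CGNAT
  ["224.0.0.0", 4],    // multicast
];

function isDeniedIPv4(ip: string): boolean {
  const addr = ipv4ToInt(ip);
  return DENIED.some(([base, bits]) => {
    const mask = (~0 << (32 - bits)) >>> 0;
    return ((addr & mask) >>> 0) === ((ipv4ToInt(base) & mask) >>> 0);
  });
}
```

The point of the incremental checks: every address a hostname resolves to must pass, not just the first, or a DNS answer mixing public and private records slips through.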
- README DevX: Quick Start sets `OPENAI_BASE_URL` + `OPENAI_API_KEY` before SDK construction with an inline ordering warning; the Docker one-liner uses an absolute `$(pwd)/fixtures:/fixtures` path; the `LLMock` class-name asymmetry after the v1.7.0 package rename is explained inline; the Multimedia and Protocol-Mock feature bullets now link to each individual feature page.
- Fixtures page: Vertex AI added to the Provider Support Matrix; Ollama Reasoning marked as supported (was incorrectly "—" since v1.8.0); `finishReason` Responses-API mapping fully documented; `toolName` scope clarified; the shadowing-warning format matches actual validator output; Azure-inherits-OpenAI override support footnoted.
- Record & Replay page: Docker examples use absolute `$(pwd)` paths; the Rust `async-openai` example corrected to the `Client::with_config(OpenAIConfig::new().with_api_base(...))` form; `enableRecording({ proxyOnly: true })` disambiguated; pseudocode annotated as simplified; the `enableRecording` example includes `mock.stop()` cleanup; a stale 2025 timestamp replaced with a generic placeholder.
- Sidebar: TOC id assignment now runs unconditionally (previously skipped on pages with fewer than 4 headings, silently breaking cross-page anchor links to short pages).
- Historical CHANGELOG: v1.14.1 Railway-specific language scrubbed; the v1.14.2 `--journal-max=-1` rejection and `createServer()` default flip annotated with BREAKING / BEHAVIOR CHANGE markers; all 15 historical version entries standardized on Keep a Changelog categories (Added/Changed/Fixed/Removed) instead of mixed Changesets style.
- package.json: `engines.node` raised to `>=24.0.0` to match the OIDC publish requirement; `preinstall: only-allow pnpm` guard added; deprecated `@google/generative-ai` swapped for `@google/genai`; `files` includes `CHANGELOG.md`; `repository.url` canonicalized; `typesVersions` gains `.d.cts` entries; optional `peerDependencies` for `vitest`/`jest` added; `prepare: husky || true` tightened to `husky`; the `release` script gains a `pnpm test && pnpm lint` pre-check.
- Stray `package-lock.json` — the repo is pnpm-only, now enforced via `preinstall`.
- Recorder no longer buffers SSE (`text/event-stream`) upstream responses before relaying them to the client. `proxyAndRecord` accumulated all upstream chunks and replayed them via a single `res.end()`, collapsing multi-frame streams into one client-visible write and breaking progressive rendering for downstream consumers (notably showcase `--proxy-only` deployments). SSE responses now stream chunk-by-chunk to the client while still being tee'd into the recording buffer; non-SSE behavior is unchanged.
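The chunk-by-chunk relay described above boils down to a tee: write each upstream chunk to the client the moment it arrives while also appending it to the recording buffer. A minimal sketch with hypothetical shapes (the real `proxyAndRecord` also handles headers, errors, and client disconnects):

```typescript
// Sketch: tee an SSE upstream stream to the client while recording it.
// `write` stands in for res.write; chunks reach the client as they arrive
// instead of being buffered and flushed in a single res.end().
function teeStream(
  upstream: Iterable<string>,
  write: (chunk: string) => void,
): string[] {
  const recorded: string[] = [];
  for (const chunk of upstream) {
    write(chunk);          // relay immediately, preserving progressive rendering
    recorded.push(chunk);  // tee into the recording buffer
  }
  return recorded;
}
```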
- Multi-turn conversations documentation page covering the tool-round idiom, matching semantics across turns, and how to author/record multi-turn fixtures.
- Matching Semantics section on the Fixtures page documenting last-message-only matching, first-wins file order, substring-vs-exact matching, and shadowing warnings.
- Recording guidance for multi-turn conversations on the Record & Replay page.
- CLI Flags table on the Record & Replay page expanded to cover `-f`/`--fixtures`, `--journal-max`, `--fixture-counts-max`, `--agui-*`, `--chaos-*`, `--watch`, `-p`, `-h`, `--log-level`, and `--validate-on-load`.
- README note clarifying that the `llmock` CLI bin is a legacy alias pointing at a narrower flag-driven CLI without `--config` or `convert` support.
- Docker examples in the Record & Replay guide no longer prefix `npx @copilotkit/aimock` before the image ENTRYPOINT (the four snippets would have failed, with strict parseArgs rejecting positional args).
- Auth Header Forwarding documentation now reflects the strip-list behavior in place since v1.6.1 (all headers forwarded except hop-by-hop and client-set).
- The `requestTransform` example fixture key no longer carries an undocumented load-bearing trailing space.
- Completed the Claude model-id migration (v1.14.3) for the remaining test fixtures that still referenced `claude-sonnet-4-20250514`.
- README LLM Providers count and migration-page comparisons restored to the "11+" form with accurate enumeration (OpenAI Chat / Responses / Realtime, Claude, Gemini REST / Live WS, Azure, Bedrock, Vertex AI, Ollama, Cohere). The earlier "8" collapse was incorrect: competitors count endpoint/protocol variants separately, and "8" undersold aimock's actual coverage. The Provider Support Matrix on the Fixtures page gains a dedicated Vertex AI column.
- Corrected `toolCallId` matching semantics on the Fixtures page to describe the "last `role: "tool"` message" rule from `router.ts` (not "last message being a tool").
- Added `-h 0.0.0.0` to every Docker example in the README and Record & Replay page so the default `127.0.0.1` host bind doesn't silently break `-p` port mapping when user args override the image CMD.
- Extended the Docker host-bind fix across all migration guides, tutorials, and the Docker/aimock-cli/metrics/chaos-testing pages — every Docker example that passes user args now includes `-h 0.0.0.0` so `docker -p` port mapping works.
- Updated the `--journal-max` default wording on the Record & Replay page to reflect post-v1.14.2 behavior (finite `1000` cap for both `serve` and `createServer()`; only direct `new Journal()` instantiation remains unbounded).
- Stripped redundant `npx @copilotkit/aimock` / `aimock` prefixes from Docker examples in migration pages (mokksy, vidaimock, mock-llm, piyook, openai-responses); all were silently broken under strict parseArgs because the prefix became a positional arg to the image's `node dist/cli.js` entrypoint.
- Replaced `--config` Docker examples across `docs/aimock-cli`, `docs/metrics`, `docs/chaos-testing`, and migration guides with flag-driven Docker equivalents or explicit npx/local-install notes (the published image's ENTRYPOINT runs the `llmock` CLI, which does not support `--config`).
- Synchronized LLM provider counts across all migration pages to the "11+" form with accurate variant-level enumeration, restoring competitor-equivalent counting (e.g. VidaiMock "11+", Mokksy "11 vs 5").
- Corrected the `sequenceIndex` gotcha on `/multi-turn` — `validateFixtures` does not factor `sequenceIndex`, `toolCallId`, `model`, or `predicate` into the duplicate-`userMessage` warning; the warning is advisory when a runtime differentiator is present.
- Fixed the Programmatic Recording example on `/record-replay` to stop contradicting itself by pairing `proxyOnly: true` with `fixturePath`; it now shows record mode and proxy-only mode as two distinct examples.
- Reconciled provider-count phrasing across migration pages — the mock-llm lead paragraph no longer says "9 more providers", and enumerated lists no longer trail the count with "and OpenAI-compatible providers" / "and more".
- Aligned the `validateFixtures` shadowing wording between the Fixtures and Multi-Turn pages (both now correctly describe the warning as advisory when a runtime differentiator is present).
- Replaced the broken `class="cmt"` CSS class with the correct `class="cm"` across `docs/cohere`, `docs/test-plugins`, `docs/vertex-ai`, `docs/ollama`, `docs/record-replay`, and `docs/chaos-testing` code blocks (21 occurrences) — `.cmt` is not defined in `docs/style.css`, so these code-block comments were rendering as default text instead of the dimmed comment color.
- Microsoft Agent Framework (MAF) integration guide with Python and .NET examples.
- Generic `.code-tabs` language switcher with cross-section sync and localStorage persistence.
- Updated Claude model references from `claude-sonnet-4-20250514` (retiring 2026-06-15) to `claude-sonnet-4-6`.
BREAKING — CLI flag parsing: `--journal-max=-1` (and `--fixture-counts-max=-1`) no longer silently maps to "unbounded"; it is now rejected with a clear error. Migration: drop the flag entirely, or pass `--journal-max=0` / `--fixture-counts-max=0` if you intended unbounded retention.

⚠ BEHAVIOR CHANGE (should have been MINOR per SemVer) — `createServer()` programmatic defaults for `journalMaxEntries` and `fixtureCountsMaxTestIds` flipped from unbounded to finite caps (1000 / 500). Auto-update consumers on long-running embedders: review your retention assumptions and opt in to unbounded explicitly by passing `0` if that was the prior relied-upon behavior. Released as a PATCH; in retrospect this warranted a MINOR bump.

- `Journal.getFixtureMatchCount()` is now read-only: calling it with an unknown testId no longer inserts an empty map or triggers FIFO eviction of a live testId. Reads never mutate cache state.
- CLI rejects negative values for `--journal-max` and `--fixture-counts-max` with a clear error (previously silently treated as unbounded). Breaking for anyone passing `-1` expecting unbounded — see the note above.
- `createServer()` programmatic default: `journalMaxEntries` and `fixtureCountsMaxTestIds` now default to finite caps (1000 / 500) instead of unbounded. Long-running embedders that relied on unbounded retention must now opt in explicitly by passing `0`. Back-compat with test harnesses using `new Journal()` directly is preserved (they still default to unbounded). Note: this is a behavior change that in retrospect warranted a MINOR bump rather than PATCH.
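The retention semantics above (finite cap by default, `0` means unbounded, oldest entries evicted first) can be illustrated with a minimal FIFO buffer. This is a sketch of the documented behavior, not the actual `Journal` class:

```typescript
// Sketch: bounded FIFO retention where maxEntries = 0 means "unbounded".
class BoundedJournal<T> {
  entries: T[] = [];
  constructor(private maxEntries = 1000) {} // finite default, like createServer()
  record(entry: T): void {
    this.entries.push(entry);
    // Evict the oldest entry once over the cap; 0 disables eviction entirely.
    if (this.maxEntries > 0 && this.entries.length > this.maxEntries) {
      this.entries.shift();
    }
  }
}
```

Long-running embedders that relied on unlimited retention would construct the equivalent of `new BoundedJournal(0)` explicitly.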
- New `--fixture-counts-max <n>` CLI flag (default 500) to cap the fixture-match-counts map by testId.
- Cap the in-memory journal (and fixture-match-counts map) to prevent heap OOM under sustained load. `Journal.entries` was unbounded, causing heap growth of ~3.8 MB/sec to 4 GB → OOM in ~18 minutes on long-running production deployments. The default cap for the CLI (`serve`) is now 1000 entries; programmatic `createServer()` remains unbounded by default (back-compat). See the `--journal-max` flag.
- Response template merging — override `id`, `created`, `model`, `usage`, `finishReason`, `role`, `systemFingerprint` on fixture responses across all 4 provider formats (OpenAI, Claude, Gemini, Responses API) (#111)
- JSON auto-stringify — fixture `arguments` and `content` fields accept objects that are auto-stringified by the loader, eliminating escaped-JSON pain (#111)
- Migration guide from openai-responses-python (#111)
- All fixture examples and docs converted to object syntax (#111)
- `ResponseOverrides` field validation in `validateFixtures` — catches invalid types for `id`, `created`, `model`, `usage`, `finishReason`, `role`, `systemFingerprint`
- `onTranscription` docs now show the correct 1-argument signature
- `validateFixtures` now recognizes ContentWithToolCalls and multimedia response types
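The auto-stringify behavior can be pictured as a small normalization pass in the fixture loader: object-valued `arguments`/`content` become JSON strings, so authors never hand-escape JSON. The field names come from the changelog; the function itself is an illustrative sketch, not aimock's loader:

```typescript
// Sketch: normalize object-valued fixture fields to JSON strings.
function autoStringify(fixture: Record<string, unknown>): Record<string, unknown> {
  const out = { ...fixture };
  for (const key of ["arguments", "content"]) {
    const value = out[key];
    // Leave strings alone; stringify plain objects/arrays so authors can
    // write `arguments: { q: "x" }` instead of "{\"q\":\"x\"}".
    if (value !== null && typeof value === "object") {
      out[key] = JSON.stringify(value);
    }
  }
  return out;
}
```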
- GitHub Action for one-line CI setup — `uses: CopilotKit/aimock@v1` with fixtures, config, port, args, and health check (#102)
- Fixture converters wired into the CLI — `npx @copilotkit/aimock convert vidaimock` and `npx @copilotkit/aimock convert mockllm` as first-class subcommands (#102)
- 30 npm keywords for search discoverability (#102)
- Fixture gallery with 11 examples covering all mock types, plus a browsable docs page at /examples (#102)
- Vitest and Jest plugins for zero-config testing — `import { useAimock } from "@copilotkit/aimock/vitest"` (#102)
- Strip video URLs from README for npm publishing (#102)
- Multimedia endpoint support: image generation (OpenAI DALL-E + Gemini Imagen), text-to-speech, audio transcription, and video generation with async polling (#101)
- `match.endpoint` field for fixture isolation — prevents cross-matching between chat, image, speech, transcription, video, and embedding fixtures (#101)
- Bidirectional endpoint filtering — generic fixtures only match compatible endpoint types (#101)
- Convenience methods: `onImage`, `onSpeech`, `onTranscription`, `onVideo` (#101)
- Record & replay for all multimedia endpoints — proxy to real APIs, save fixtures with correct format/type detection (#101)
- `_endpointType` explicit field on `ChatCompletionRequest` for type safety (#101)
- Comparison matrix and drift detection rules updated for multimedia (#101)
- 54 new tests (32 integration, 11 record/replay, 12 type/routing)
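The bidirectional filtering above amounts to: a fixture with `match.endpoint` matches only that endpoint, while a generic fixture (no `match.endpoint`) matches only endpoints compatible with its response type. A simplified compatibility check (the function and parameter names are hypothetical; only `match.endpoint` and the endpoint list come from the changelog):

```typescript
// Sketch: bidirectional endpoint filtering (simplified).
type Endpoint = "chat" | "image" | "speech" | "transcription" | "video" | "embedding";

function endpointMatches(
  requestEndpoint: Endpoint,
  fixtureEndpoint: Endpoint | undefined, // match.endpoint, if the author set one
  compatible: Endpoint[],                // endpoints the fixture's response type can serve
): boolean {
  if (fixtureEndpoint !== undefined) return fixtureEndpoint === requestEndpoint;
  return compatible.includes(requestEndpoint); // generic fixture: type-compatible only
}
```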
- `AGUIMock` — mock the AG-UI (Agent-to-UI) protocol for CopilotKit frontend testing. All 33 event types, 11 convenience builders, fluent registration API, SSE streaming with disconnect handling (#100)
- AG-UI record & replay with tee streaming — proxy to real AG-UI agents, record event streams as fixtures, replay on subsequent requests. Includes a `--proxy-only` mode for demos (#100)
- AG-UI schema drift detection — compares aimock event types against canonical `@ag-ui/core` Zod schemas to catch protocol changes (#100)
- `--agui-record`, `--agui-upstream`, `--agui-proxy-only` CLI flags (#100)
- Section bar removed from docs pages (cleanup)
- `--proxy-only` flag — proxy unmatched requests to upstream providers without saving fixtures to disk or caching in memory. Every unmatched request always hits the real provider, preventing stale recorded responses in demo/live environments (#99)
- Per-test sequence isolation via the `X-Test-Id` header — each test gets its own fixture match counters, wired through all 12 HTTP handlers and 3 WebSocket handlers. No more test pollution from shared sequential state (#93)
- Combined `content + toolCalls` in fixture responses — new `ContentWithToolCallsResponse` type and type guard, supported across OpenAI Chat, OpenAI Responses, Anthropic Messages, and Gemini, with stream collapse support (#92)
- OpenRouter `reasoning_content` support in chat completions (#88)
- Demo video in README (#91)
- CI: Slack notifications for drift tests, competitive matrix updates, and new PRs (#86)
- Docs: reasoning and webSearches rows in the Response Types table
- `web_search_call` items now use `action.query`, matching the real OpenAI API format (#89)
- Homepage URL cleaned up (removed `/index.html` suffix) (#90)
- Record & Replay section title now centered and terminal panel top-aligned (#87)
- CI: use `pull_request_target` for fork PR Slack alerts
- `requestTransform` option for deterministic matching and recording — normalizes requests before matching (strips timestamps, UUIDs, session IDs) and switches to exact equality when set. Applied across all 15 provider handlers and the recorder. (#79, based on design by @iskhakovt in #63)
- Reasoning/thinking support for OpenAI Chat Completions — a `reasoning` field in fixtures generates `reasoning_content` in responses and streaming `reasoning` deltas (#62 by @erezcor)
- Reasoning support for Gemini (`thoughtParts`), AWS Bedrock InvokeModel + Converse (`thinking` blocks), and Ollama (`think` tags) (#81)
- Web search result events for OpenAI Responses API (#62)
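A typical `requestTransform` strips volatile fields so recorded and replayed requests compare byte-for-byte under exact equality. A sketch of the kind of normalizer a user might pass; the regexes and placeholder tokens are illustrative, not aimock defaults:

```typescript
// Sketch: user-supplied request normalizer for deterministic matching.
const UUID_RE = /[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}/gi;
const ISO_TS_RE = /\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(?:\.\d+)?Z/g;

function normalizeRequest(body: string): string {
  return body
    .replace(UUID_RE, "<uuid>")  // session IDs, request IDs
    .replace(ISO_TS_RE, "<ts>"); // embedded timestamps
}
```

Because the same normalizer runs at record time and at match time, both sides see the placeholder form, and exact-equality comparison becomes stable across runs.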
- Open Graph image and meta tags for social sharing
- CI: `npm` environment added to the release workflow for deployment tracking; `workflow_dispatch` added to the Python test workflow
- Updated all GitHub repo URLs from CopilotKit/llmock to CopilotKit/aimock
- Reframed drift detection docs for users ("your mocks never go stale") with restored drift report output
- Migration page examples: replaced fragile `time.sleep` with health-check loops against `/__aimock/health`; fixed a Python npx example `stderr=subprocess.PIPE` deadlock (#80)
- Stream collapse now handles reasoning events correctly
- MCPMock — Model Context Protocol mock with tools, resources, prompts, session management
- A2AMock — Agent-to-Agent protocol mock with SSE streaming
- VectorMock — Pinecone, Qdrant, ChromaDB compatible vector DB mock
- Search (Tavily), rerank (Cohere), and moderation (OpenAI) service mocks
- `/__aimock/*` control API for external fixture management
- `aimock` CLI with JSON config file support
- Mount composition for running multiple protocol handlers on one server
- JSON-RPC 2.0 transport with batch and notifications
- `aimock-pytest` pip package for native Python testing
- Converter scripts: `convert-vidaimock` (Tera → JSON) and `convert-mockllm` (YAML → JSON)
- Drift automation skill updates — `fix-drift.ts` now updates `skills/write-fixtures/SKILL.md` alongside source fixes
- Docker: dual-push `ghcr.io/copilotkit/aimock` + `ghcr.io/copilotkit/llmock` (compat)
- Docs: sidebar.js, cli-tabs.js, section bar, competitive matrix with 25 rows
- Renamed package from `@copilotkit/llmock` to `@copilotkit/aimock`
- Renamed Prometheus metrics to `aimock_*` with new MCP/A2A/Vector counters
- Rebranded: logger `[aimock]`, chaos headers `x-aimock-chaos-*`, CLI startup message
- Helm chart renamed to `charts/aimock/`
- Homepage redesigned (Treatment 3: Progressive Disclosure)
- Record proxy now preserves upstream URL path prefixes — base URLs like `https://gateway.company.com/llm` now correctly resolve to `gateway.company.com/llm/v1/chat/completions` instead of losing the `/llm` prefix (PR #57)
- Record proxy now forwards all request headers to upstream, not just `Content-Type` and auth headers. Hop-by-hop headers (`connection`, `keep-alive`, `transfer-encoding`, etc.) and client-set headers (`host`, `content-length`, `cookie`, `accept-encoding`) are still stripped (PR #58)
- Recorder now decodes base64-encoded embeddings when `encoding_format: "base64"` is set in the request. Python's openai SDK uses this by default. Previously these were saved as `proxy_error` fixtures (PR #64)
- Guarded base64 embedding decode against corrupted data (non-float32-aligned buffers fall through gracefully instead of crashing)
- `--summary` flag on the competitive matrix update script for markdown-formatted change summaries
- Provider-specific endpoints: dedicated routes for Bedrock (`/model/{modelId}/invoke`), Ollama (`/api/chat`, `/api/generate`), Cohere (`/v2/chat`), and Azure OpenAI deployment-based routing (`/openai/deployments/{id}/chat/completions`)
- Chaos injection: `ChaosConfig` type with `drop`, `malformed`, and `disconnect` actions; supports per-fixture chaos via a `chaos` config on each fixture and server-wide chaos via `--chaos-drop`, `--chaos-malformed`, and `--chaos-disconnect` CLI flags
- Metrics: `GET /metrics` endpoint exposing Prometheus text format with request counters and latency histograms per provider and route
- Record-and-replay: `--record` flag and `proxyAndRecord` helper that proxies requests to real LLM APIs, collapses streaming responses, and writes fixture JSON to disk for future playback
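Collapsing a recorded streaming response means folding the per-chunk deltas back into one message before writing the fixture. A minimal sketch over OpenAI-style chat chunks (simplified shape; the real `proxyAndRecord` also handles tool calls, reasoning, and usage):

```typescript
// Sketch: collapse streamed delta chunks into a single fixture content string.
interface ChatChunk {
  choices: Array<{ delta: { content?: string } }>;
}

function collapseStream(chunks: ChatChunk[]): string {
  let content = "";
  for (const chunk of chunks) {
    content += chunk.choices[0]?.delta.content ?? ""; // skip role-only/final chunks
  }
  return content;
}
```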
- Documentation URLs now use the correct domain (llmock.copilotkit.dev)
- Embeddings API: `POST /v1/embeddings` endpoint, `onEmbedding()` convenience method, `inputText` match field, `EmbeddingResponse` type, deterministic fallback embeddings from input hash, Azure embedding routing
- Structured output / JSON mode: `responseFormat` match field, `onJsonOutput()` convenience method
- Sequential responses: `sequenceIndex` match field for stateful multi-turn fixtures, per-fixture-group match counting, `resetMatchCounts()` method
- Streaming physics: `StreamingProfile` type with `ttft`, `tps`, `jitter` fields for realistic timing simulation
- AWS Bedrock: `POST /model/{modelId}/invoke` endpoint, Anthropic Messages format translation
- Azure OpenAI: provider routing for `/openai/deployments/{id}/chat/completions` and `/openai/deployments/{id}/embeddings`
- Health & models endpoints: `GET /health`, `GET /ready`, `GET /v1/models` (auto-populated from fixtures)
- Docker & Helm: Dockerfile, Helm chart for Kubernetes deployment
- Documentation website: full docs site at llmock.copilotkit.dev with feature pages and competitive comparison matrix
- Automated drift remediation: `scripts/drift-report-collector.ts` and `scripts/fix-drift.ts` for CI-driven drift fixes
- CI automation: competitive matrix update workflow, drift fix workflow
- `FixtureOpts` and `EmbeddingFixtureOpts` type aliases exported for external consumers
- `worktrees/` added to eslint ignores
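The `StreamingProfile` timing model above can be read as: first token after `ttft` ms, then one token every `1000/tps` ms, each delay perturbed by up to ±`jitter` ms. The field semantics are inferred from the names, so treat this as an illustrative schedule, not the actual implementation:

```typescript
// Sketch: derive per-token delays (ms) from a StreamingProfile-like shape.
interface Profile { ttft: number; tps: number; jitter: number } // assumed semantics

function tokenDelays(
  profile: Profile,
  tokenCount: number,
  rand: () => number = Math.random, // injectable for deterministic tests
): number[] {
  const interval = 1000 / profile.tps;
  const delays: number[] = [];
  for (let i = 0; i < tokenCount; i++) {
    const base = i === 0 ? profile.ttft : interval;
    const wobble = (rand() * 2 - 1) * profile.jitter; // uniform in [-jitter, +jitter]
    delays.push(Math.max(0, base + wobble));          // never negative
  }
  return delays;
}
```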
- Default to non-streaming for Claude Messages API and Responses API (matching real API defaults)
- README rewritten as concise overview with links to docs site
- Write-fixtures skill updated for all v1.5.0 features
- Docs site: Get Started links to docs, comparison above reliability, npm version badge
- Gemini Live handler no longer crashes on malformed `clientContent.turns` and `toolResponse.functionResponses`
- Added an `isClosed` guard before WebSocket finalization events (prevents writes to closed connections)
- `streamingProfile` now present on convenience-method opts types (`on`, `onMessage`, etc.)
- skills/ symlink direction corrected so `npm pack` includes the write-fixtures skill
- `.claude` removed from package.json `files` (was dead weight — the symlink doesn't ship)
- Watcher cleanup on error (clear debounce timer, null guard)
- Empty-reload guard (keep previous fixtures when a reload produces 0)
- Dead `@keyframes sseLine` CSS removed from the docs site
- `--watch` (`-w`): file watching with 500ms debounced reload. Keeps previous fixtures on validation failure.
- `--log-level`: configurable log verbosity (`silent`, `info`, `debug`). Default `info` for the CLI, `silent` for the programmatic API.
- `--validate-on-load`: fixture schema validation at startup — checks response types, tool-call JSON, numeric ranges, shadowing, and catch-all positioning.
- `validateFixtures()` exported for programmatic use
- `Logger` class exported for programmatic use
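The three-level verbosity flag can be modeled as an ordered severity gate: `silent` suppresses everything, `info` passes info, `debug` passes info and debug. The level names come from the changelog; the gate logic is an illustrative sketch:

```typescript
// Sketch: ordered log-level gate for silent/info/debug.
const LEVELS = ["silent", "info", "debug"] as const;
type Level = (typeof LEVELS)[number];

function shouldLog(configured: Level, message: "info" | "debug"): boolean {
  // A message is emitted when its verbosity does not exceed the configured
  // level; "silent" (index 0) therefore emits nothing.
  if (configured === "silent") return false;
  return LEVELS.indexOf(message) <= LEVELS.indexOf(configured);
}
```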
- WebSocket drift detection tests: TLS client for real provider WS endpoints, 4 verified drift tests (Responses WS + Realtime), Gemini Live canary for text-capable model availability
- Realtime model canary: detects when `gpt-4o-mini-realtime-preview` is deprecated and suggests the GA replacement
- Gemini Live documented as unverified (no text-capable `bidiGenerateContent` model exists yet)
- Responses WS handler now accepts the flat `response.create` format matching the real OpenAI API (previously required a non-standard nested `response: { ... }` envelope)
- README Gemini Live response shape example corrected (`modelTurn.parts`, not `modelTurnComplete`)
- Live API drift detection test suite: three-layer triangulation between SDK types, real API responses, and llmock output across OpenAI (Chat + Responses), Anthropic Claude, and Google Gemini
- Weekly CI workflow for automated drift checks
- `DRIFT.md` documentation for the drift detection system
- Missing `refusal` field on OpenAI Chat Completions responses — both the SDK and real API return `refusal: null` on non-refusal messages, but llmock was omitting it
- Claude Code fixture authoring skill (`/write-fixtures`) — comprehensive guide for match fields, response types, agent-loop patterns, gotchas, and debugging
- Claude Code plugin structure for downstream consumers (`--plugin-dir`, `--add-dir`, or manual copy)
- README and docs site updated with Claude Code integration instructions
- Mid-stream interruption: `truncateAfterChunks` and `disconnectAfterMs` fixture fields to simulate abrupt server disconnects
- AbortSignal-based cancellation primitives (`createInterruptionSignal`, signal-aware `delay()`)
- Backward-compatible `writeSSEStream` overload with `StreamOptions` returning completion status
- Interruption support across all HTTP SSE and WebSocket streaming paths
- `destroy()` method on `WebSocketConnection` for abrupt disconnect simulation
- Journal records `interrupted` and `interruptReason` on interrupted streams
- LLMock convenience API extended with interruption options (`truncateAfterChunks`, `disconnectAfterMs`)
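The `truncateAfterChunks` behavior amounts to cutting the stream off after N chunks without sending any terminal event, so clients experience an abrupt disconnect. A minimal sketch of that loop (simplified; the real implementation threads this through AbortSignal-aware SSE and WebSocket paths and records the interruption in the journal):

```typescript
// Sketch: emit at most `truncateAfterChunks` chunks, then stop abruptly
// (no [DONE] sentinel, no close frame), reporting completion status.
function writeTruncated(
  chunks: string[],
  truncateAfterChunks: number,
  write: (c: string) => void,
): { interrupted: boolean } {
  for (let i = 0; i < chunks.length; i++) {
    if (i >= truncateAfterChunks) return { interrupted: true }; // abrupt cut
    write(chunks[i]);
  }
  return { interrupted: false }; // stream completed normally
}
```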
- Zero-dependency RFC 6455 WebSocket framing layer
- OpenAI Responses API over WebSocket (`/v1/responses`)
- OpenAI Realtime API over WebSocket (`/v1/realtime`) — text + tool calls
- Gemini Live BidiGenerateContent over WebSocket — text + tool calls
- Future Direction section in README
- WebSocket close-frame lifecycle
- Improved error visibility across WebSocket handlers
- Function call IDs on Gemini tool call responses
- Changesets (simplified release workflow)
- 9948a8b: `prependFixture()` and `getFixtures()` public API methods
- `getTextContent` for array-format message content handling