<!--- generated changelog --->
## [2025-12-02]

### llama-index-agent-azure [0.2.1]

- fix: Pin azure-ai-projects version to prevent breaking changes ([#20255](https://github.com/run-llama/llama_index/pull/20255))

### llama-index-core [0.14.9]

- MultiModalVectorStoreIndex now returns a multi-modal ContextChatEngine. ([#20265](https://github.com/run-llama/llama_index/pull/20265))
- Ingestion to vector store now ensures that _node-content is readable ([#20266](https://github.com/run-llama/llama_index/pull/20266))
- fix: ensure context is copied with async utils run_async ([#20286](https://github.com/run-llama/llama_index/pull/20286))
- fix(memory): ensure first message in queue is always a user message after flush ([#20310](https://github.com/run-llama/llama_index/pull/20310))

### llama-index-embeddings-bedrock [0.7.2]

- feat(embeddings-bedrock): Add support for Amazon Bedrock Application Inference Profiles ([#20267](https://github.com/run-llama/llama_index/pull/20267))
- fix(embeddings-bedrock): correct extraction of provider from model_name ([#20295](https://github.com/run-llama/llama_index/pull/20295))
- Bump version of bedrock-embedding ([#20304](https://github.com/run-llama/llama_index/pull/20304))

### llama-index-embeddings-voyageai [0.5.1]

- VoyageAI correction and documentation ([#20251](https://github.com/run-llama/llama_index/pull/20251))

### llama-index-llms-anthropic [0.10.3]

- feat: add anthropic opus 4.5 ([#20306](https://github.com/run-llama/llama_index/pull/20306))

### llama-index-llms-bedrock-converse [0.12.2]

- fix(bedrock-converse): Only use guardrail_stream_processing_mode in streaming functions ([#20289](https://github.com/run-llama/llama_index/pull/20289))
- feat: add anthropic opus 4.5 ([#20306](https://github.com/run-llama/llama_index/pull/20306))
- feat(bedrock-converse): Additional support for Claude Opus 4.5 ([#20317](https://github.com/run-llama/llama_index/pull/20317))

### llama-index-llms-google-genai [0.7.4]

- Fix gemini-3 support and gemini function call support ([#20315](https://github.com/run-llama/llama_index/pull/20315))

### llama-index-llms-helicone [0.1.1]

- update helicone docs + examples ([#20208](https://github.com/run-llama/llama_index/pull/20208))

### llama-index-llms-openai [0.6.10]

- Smallest Nit ([#20252](https://github.com/run-llama/llama_index/pull/20252))
- Feat: Add gpt-5.1-chat model support ([#20311](https://github.com/run-llama/llama_index/pull/20311))

### llama-index-llms-ovhcloud [0.1.0]

- Add OVHcloud AI Endpoints provider ([#20288](https://github.com/run-llama/llama_index/pull/20288))

### llama-index-llms-siliconflow [0.4.2]

- [Bugfix] None check on content in delta in siliconflow LLM ([#20327](https://github.com/run-llama/llama_index/pull/20327))

### llama-index-node-parser-docling [0.4.2]

- Relax docling Python constraints ([#20322](https://github.com/run-llama/llama_index/pull/20322))

### llama-index-packs-resume-screener [0.9.3]

- feat: Update pypdf to latest version ([#20285](https://github.com/run-llama/llama_index/pull/20285))

### llama-index-postprocessor-voyageai-rerank [0.4.1]

- VoyageAI correction and documentation ([#20251](https://github.com/run-llama/llama_index/pull/20251))

### llama-index-protocols-ag-ui [0.2.3]

- fix: correct order of ag-ui events to avoid event conflicts ([#20296](https://github.com/run-llama/llama_index/pull/20296))

### llama-index-readers-confluence [0.6.0]

- Refactor Confluence integration: Update license to MIT, remove requirements.txt, and implement HtmlTextParser for HTML to Markdown conversion. Update dependencies and tests accordingly. ([#20262](https://github.com/run-llama/llama_index/pull/20262))

### llama-index-readers-docling [0.4.2]

- Relax docling Python constraints ([#20322](https://github.com/run-llama/llama_index/pull/20322))

### llama-index-readers-file [0.5.5]

- feat: Update pypdf to latest version ([#20285](https://github.com/run-llama/llama_index/pull/20285))

### llama-index-readers-reddit [0.4.1]

- Fix typo in README.md for Reddit integration ([#20283](https://github.com/run-llama/llama_index/pull/20283))

### llama-index-storage-chat-store-postgres [0.3.2]

- [FIX] Postgres ChatStore automatically prefixes table name with "data_" ([#20241](https://github.com/run-llama/llama_index/pull/20241))

### llama-index-vector-stores-azureaisearch [0.4.4]
- `vector-azureaisearch`: check if user agent is already in policy before adding it to the azure client ([#20243](https://github.com/run-llama/llama_index/pull/20243))
- fix(azureaisearch): Add close/aclose methods to fix unclosed client session warnings ([#20309](https://github.com/run-llama/llama_index/pull/20309))

### llama-index-vector-stores-milvus [0.9.4]

- Fix consistency level param for Milvus ([#20268](https://github.com/run-llama/llama_index/pull/20268))

### llama-index-vector-stores-postgres [0.7.2]

- Fix postgresql dispose ([#20312](https://github.com/run-llama/llama_index/pull/20312))

### llama-index-vector-stores-qdrant [0.9.0]

- fix: Update qdrant-client version constraints ([#20280](https://github.com/run-llama/llama_index/pull/20280))
- Feat: update Qdrant client to 1.16.0 ([#20287](https://github.com/run-llama/llama_index/pull/20287))

### llama-index-vector-stores-vertexaivectorsearch [0.3.2]

- fix: update blob path in batch_update_index ([#20281](https://github.com/run-llama/llama_index/pull/20281))

### llama-index-voice-agents-openai [0.2.2]

- Smallest Nit ([#20252](https://github.com/run-llama/llama_index/pull/20252))

## [2025-11-10]

### llama-index-core [0.14.8]