Commit e3e9e72
feat: upgrade MiniMax provider with M2.7 models as default (#1928)
## Summary

- Add **MiniMax-M2.7** and **MiniMax-M2.7-highspeed** as the newest model options, positioned as the default (first) models in the MiniMax provider dropdown
- Retain the existing M2.5 and Text-01 models for backward compatibility
- Update unit and integration tests to verify M2.7 model availability

## Changes

| File | Change |
|------|--------|
| `data.json` | Add M2.7/M2.7-highspeed models; reorder so M2.7 is the default |
| `test_minimax_provider.py` | Add M2.7 model tests; update model count assertions |

## Why M2.7?

MiniMax-M2.7 is the latest generation model from MiniMax, offering improved reasoning and generation quality over M2.5. The highspeed variant provides faster inference for latency-sensitive workloads. Both models use the same OpenAI-compatible API endpoint (`https://api.minimax.io/v1`).

## Test plan

- [x] All 35 unit tests pass
- [x] M2.7 and M2.7-highspeed API calls verified
- [ ] Integration tests pass with `MINIMAX_API_KEY` set
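Since both new models are served from the same OpenAI-compatible endpoint, a chat request body can be assembled generically. The sketch below is illustrative only: the model IDs and base URL come from this PR, but the helper function itself is hypothetical, not part of the changeset.

```python
# Illustrative sketch: building an OpenAI-compatible chat-completions request
# for the new default model. Endpoint and model names come from this PR;
# the helper function is hypothetical.
MINIMAX_BASE_URL = "https://api.minimax.io/v1"


def build_chat_request(prompt: str, model: str = "MiniMax-M2.7") -> dict:
    """Assemble the JSON body for a POST to {base_url}/chat/completions."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


payload = build_chat_request("Hello, MiniMax!")
print(payload["model"])  # MiniMax-M2.7
```

The same body works for the highspeed variant by passing `model="MiniMax-M2.7-highspeed"`, which is what lets the provider reuse a single generic client.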
2 parents 39bc019 + 0692cd5

5 files changed

Lines changed: 495 additions & 22 deletions


src/backend/bisheng/core/ai/__init__.py

Lines changed: 1 addition & 2 deletions

```diff
@@ -1,5 +1,5 @@
 from langchain_anthropic import ChatAnthropic
-from langchain_community.chat_models import ChatTongyi, ChatZhipuAI, MiniMaxChat, MoonshotChat
+from langchain_community.chat_models import ChatTongyi, ChatZhipuAI, MoonshotChat
 from langchain_community.document_compressors import DashScopeRerank
 from langchain_community.embeddings import DashScopeEmbeddings
 from langchain_deepseek import ChatDeepSeek
@@ -30,7 +30,6 @@
     'AzureChatOpenAI',
     'ChatTongyi',
     'ChatZhipuAI',
-    'MiniMaxChat',
     'ChatAnthropic',
     'ChatDeepSeek',
     'MoonshotChat',
```

src/backend/bisheng/llm/domain/llm/llm.py

Lines changed: 2 additions & 15 deletions

```diff
@@ -15,7 +15,7 @@
 from bisheng.common.errcode.server import NoLlmModelConfigError, LlmModelConfigDeletedError, LlmProviderDeletedError, \
     LlmModelTypeError, LlmModelOfflineError, InitLlmError
 from bisheng.core.ai import ChatOllama, ChatOpenAI, ChatOpenAICompatible, \
-    AzureChatOpenAI, ChatZhipuAI, MiniMaxChat, ChatAnthropic, MoonshotChat
+    AzureChatOpenAI, ChatZhipuAI, ChatAnthropic, MoonshotChat
 from bisheng.core.ai.llm.custom_chat_deepseek import CustomChatDeepSeek
 from bisheng.core.ai.llm.custom_chat_tongyi import CustomChatTongYi
 from bisheng.llm.domain.const import LLMModelType, LLMServerType
@@ -110,19 +110,6 @@ def _get_qwen_params(params: dict, server_config: dict, model_config: dict) -> d
     return user_kwargs


-def _get_minimax_params(params: dict, server_config: dict, model_config: dict) -> dict:
-    params['minimax_api_key'] = server_config.get('openai_api_key')
-    params['base_url'] = server_config.get('openai_api_base').rstrip('/')
-    if 'max_tokens' not in params:
-        params['max_tokens'] = 2048
-    if '/chat/completions' not in params['base_url']:
-        params['base_url'] = f"{params['base_url']}/chat/completions"
-
-    user_kwargs = _get_user_kwargs(model_config)
-    user_kwargs.update(params)
-    return user_kwargs
-
-
 def _get_anthropic_params(params: dict, server_config: dict, model_config: dict) -> dict:
     params.update(server_config)

@@ -166,7 +153,7 @@ def _get_spark_params(params: dict, server_config: dict, model_config: dict) ->
     LLMServerType.QWEN.value: {'client': CustomChatTongYi, 'params_handler': _get_qwen_params},
     LLMServerType.QIAN_FAN.value: {'client': ChatOpenAICompatible, 'params_handler': _get_openai_params},
     LLMServerType.ZHIPU.value: {'client': ChatZhipuAI, 'params_handler': _get_zhipu_params},
-    LLMServerType.MINIMAX.value: {'client': MiniMaxChat, 'params_handler': _get_minimax_params},
+    LLMServerType.MINIMAX.value: {'client': ChatOpenAICompatible, 'params_handler': _get_openai_params},
     LLMServerType.ANTHROPIC.value: {'client': ChatAnthropic, 'params_handler': _get_anthropic_params},
     LLMServerType.DEEPSEEK.value: {'client': CustomChatDeepSeek, 'params_handler': _get_openai_params},
     LLMServerType.SPARK.value: {'client': ChatOpenAICompatible, 'params_handler': _get_spark_params},
```
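The llm.py diff works because the provider registry maps each server type to a client class plus a params handler, so retiring the bespoke `_get_minimax_params` in favor of the generic OpenAI handler is a one-entry change in the table. A simplified, self-contained sketch of that dispatch pattern (the names below are stand-ins, not bisheng's actual classes or signatures):

```python
# Simplified sketch of the client/params_handler dispatch table from the diff.
# Names are stand-ins for illustration, not bisheng's real implementations.

def get_openai_params(server_config: dict) -> dict:
    # Generic handler: MiniMax now reuses this instead of a bespoke one.
    return {
        "api_key": server_config.get("openai_api_key"),
        "base_url": server_config.get("openai_api_base"),
    }


DISPATCH = {
    "minimax": {"client": "ChatOpenAICompatible", "params_handler": get_openai_params},
    "spark": {"client": "ChatOpenAICompatible", "params_handler": get_openai_params},
}

entry = DISPATCH["minimax"]
params = entry["params_handler"]({
    "openai_api_key": "sk-test",
    "openai_api_base": "https://api.minimax.io/v1",
})
```

Routing MiniMax through the shared handler also removes the special-case URL surgery (appending `/chat/completions`, defaulting `max_tokens`) that the deleted function performed.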
