Commit 31e32b5

[fix] Remove reasoning_max_tokens = max_tokens * 0.8 default in sampling_params (#4294)

* [fix] Modify continuation parameters and the reasoning-length validation method (#4086)
  * Rename the continuation parameter generated_token_ids to completion_token_ids; change how the reasoning length is validated
  * Add completion_token_ids
  * Add logger
  * Fix reasoning_max_tokens ParameterError
  * Add unit tests
* [fix] Update apply_chat_template (#4137)
  * Update apply_chat_template
  * Fix unit tests
* Fix reasoning_max_tokens
1 parent aebe12a commit 31e32b5

1 file changed

Lines changed: 0 additions & 2 deletions

File tree

fastdeploy/engine/sampling_params.py

@@ -159,8 +159,6 @@ def from_optional(
     def __post_init__(self):
         if self.seed is None:
             self.seed = random.randint(0, 922337203685477580)
-        if self.max_tokens is not None and self.reasoning_max_tokens is None:
-            self.reasoning_max_tokens = max(int(self.max_tokens * 0.8), 1)
         self._verify_args()

     def _verify_args(self) -> None:
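The removed lines silently defaulted reasoning_max_tokens to 80% of max_tokens whenever the caller did not set it explicitly. A minimal sketch of the behavior change (the dataclass below is a simplified stand-in for illustration, not the real FastDeploy SamplingParams definition):

```python
import random
from dataclasses import dataclass
from typing import Optional


@dataclass
class SamplingParams:
    """Simplified stand-in for FastDeploy's SamplingParams (illustrative only)."""

    max_tokens: Optional[int] = None
    reasoning_max_tokens: Optional[int] = None
    seed: Optional[int] = None

    def __post_init__(self):
        if self.seed is None:
            self.seed = random.randint(0, 922337203685477580)
        # Before this commit, reasoning_max_tokens was silently derived here:
        #     if self.max_tokens is not None and self.reasoning_max_tokens is None:
        #         self.reasoning_max_tokens = max(int(self.max_tokens * 0.8), 1)
        # After the fix, it stays None unless the caller sets it explicitly.


params = SamplingParams(max_tokens=100)
print(params.reasoning_max_tokens)  # None after this commit; would have been 80 before
```

With the default removed, downstream code must treat reasoning_max_tokens=None as "not constrained" rather than assuming a value derived from max_tokens.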
