[Request] Support the o1-mini and o1-preview models #3944
Thank you for raising an issue. We will investigate the matter and get back to you as soon as possible.
🥰 Description of requirements

The APIs for the o1-mini and o1-preview models do not support streaming, and their temperature must be exactly 1, so these models currently cannot be called. We hope support for them can be added.

🧐 Solution

Adapt the provider to the o1-mini and o1-preview models.

📝 Supplementary information
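Concretely, a request that satisfies both constraints would look like the following minimal sketch (assuming the official OpenAI Node SDK; the `callO1` helper is illustrative, not part of any existing codebase):

```typescript
import OpenAI from 'openai';

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Sketch of the two constraints from this report: the o1 models reject
// stream: true and any temperature other than 1.
async function callO1(messages: OpenAI.ChatCompletionMessageParam[]) {
  return client.chat.completions.create({
    model: 'o1-mini',
    messages,
    stream: false, // streaming is not supported by o1-mini / o1-preview
    temperature: 1, // the only value these models accept
  });
}
```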
@Xiaokai6880 Hello! I'm Dosu, a bot that can help you fix bugs, answer questions, and become a contributor. I'll help you while you wait for a human maintainer.

Yes, this can be adapted. Below is how model-specific parameters are already handled, for example in `buildCompletionParamsByModel`:

```typescript
private buildCompletionParamsByModel(payload: ChatStreamPayload) {
  const { model, temperature, top_p, stream, messages, tools } = payload;
  const isVisionModel = model.startsWith('qwen-vl');

  const params = {
    ...payload,
    messages,
    result_format: 'message',
    stream: !!tools?.length ? false : (stream ?? true),
    temperature:
      temperature === 0 || temperature >= 2 ? undefined : temperature === 1 ? 0.999 : temperature, // 'temperature' must be Float
    top_p: top_p && top_p >= 1 ? 0.999 : top_p,
  };

  /* Qwen-vl models temporarily do not support the parameters below. */
  /* Notice: `top_p` has a significant impact on the result; the default 1 or 0.999 is not a proper choice. */
  return isVisionModel
    ? omit(
        params,
        'presence_penalty',
        'frequency_penalty',
        'temperature',
        'result_format',
        'top_p',
      )
    : omit(params, 'frequency_penalty');
}
```

In addition, `LobeMoonshotAI` and `LobeBaichuanAI` show how providers adjust `temperature` in `handlePayload`:

```typescript
import OpenAI from 'openai';

import { ChatStreamPayload, ModelProvider } from '../types';
import { LobeOpenAICompatibleFactory } from '../utils/openaiCompatibleFactory';

export const LobeMoonshotAI = LobeOpenAICompatibleFactory({
  baseURL: 'https://api.moonshot.cn/v1',
  chatCompletion: {
    handlePayload: (payload: ChatStreamPayload) => {
      const { temperature, ...rest } = payload;
      return {
        ...rest,
        temperature: temperature !== undefined ? temperature / 2 : undefined,
      } as OpenAI.ChatCompletionCreateParamsStreaming;
    },
  },
  debug: {
    chatCompletion: () => process.env.DEBUG_MOONSHOT_CHAT_COMPLETION === '1',
  },
  provider: ModelProvider.Moonshot,
});

export const LobeBaichuanAI = LobeOpenAICompatibleFactory({
  baseURL: 'https://api.baichuan-ai.com/v1',
  chatCompletion: {
    handlePayload: (payload: ChatStreamPayload) => {
      const { temperature, ...rest } = payload;
      return {
        ...rest,
        // [baichuan] frequency_penalty must be between 1 and 2.
        frequency_penalty: undefined,
        temperature: temperature !== undefined ? temperature / 2 : undefined,
      } as OpenAI.ChatCompletionCreateParamsStreaming;
    },
  },
  debug: {
    chatCompletion: () => process.env.DEBUG_BAICHUAN_CHAT_COMPLETION === '1',
  },
  provider: ModelProvider.Baichuan,
});
```

Make sure the same kind of handling is applied for `o1-mini` and `o1-preview`: disable streaming and pin `temperature` to 1.
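For `o1-mini` and `o1-preview` specifically, the same factory pattern could force both constraints. A minimal sketch follows; the export name `LobeO1AwareOpenAI` and the `startsWith('o1')` check are illustrative assumptions, not the fix that actually shipped:

```typescript
import OpenAI from 'openai';

import { ChatStreamPayload, ModelProvider } from '../types';
import { LobeOpenAICompatibleFactory } from '../utils/openaiCompatibleFactory';

// Sketch only: pins the parameters the o1 API enforces, mirroring the
// provider examples above.
export const LobeO1AwareOpenAI = LobeOpenAICompatibleFactory({
  baseURL: 'https://api.openai.com/v1',
  chatCompletion: {
    handlePayload: (payload: ChatStreamPayload) => {
      const { model, stream, temperature, ...rest } = payload;
      const isO1Model = model.startsWith('o1');
      return {
        ...rest,
        model,
        stream: isO1Model ? false : stream, // o1 models reject streaming
        temperature: isO1Model ? 1 : temperature, // o1 models require temperature === 1
      } as OpenAI.ChatCompletionCreateParamsNonStreaming;
    },
  },
  provider: ModelProvider.OpenAI,
});
```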
This issue is closed. If you have any questions, you can comment and reply.
🎉 This issue has been resolved in version 1.17.0 🎉

The release is available on:

Your semantic-release bot 📦🚀
Can you please publish to the latest Docker image?