💄 style: add qwen vision model & update qwen2.5 72b to 128k for siliconcloud #4380
Conversation
@LovelyGuYiMeng is attempting to deploy a commit to the LobeChat Community Team on Vercel. A member of the Team first needs to authorize it.
Thank you for raising your pull request and contributing to our community.
**Codecov Report** — All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

```
@@           Coverage Diff           @@
##             main    #4380   +/-  ##
=======================================
  Coverage   92.21%   92.22%
=======================================
  Files         493      493
  Lines       35390    35432    +42
  Branches     2304     2305     +1
=======================================
+ Hits        32634    32676    +42
  Misses       2756     2756
```

Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
https://docs.siliconflow.cn/features/function_calling#3 SiliconCloud's LLAMA does not support Function Call for now... you can test it with the script below.
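The test script referenced here is not preserved in this thread. As a hypothetical sketch (the model ID and tool schema below are assumptions based on the generic OpenAI-compatible chat-completions format, not the original script), one could build a function-calling request like this and then check whether the provider's response contains `tool_calls`:

```typescript
// Hypothetical request-body builder for an OpenAI-compatible /chat/completions
// call with tools; the model ID and tool schema are assumptions, not the
// original test script from this thread.
interface Tool {
  type: 'function';
  function: { name: string; description: string; parameters: object };
}

function buildFunctionCallRequest(model: string, userMessage: string, tools: Tool[]) {
  return {
    model,
    messages: [{ role: 'user' as const, content: userMessage }],
    // If the backend ignores this field, the model effectively does not
    // support function calling.
    tools,
  };
}

const weatherTool: Tool = {
  type: 'function',
  function: {
    name: 'get_weather',
    description: 'Get the current weather for a city',
    parameters: {
      type: 'object',
      properties: { city: { type: 'string' } },
      required: ['city'],
    },
  },
};

// POST this body to the provider's chat-completions endpoint and inspect the
// response for a tool_calls field.
const body = buildFunctionCallRequest(
  'meta-llama/Meta-Llama-3.1-70B-Instruct', // assumed model ID
  'What is the weather in Beijing?',
  [weatherTool],
);
```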
But the llama models do carry the tool tag.
Check it out later.
Hmm, but in practice it's not supported.
I asked customer support; they confirmed it will be supported within the next few days, so there's no harm in adding it now.
There is also a free 8B model that hasn't been released yet; I'll add it together with the others once it's released. For now I'm adding the Qwen models first.
❤️ Great PR @LovelyGuYiMeng ❤️ The growth of the project is inseparable from user feedback and contributions; thanks for your contribution! If you are interested in the lobehub developer community, please join our discord and then DM @arvinxx or @canisminor1990. They will invite you to our private developer channel, where we talk about lobe-chat development and share AI news from around the world.
### [Version 1.22.7](v1.22.6...v1.22.7) <sup>Released on **2024-10-17**</sup>

#### 💄 Styles

- **misc**: Add qwen vision model & update qwen2.5 72b to 128k for siliconcloud, closes [#4380](#4380) ([e8c009b](e8c009b))
🎉 This PR is included in version 1.22.7 🎉 The release is available on: Your semantic-release bot 📦🚀
💻 变更类型 | Change Type
🔀 变更说明 | Description of Change
- Add the Qwen2 VL 72B vision model
- Add the 书生 series of vision models
- Update the Qwen2.5 72B context window to 128K
- Add function-call support for Llama 3.1 (not yet supported upstream; to be enabled later)
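The model changes above can be sketched as provider model-card entries. This is a hypothetical illustration: the `ModelCard` interface, the field names (`tokens`, `vision`, `functionCall`), and the Qwen2 VL context size are assumptions modeled loosely on lobe-chat's provider-config shape, not the actual PR diff.

```typescript
// Hypothetical sketch of SiliconCloud model-card entries; field names and the
// Qwen2 VL context size are assumptions, not copied from the PR diff.
interface ModelCard {
  id: string;
  displayName: string;
  tokens: number;         // context window size
  vision?: boolean;       // supports image input
  functionCall?: boolean; // supports tool / function calling
}

const siliconcloudModels: ModelCard[] = [
  {
    id: 'Qwen/Qwen2-VL-72B-Instruct',
    displayName: 'Qwen2 VL 72B',
    tokens: 32_768, // assumed context size for illustration
    vision: true,   // the new vision model
  },
  {
    id: 'Qwen/Qwen2.5-72B-Instruct',
    displayName: 'Qwen2.5 72B',
    tokens: 131_072, // context window updated to 128K
  },
];
```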
Note to Arvin:
Because the HF Llama 3.1 model ID is identical to SiliconCloud's Llama 3.1 model ID, and HF has higher priority than SiliconCloud, SiliconCloud's Llama 3.1 is still treated as a model without function-call support even after `functioncall` is added.
Given the model ID conflict, and since the HF platform hosts a huge number of models, letting users add HF models themselves is the better option, so the Llama models are removed from HF.
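The ID-collision behavior described in this note can be shown with a small sketch. The merge logic and names below are hypothetical (lobe-chat's actual resolution code is not shown in this thread); it only demonstrates why a higher-priority provider's entry shadows the lower-priority one's `functionCall` flag.

```typescript
// Hypothetical illustration of the model-ID collision: when two providers
// expose the same ID and earlier lists take priority, the higher-priority
// entry shadows the other's capability flags.
interface ProviderModelCard {
  id: string;
  provider: string;
  functionCall?: boolean;
}

// Assumed merge rule: the first occurrence of an ID wins.
function mergeByPriority(...lists: ProviderModelCard[][]): ProviderModelCard[] {
  const seen = new Map<string, ProviderModelCard>();
  for (const list of lists) {
    for (const card of list) {
      if (!seen.has(card.id)) seen.set(card.id, card);
    }
  }
  return Array.from(seen.values());
}

const huggingface: ProviderModelCard[] = [
  { id: 'meta-llama/Meta-Llama-3.1-8B-Instruct', provider: 'huggingface' },
];
const siliconcloud: ProviderModelCard[] = [
  { id: 'meta-llama/Meta-Llama-3.1-8B-Instruct', provider: 'siliconcloud', functionCall: true },
];

// HF is merged first, so its entry (without functionCall) survives; this is
// why the note proposes removing the Llama models from the HF list instead.
const merged = mergeByPriority(huggingface, siliconcloud);
```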
📝 补充信息 | Additional Information