Local models? #2
Comments
That's on our todo list ;) |
👍 Thanks! Will wait for that. |
It only supports GPT-V for now. We plan to incorporate more models in the future. |
I tried to edit the config file to point the URL at http://127.0.0.1:11434/v1/chat/completions |
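For anyone experimenting with a local endpoint like this, a quick way to check that the server at http://127.0.0.1:11434/v1/chat/completions is reachable and speaks the OpenAI chat-completions format is a small standalone script. This is a sketch based on Ollama's OpenAI-compatible API, not on UFO's own config handling; the model name `llava` is an assumption.

```python
import json
import urllib.request

# Local Ollama server exposing an OpenAI-compatible endpoint (assumed).
ENDPOINT = "http://127.0.0.1:11434/v1/chat/completions"


def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }


def send(payload: dict) -> dict:
    """POST the payload to the local endpoint and return the parsed JSON."""
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Only runs when a local server is actually listening on port 11434.
    reply = send(build_chat_request("llava", "Say hello."))
    print(reply["choices"][0]["message"]["content"])
```

If this script works but UFO still fails, the problem is more likely in how the framework parses the model's output than in the endpoint itself.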
Hi @calamity10110 , the current framework does not support non-OpenAI models. We are working on it and will release a new feature for this soon. |
I am not an OpenAI subscriber; can I still use UFO? I followed your instructions, and my config file is as follows. I set the model to gpt-3.5. |
Well, I am waiting until local models can be used. |
@FinnT730 @Justin-12138 You can now use models in Ollama for your local model deployment (in the pre-release branch). Please read https://github.com/microsoft/UFO/blob/pre-release/model_worker/README.md for details, and expect worse performance than GPT-V. |
@calamity10110 You can now use models in Ollama for Llava deployment (in the pre-release branch). Please read https://github.com/microsoft/UFO/blob/pre-release/model_worker/README.md for details, and expect worse performance than GPT-V. |
Thanks for the update!
Have a good day, and thanks for working on this feature :)
|
@zsb87 It appears that your local model or API is refusing to respond. Usually this is because the model has limited functionality. Can you tell me your model version? |
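When a local model "refuses to respond" like this, the reply often arrives as a well-formed HTTP response containing an empty or stock-apology message rather than an error. A small helper can separate the two cases; this is a sketch, and the refusal phrases are illustrative assumptions, not an exhaustive list.

```python
def extract_reply(response: dict) -> str:
    """Pull the assistant text out of an OpenAI-style chat response."""
    choices = response.get("choices", [])
    if not choices:
        return ""
    return choices[0].get("message", {}).get("content", "")


# Illustrative markers only; real refusals vary by model.
REFUSAL_MARKERS = ("i cannot", "i'm sorry", "i am sorry", "as an ai")


def looks_like_refusal(text: str) -> bool:
    """Heuristic: empty output or a stock refusal phrase."""
    stripped = text.strip().lower()
    return not stripped or any(m in stripped for m in REFUSAL_MARKERS)
```

Logging which case occurred makes it easier to tell a prompt problem (refusal text) from a capability problem (empty or malformed output).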
@Mac0q . This is my model version: |
@zsb87 I think llava:7b is still too weak for this task. We will try to optimize the prompt to make it doable, but GPT-4V is for sure the best choice. |
Will local models be supported one day as well?
(Unless they are, and I didn't find it in the readme XD)