support api_endpoint for ollama #15
Conversation
This is really awesome, thank you! What do you think about explicitly adding a chat_endpoint and a generate_endpoint in the config options for Ollama?
I think adding separate chat_endpoint and generate_endpoint options for Ollama isn't necessary for now, because once Ollama is running it provides both chat and generate functionality at /api/chat and /api/generate. But it is fine to have two config options, since that gives better consistency across providers.
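As a side note, here is a minimal sketch of how two optional endpoint options like these could be modeled on the Rust side, assuming the config is deserialized with serde and serde_json. The struct name OllamaConfig and the exact field handling are illustrative only, not the actual lsp-ai source:

```rust
// Illustrative only: a hypothetical config struct with the two optional
// endpoint options discussed above; not the actual lsp-ai implementation.
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct OllamaConfig {
    // Optional chat endpoint, e.g. "http://localhost:11434/api/chat".
    #[serde(default)]
    chat_endpoint: Option<String>,
    // Optional generate endpoint, e.g. "http://localhost:11434/api/generate".
    #[serde(default)]
    generate_endpoint: Option<String>,
    model: String,
}

fn main() {
    // Unknown fields such as "type" are ignored by serde's default behavior.
    let json = r#"{
        "type": "ollama",
        "chat_endpoint": "http://localhost:11434/api/chat",
        "model": "deepseek-coder"
    }"#;
    let config: OllamaConfig = serde_json::from_str(json).expect("valid config");
    println!("{config:?}");
}
```

With both fields optional, a user can set either endpoint independently, which is the consistency argument above.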
If you can make the suggested changes and resolve the merge conflict I will merge it in! Thank you!
Sorry, I misunderstood your suggestion. The suggested changes have been committed.
Don't apologize, you are totally fine. I think we are having some issues communicating. If you scroll to the top of the page, you will see I have added some suggestions for changes. You can click commit on those directly, or you can add them yourself. Please verify they work and I will merge the PR. Thank you for your contributions!
Checked, it works in my environment. I can set it up with this config.
Generate example:
```json
{
  "lsp-ai.serverConfiguration": {
    "memory": {
      "file_store": {}
    },
    "models": {
      "model1": {
        "type": "ollama",
        "completions_endpoint": "http://remote:11434/api/generate",
        "model": "deepseek-coder"
      }
    }
  }
}
```
Chat example:
```json
{
  "lsp-ai.serverConfiguration": {
    "memory": {
      "file_store": {}
    },
    "models": {
      "model1": {
        "type": "ollama",
        "chat_endpoint": "http://localhost:11434/api/chat",
        "model": "deepseek-coder"
      }
    }
  },
  "lsp-ai.generationConfiguration": {
    "model": "model1",
    "parameters": {
      "messages": [
        {
          "role": "system",
          "content": "Instructions:"
        }
      ]
    }
  }
}
```
I didn't find the comment in the "Files changed" tab. Are you talking about "explicitly adding a chat_endpoint and a generate_endpoint in the config options for Ollama"?
Do you mean that when only chat_endpoint is set, lsp-ai should use chat completions instead of generate completions?
Both cases
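For clarity, here is a rough sketch of the dispatch behavior being agreed on: when chat-style messages are configured and a chat_endpoint is set, use /api/chat, otherwise fall back to /api/generate. The OllamaEndpoints struct and pick_endpoint function are hypothetical illustrations, not the merged lsp-ai code:

```rust
// Hypothetical sketch of the endpoint selection discussed above; not the
// actual lsp-ai implementation.
struct OllamaEndpoints {
    chat_endpoint: Option<String>,
    generate_endpoint: Option<String>,
}

fn pick_endpoint(cfg: &OllamaEndpoints, has_chat_messages: bool) -> &str {
    // Prefer the chat endpoint only when chat messages are configured and it is set.
    if has_chat_messages {
        if let Some(chat) = cfg.chat_endpoint.as_deref() {
            return chat;
        }
    }
    // Otherwise fall back to the generate endpoint, or a local default.
    cfg.generate_endpoint
        .as_deref()
        .unwrap_or("http://localhost:11434/api/generate")
}

fn main() {
    let cfg = OllamaEndpoints {
        chat_endpoint: Some("http://localhost:11434/api/chat".to_string()),
        generate_endpoint: None,
    };
    // With chat messages present, the chat endpoint wins.
    assert_eq!(pick_endpoint(&cfg, true), "http://localhost:11434/api/chat");
    // Without chat messages, fall back to the default generate endpoint.
    assert_eq!(pick_endpoint(&cfg, false), "http://localhost:11434/api/generate");
    println!("dispatch sketch ok");
}
```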
Looks pretty good! You should be able to change:
To:
My bad, it's done. I had never heard of deref before.
No worries! Awesome work, I'm going to test locally and merge it tonight! Thanks for contributing!