Completion on available Ollama models support for quarkus.langchain4j.ollama.chat-model.model-id property value (#985)
Conversation
If Ollama is not running, then the list of models cannot be collected, and an error is logged to the lsp4mp output. I think it's okay to not list the models, but I think we should prevent the error from being logged.
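A minimal sketch of the quiet handling discussed here, assuming a hypothetical fetchModels() helper for the HTTP call (the class, method, and logger names are illustrative, not the PR's actual code):

```java
import java.io.IOException;
import java.util.Collections;
import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;

/**
 * Illustrative sketch (not the PR's actual code): when Ollama is down,
 * the connection failure is an expected situation, so it is logged at
 * FINE without a stack trace and an empty model list is returned,
 * instead of dumping the exception to the lsp4mp output.
 */
public class QuietOllamaModelFetcher {

    private static final Logger LOGGER =
            Logger.getLogger(QuietOllamaModelFetcher.class.getName());

    public static List<String> getModelsOrEmpty(String baseUrl) {
        try {
            return fetchModels(baseUrl);
        } catch (IOException e) {
            // Expected when the Ollama server is not running: no stack trace.
            LOGGER.log(Level.FINE, "Cannot connect to Ollama at {0}", baseUrl);
            return Collections.emptyList();
        }
    }

    // Placeholder for the real HTTP request to the Ollama server.
    private static List<String> fetchModels(String baseUrl) throws IOException {
        throw new IOException("Connection refused: " + baseUrl);
    }
}
```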
Gotta update my ollama; I'm on an older version.
I did that to understand some trouble with ollama. Perhaps a better approach would be to report this error as an LSP diagnostic to notify the user that it cannot connect to Ollama? @fbricon @datho7561 what do you think about this idea?
That's too bad :( Perhaps we should use the native Ollama API instead (https://github.com/ollama/ollama/blob/main/docs/api.md)? @fbricon what do you think about that?
Yeah, /api/tags will cover more ollama versions.
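For reference, a minimal sketch of consuming /api/tags, assuming Gson is available on the classpath (lsp4j already depends on it); the class name is illustrative:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import com.google.gson.JsonObject;
import com.google.gson.JsonParser;

/**
 * Illustrative sketch of consuming /api/tags (not the PR's actual code).
 * GET {base-url}/api/tags answers {"models":[{"name":"llama3:latest",...},...]}
 * and exists in older Ollama releases that predate the OpenAI-compatible
 * /v1/models endpoint.
 */
public class OllamaTagsClient {

    public static List<String> listModels(String baseUrl) {
        try {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(baseUrl + "/api/tags"))
                    .GET()
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            JsonObject body = JsonParser.parseString(response.body()).getAsJsonObject();
            List<String> names = new ArrayList<>();
            body.getAsJsonArray("models")
                .forEach(m -> names.add(m.getAsJsonObject().get("name").getAsString()));
            return names;
        } catch (Exception e) {
            // Treat an unreachable Ollama as "no models" rather than an error.
            return Collections.emptyList();
        }
    }
}
```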
I'd rather avoid adding annoying diags. I'd say we ignore this problem for the time being. We might revisit this decision over time, depending on user feedback.
I updated to the latest ollama. The PR works very well! The default ollama installation method on Linux sets ollama up as a background service, so a user is unlikely to run into the error I ran into. (Oops, meant to send this a while ago.)
It means that I need to switch to /api/tags, right?
OK, let's give up on the diagnostics, but should we hide the error stack trace like @datho7561 reported?
Thanks! You mean using /v1 instead of /api/tags?
No, I just needed to reinstall ollama. In order to install ollama, I ran their install script. The first time I installed ollama I didn't follow their instructions; I just downloaded the binary.
@datho7561 I updated my PR. Here, the null is the e.getMessage().
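Since e.getMessage() can return null, as in the log mentioned here, a common guard is to fall back to the exception class name; a minimal illustrative sketch (not the PR's actual code):

```java
/**
 * Illustrative helper: build a readable log message even when the
 * exception carries no message, so the lsp4mp output shows e.g.
 * "java.net.ConnectException" instead of "null".
 */
public final class ErrorMessages {

    private ErrorMessages() {
    }

    public static String describe(Exception e) {
        String message = e.getMessage();
        return message != null ? message : e.getClass().getName();
    }
}
```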
Please add missing copyright headers.
@datho7561 I did several refactorings, please retry it.
Fixed.
Works well for me, and the code looks good. Thanks, Angelo! The test failures seem to be because lsp4mp hasn't built a snapshot yet.
Thanks so much @fbricon @datho7561 for your great review!
feat: Available Ollama models support for quarkus.langchain4j.ollama.chat-model.model-id property value

This PR requires eclipse/lsp4mp#460.
Those models are retrieved by consuming the Ollama API ${base-url}/v1/models, where base-url is the value of the quarkus.langchain4j.ollama.base-url property in application.properties; a minimal configuration is sketched below.

Here is the result in vscode with completion: (screenshot)

Here is the result with validation: (screenshot)
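For reference, a minimal application.properties that exercises this completion; the base URL shown is Ollama's conventional default, and the model name is illustrative:

```properties
# Base URL of the Ollama server (http://localhost:11434 is Ollama's default)
quarkus.langchain4j.ollama.base-url=http://localhost:11434

# Completion and validation now use the models installed in that Ollama instance
quarkus.langchain4j.ollama.chat-model.model-id=llama3
```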
As the error message is generic, it is not very good; LSP4MP must be improved to allow contributing a custom validation error message.
The Ollama models are cached, so if you add a new model in Ollama, you need to close and reopen the file to refresh the cache (not tested, but I think it should work); see the sketch below.
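A minimal sketch of the caching behaviour described above, with illustrative names (not the PR's actual code; it reuses the hypothetical OllamaTagsClient from the earlier sketch):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Illustrative sketch: the model list is fetched at most once per base URL
 * and reused for every completion/validation request; closing and reopening
 * the file would evict the entry and force a fresh fetch.
 */
public class OllamaModelCache {

    private final Map<String, List<String>> modelsByBaseUrl = new ConcurrentHashMap<>();

    public List<String> getModels(String baseUrl) {
        // computeIfAbsent performs the fetch only on the first request
        return modelsByBaseUrl.computeIfAbsent(baseUrl, OllamaTagsClient::listModels);
    }

    public void evict(String baseUrl) {
        modelsByBaseUrl.remove(baseUrl);
    }
}
```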