Ollama component cannot get models from Ollama server #2885
Comments
@nguyentran0212 Hello! I'm here to assist you with the issue you've encountered. I can help you solve bugs, answer questions, and even guide you on how to contribute. Let's work together to resolve this! To resolve the issue where the Ollama component in Langflow cannot retrieve models from the Ollama server, resulting in a 404 error if the Llama-2 model is not installed, make sure the Ollama server is running and that the required model has been pulled.
Here is the relevant part of the code that attempts to retrieve the models from the Ollama server:

```python
def get_model(self, url: str) -> list[str]:
    try:
        with httpx.Client() as client:
            response = client.get(url)
            response.raise_for_status()
            data = response.json()
            model_names = [model["name"] for model in data.get("models", [])]
            return model_names
    except Exception as e:
        raise ValueError("Could not retrieve models. Please, make sure Ollama is running.") from e
```

Additionally, the `OllamaEmbeddingsComponent` class is defined as follows:

```python
class OllamaEmbeddingsComponent(CustomComponent):
    display_name: str = "Ollama Embeddings"
    description: str = "Embeddings model from Ollama."
    documentation = "https://python.langchain.com/docs/integrations/text_embedding/ollama"
    beta = True

    def build_config(self):
        return {
            "model": {
                "display_name": "Ollama Model",
            },
            "base_url": {"display_name": "Ollama Base URL"},
            "temperature": {"display_name": "Model Temperature"},
            "code": {"show": False},
        }

    def build(
        self,
        model: str = "llama2",
        base_url: str = "http://localhost:11434",
        temperature: Optional[float] = None,
    ) -> Embeddings:
        try:
            output = OllamaEmbeddings(model=model, base_url=base_url, temperature=temperature)  # type: ignore
        except Exception as e:
            raise ValueError("Could not connect to Ollama API.") from e
        return output
```

By ensuring that the Llama-2 model is installed and the Ollama server is properly configured, the component should be able to retrieve the models without encountering a 404 error [1][2]. |
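A quick way to check what the component will see is to query the same model-list endpoint yourself. A minimal sketch, assuming a default local Ollama install (which serves its model list at /api/tags on port 11434):

```python
import httpx

# Standalone check: list the models the Ollama server reports.
# Assumes Ollama is running locally on its default port; adjust the base URL
# if your server is elsewhere (e.g. inside Docker, use host.docker.internal).
base_url = "http://localhost:11434"

response = httpx.get(f"{base_url}/api/tags", timeout=10.0)
response.raise_for_status()

models = [model["name"] for model in response.json().get("models", [])]
print(models)  # e.g. ['llama2:latest'] -- if the list is empty, pull a model first
```

If the expected model is missing from the returned list, pulling it first (for example `ollama pull llama2`) should make it show up in the component's dropdown after a refresh.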
I'm having the same issue. It's not pulling the list of Ollama models. It worked in version 1.0.10, but something broke in versions 1.0.11 and 1.0.12. |
Hi @nguyentran0212 @NightRang3r, thanks for this submission. This was a bug I introduced in a recent PR. @saikolasani will merge the fix ASAP! |
Hi @nguyentran0212 and @NightRang3r, thanks again for the submission! The issue has been fixed and is being merged. It should be available in the next release. Feel free to raise another issue if something pops up. |
Hi! I'm having the same issue right here. Can you help me find the piece of code to change? Or maybe it is better to wait for the next release. Thank you |
Howdy folks, I can confirm that the error has been fixed in version v1.0.13 (I use the Docker image). Thanks @saikolasani and @jordanrfrazier for the quick fix. Now I can finally learn to use this tool. |
@pedrobergaglio Are you still experiencing the same issue? |
@saikolasani, it's working already. Thank you! |
Hello everyone, I have the same issue. When I press the refresh button on the right side of the model name, I get the message below: "Error while updating the Component". I checked for all updates, installed and uninstalled all Ollama models and Langflow several times, and tried every solution in this topic as well. Regards |
@sinanalmac Thanks for letting us know. Taking a look at this fix in #3497 |
Reopening the issue: when tested, the issue still persists when there is a leading / in the URL. |
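For anyone hitting the slash problem, the usual culprit is that the base URL and the API path get joined into a malformed URL (for example a double slash). A small illustrative sketch of the kind of normalization that avoids it (a hypothetical helper, not the actual Langflow fix):

```python
def build_tags_url(base_url: str) -> str:
    # Illustrative helper (not Langflow's own code): strip stray slashes so
    # "http://localhost:11434/" and "http://localhost:11434" both produce
    # the same well-formed endpoint.
    return f"{base_url.rstrip('/')}/api/tags"

assert build_tags_url("http://localhost:11434/") == "http://localhost:11434/api/tags"
assert build_tags_url("http://localhost:11434") == "http://localhost:11434/api/tags"
```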
I have the same issue when I use the newest docker version of LangFlow. |
@kaneyxx I had the same issue, and it was incredibly frustrating, but I found the cause. Aside from any Langflow issues or configurations discussed above, which may or may not apply to your case (read the discussion above and judge for your own situation), my problem was not with Langflow itself (which I updated to the latest version with the above fixes applied) but with Docker's networking:
Refer to "How to connect to a host service from inside a container" for details. Just posting this to help fellow users out. |
Hi @kaneyxx and @tbui-isgn,
I recently tested the Ollama LLM component in Langflow 1.0.18, as well as in Docker, and I'm happy to report that it's working well. I'm considering creating a PR for the Ollama Embeddings component to improve its stability. Additionally, I agree with @tbui-isgn that when deploying in Docker or other environments, we should ensure the Ollama URL is handled correctly to avoid any issues. I've attached videos of the testing for reference: Screen.Recording.2024-09-17.at.12.01.35.PM.mov, Screen.Recording.2024-09-17.at.12.31.04.PM.mov |
Currently closing the issue. Please let us know if you face any further issues. |
Could you please help to fix it? |
Same issue, tried everything. I have a client presentation tomorrow to extol the virtues of local model usage, but no chance now. |
Ran into the same thing. It looks like networking is not set up properly in the Docker image. I did not have time to debug it; however, as a workaround, using the default Docker DNS should work. So, if you replace the Base URL with http://host.docker.internal:11434 it should work - at least it did for me. |
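To confirm whether Docker networking is the problem in your setup, a quick probe run from inside the Langflow container shows which address actually reaches the Ollama server. A rough sketch, assuming Ollama listens on its default port 11434 on the host:

```python
import httpx

# Quick connectivity check to run from inside the Langflow container.
# "localhost" resolves to the container itself, while "host.docker.internal"
# resolves to the Docker host (built in on Docker Desktop; on Linux it may
# require an extra_hosts/host-gateway entry in the compose file).
for base_url in ("http://localhost:11434", "http://host.docker.internal:11434"):
    try:
        response = httpx.get(f"{base_url}/api/tags", timeout=5.0)
        print(f"{base_url} -> HTTP {response.status_code}")
    except httpx.HTTPError as exc:
        print(f"{base_url} -> unreachable: {exc}")
```

Whichever base URL returns HTTP 200 is the one to put in the component's Ollama Base URL field.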
+1 |
Is there any update on this? I just installed Langflow and am having the same issues with Ollama. |
Followed the instructions:
Clone the LangFlow repository: git clone https://github.com/langflow-ai/langflow.git
Navigate to the docker_example directory: cd langflow/docker_example
Run the Docker Compose file: docker compose up
LangFlow will now be accessible at http://localhost:7860/.
It installed, but gives an error when one tries to add Ollama in the flow. |
Basically the same error as above, in red: "Error while updating the component." |
@carlosrcoelho to prioritize |
@carlosrcoelho do you want me to pick it up along with the other bugs related to ollama with agents? @tschadha Does this occur only in docker instances of langflow? |
This is the only Langflow instance I have running. I'm having other issues installing on a Mac Mini M4 Pro, but the Docker setup worked, though it is throwing this error. Thanks for looking into this. |
@carlosrcoelho / @edwinjosechittilappilly Any ETA for this fix? |
I solved the issue. Here is what one needs to do (please capture this in your documentation for others):
Basically, when running Langflow within Docker, it can't find any process outside of Docker. Please document this so others don't waste time; I spent hours on this issue. |
I'm not running Ollama in Docker, nor even Langflow, but I still see only one model (llama3.2) in the dropdown, even though I downloaded phi-4 and others.
If you want to use an Ollama model but can't see it in the dropdown menu, you need to make a small change in the Python code. Simple... just change the DropdownInput inside ChatOllamaComponent. Here is an example for using deepseek-r1 and phi4 (see the sketch below):
|
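A rough sketch of the kind of edit being described, assuming ChatOllamaComponent declares its model selector as a DropdownInput with a hard-coded options list; import paths and field names can differ between Langflow versions, so check your installed source:

```python
# Hypothetical excerpt -- not the exact Langflow source. Inside the
# ChatOllamaComponent inputs list, extend the options of the model dropdown
# so the models you have pulled locally show up in the UI.
from langflow.io import DropdownInput  # import path may vary by version

model_input = DropdownInput(
    name="model_name",
    display_name="Model Name",
    # Add the Ollama model tags you have pulled, e.g. via `ollama pull phi4`.
    options=["llama3.2", "deepseek-r1", "phi4"],
    value="llama3.2",
)
```

Hard-coding the list is only a workaround; as the get_model code quoted earlier suggests, the dropdown is ultimately meant to be populated from the server's own model list.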
Bug Description
The Ollama component in Langflow does not pick up the models from the Ollama server, leading to a 404 error if the Llama-2 model is not installed in Ollama.
This error was observed both with the Langflow Docker container and when running Langflow directly from Python using the getting-started instructions.
My Ollama instance is running directly on macOS (not in a container).
I have confirmed the following:
Langflow version: v1.0.12
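To see the underlying 404 outside of Langflow, you can ask the Ollama server directly for a model that is not installed. A small sketch, assuming a default local Ollama install on which llama2 has not been pulled:

```python
import httpx

# Probe: request a generation from a model that is not installed.
# Ollama responds with HTTP 404 when the requested model has not been pulled,
# which is the same status the Langflow component surfaces.
resp = httpx.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama2", "prompt": "hello", "stream": False},
    timeout=30.0,
)
print(resp.status_code)  # 404 if llama2 is not installed; 200 once it is pulled
```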
Reproduction
Steps:
Expected behavior
Ollama component detects and displays models from the Ollama server, so that flows can be executed correctly.
Who can help?
No response
Operating System
macOS 14.3.1, Ubuntu 22.04
Langflow Version
1.0.12
Python Version
3.12