
Ollama component cannot get models from Ollama server #2885

Closed
nguyentran0212 opened this issue Jul 23, 2024 · 37 comments · Fixed by #2935, #3575 or #5748
Labels
bug Something isn't working

Comments

@nguyentran0212

Bug Description

Screenshot 2024-07-23 at 6 02 37 pm

The Ollama component in Langflow does not pick up the models from the Ollama server, leading to a 404 error if the Llama-2 model is not installed in Ollama.

This error was observed both with the Langflow Docker container and when running Langflow directly from Python, following the getting-started instructions.

My Ollama instance is running directly on macOS (not in a container).

I have confirmed the following:

  • Ollama is up and functional (my Open WebUI instance works with the Ollama)
  • Langflow's requests do reach Ollama, confirmed via tcpdump (and the fact that Langflow receives 404 error from Ollama)
  • The refresh button to retrieve the model name in the Ollama widget does not generate any request, confirmed via tcpdump

Langflow version: v1.0.12

Reproduction

Steps:

  1. Install and run Langflow directly using Python on the same machine as Ollama
  2. Open Langflow and add Ollama component
  3. Input the URL of Ollama (e.g., http://localhost:11434 in this case)
  4. Click the refresh button next to the model name in the Ollama component => model info is not retrieved
  5. Click the play button on the Ollama component => Langflow shows a 404 error if Ollama does not have the Llama-2 model downloaded

Expected behavior

The Ollama component detects and displays models from the Ollama server, so that flows can be executed correctly.

Who can help?

No response

Operating System

macOS 14.3.1, Ubuntu 22.04

Langflow Version

1.0.12

Python Version

3.12

nguyentran0212 added the bug (Something isn't working) label Jul 23, 2024

dosubot bot commented Jul 23, 2024

@nguyentran0212 Hello! I'm here to assist you with the issue you've encountered. I can help you solve bugs, answer questions, and even guide you on how to contribute. Let's work together to resolve this!

To resolve the issue where the Ollama component in Langflow cannot retrieve models from the Ollama server, resulting in a 404 error if the Llama-2 model is not installed, follow these steps:

  1. Ensure Ollama Server is Running: Verify that the Ollama server is running and accessible at the specified base URL (default is http://localhost:11434).

  2. Install Llama-2 Model: Make sure the Llama-2 model is installed on the Ollama server. You can refer to the Ollama documentation or use the Ollama API to install the required model.

  3. Check API Endpoint: Ensure that the model-listing endpoint /api/tags is reachable under the base URL. This endpoint returns the available models in JSON format, such as (see also the sanity-check sketch after this list):

    {
        "models": [
            {"name": "llama2:latest"},
            {"name": "mistral:latest"}
        ]
    }
  4. Verify Configuration in Langflow: Ensure that the base_url for the Ollama API endpoint is correctly set in the Langflow configuration. If not specified, it defaults to 'http://localhost:11434'.
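
As a quick sanity check outside Langflow, a minimal sketch along these lines can confirm what the server actually returns. It assumes Ollama's standard /api/tags listing endpoint on the default port and at least one model already pulled (for example via ollama pull llama2):

import httpx

# Query the Ollama model-listing endpoint directly (assumed: /api/tags on the default port).
base_url = "http://localhost:11434"
response = httpx.get(f"{base_url}/api/tags", timeout=5.0)
response.raise_for_status()

# Ollama returns {"models": [{"name": ..., ...}, ...]}; print just the names.
for model in response.json().get("models", []):
    print(model["name"])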

Here is the relevant part of the code that attempts to retrieve the models from the Ollama server:

import httpx


def get_model(self, url: str) -> list[str]:
    # Method excerpt: query the Ollama model-listing endpoint and return the model names.
    try:
        with httpx.Client() as client:
            response = client.get(url)
            response.raise_for_status()
            data = response.json()

            # Ollama returns {"models": [{"name": ..., ...}, ...]}; collect just the names.
            model_names = [model["name"] for model in data.get("models", [])]
            return model_names
    except Exception as e:
        raise ValueError("Could not retrieve models. Please, make sure Ollama is running.") from e

Additionally, the OllamaEmbeddingsComponent class in Langflow is designed to use the Ollama Embeddings model. Here is the relevant part of the code:

from typing import Optional

from langchain_community.embeddings import OllamaEmbeddings
from langchain_core.embeddings import Embeddings
from langflow.custom import CustomComponent  # import paths may differ across Langflow versions


class OllamaEmbeddingsComponent(CustomComponent):
    display_name: str = "Ollama Embeddings"
    description: str = "Embeddings model from Ollama."
    documentation = "https://python.langchain.com/docs/integrations/text_embedding/ollama"
    beta = True

    def build_config(self):
        # Fields exposed in the Langflow UI for this component.
        return {
            "model": {
                "display_name": "Ollama Model",
            },
            "base_url": {"display_name": "Ollama Base URL"},
            "temperature": {"display_name": "Model Temperature"},
            "code": {"show": False},
        }

    def build(
        self,
        model: str = "llama2",
        base_url: str = "http://localhost:11434",
        temperature: Optional[float] = None,
    ) -> Embeddings:
        # Wrap the LangChain OllamaEmbeddings client; the model must already be pulled on the server.
        try:
            output = OllamaEmbeddings(model=model, base_url=base_url, temperature=temperature)  # type: ignore
        except Exception as e:
            raise ValueError("Could not connect to Ollama API.") from e
        return output
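
For illustration only, a minimal usage sketch of the underlying wrapper that build() returns, assuming the langchain_community import path and a model you have already pulled (e.g. llama2):

from langchain_community.embeddings import OllamaEmbeddings

# Hedged sketch: call the LangChain wrapper directly against a locally pulled model.
embeddings = OllamaEmbeddings(model="llama2", base_url="http://localhost:11434")
vector = embeddings.embed_query("hello world")
print(len(vector))  # embedding dimensionality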

By ensuring that the Llama-2 model is installed and the Ollama server is properly configured, the component should be able to retrieve the models without encountering a 404 error [1][2].

To continue talking to Dosu, mention @dosu.

@nguyentran0212
Author

I confirm that when I hardcode the correct model name into the Ollama component (code below), the chain works correctly.

Screenshot 2024-07-23 at 6 31 43 pm

It seems the Ollama component does not pull the model list from Ollama.

@NightRang3r

I'm having the same issue. It's not pulling the list of Ollama models. It worked in version 1.0.10, but something broke in versions 1.0.11 and 1.0.12.

@jordanrfrazier
Collaborator

Hi @nguyentran0212 @NightRang3r , thanks for this submission. This was a bug I introduced in a recent PR.

@saikolasani will merge the fix asap!

@saikolasani
Contributor

Hi @nguyentran0212 and @NightRang3r, thanks again for the submission! The issue has been fixed and is being merged. It should be available in the next release. Feel free to raise another issue if something pops up.

@pedrobergaglio

Hi! I'm having the same issue here. Can you help me find the piece of code to change? Or maybe it's better to wait for the next release. Thank you!

@nguyentran0212
Author

Howdy folks, I can confirm that the error has been fixed in version v1.0.13 (I use the Docker image). Thanks @saikolasani and @jordanrfrazier for the quick fix. Now I can finally learn to use this tool.

@saikolasani
Contributor

@pedrobergaglio Are you still experiencing the same issue?

@pedrobergaglio

@saikolasani, it's working already. Thank you!

@sinanalmac

Hello everyone, I have the same issue. When I press the refresh button on the right side of the model name, I get the message below.

Error while updating the Component
invalid literal for int() with base 10: ''

image

I checked all of the updates, installed and uninstalled all Ollama models and Langflow several times, and I tried every solution in this topic as well.
I couldn't find a solution; please help me.

regards
Sinan

@jordanrfrazier
Collaborator

@sinanalmac Thanks for letting us know. Taking a look at this fix in #3497

@edwinjosechittilappilly
Collaborator

Reopening the issue: when tested, the problem still persists when there is a leading / in the URL.

@kaneyxx

kaneyxx commented Sep 5, 2024

I have the same issue when I use the newest Docker version of Langflow.
I checked that Ollama is running on my localhost, and I can use it smoothly.
But Langflow says it cannot get the model list from Ollama. Please help!
@edwinjosechittilappilly

@tbui-isgn

I have the same issue when I use the newest docker version of LangFlow. I checked my localhost that Ollama is running. I can use it very smoothly. But my LangFlow shows cannot get model list from Ollama. Please help! @edwinjosechittilappilly

@kaneyxx I had the same issue, and it was incredibly frustrating. Aside from the Langflow issues and configurations discussed above (which may or may not apply to your case; read the discussion and judge for yourself), my problem was not with Langflow itself (I had updated to the latest version, which includes the fixes above) but with Docker's networking:

  • By default, you can't make API calls from inside a Docker container to services running outside the container / container stack (i.e., services on the host machine served at localhost); from inside the container, the base URL http://localhost:{port} will not reach them.
  • The solution is to use the special DNS name host.docker.internal instead of localhost from inside the container, so your base URL should be http://host.docker.internal:{port} (see the quick check sketched below).

Refer to "How to connect to a host service from inside a container" for details.
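
To confirm which base URL is actually reachable from inside the Langflow container, a small check along these lines can help. It is only a sketch: it assumes it is run from inside the container, that httpx is available there, and that Ollama exposes the standard /api/tags endpoint:

import httpx

# Try both candidate base URLs from inside the container; whichever returns a model
# list is the value to put in the Ollama component's Base URL field.
for base_url in ("http://localhost:11434", "http://host.docker.internal:11434"):
    try:
        response = httpx.get(f"{base_url}/api/tags", timeout=3.0)
        response.raise_for_status()
        names = [model["name"] for model in response.json().get("models", [])]
        print(f"{base_url} -> OK, models: {names}")
    except Exception as exc:
        print(f"{base_url} -> unreachable ({exc})")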

Just posting this to help fellow users out.

@edwinjosechittilappilly
Collaborator

Hi @kaneyxx and @tbui-isgn,
Let me test it and get back to you. Happy to help resolve this error. I will let you know about further updates.

@edwinjosechittilappilly
Collaborator

I recently tested the Ollama LLM component in Langflow 1.0.18, as well as in Docker, and I’m happy to report that it’s working well. I’m considering creating a PR for the Ollama Embeddings component to improve its stability.

Additionally, I agree with @tbui-isgn that when deploying in Docker or other environments, we should ensure the Ollama URL is handled correctly to avoid any issues.

I’ve attached a video of the testing for reference.

Screen.Recording.2024-09-17.at.12.01.35.PM.mov
Screen.Recording.2024-09-17.at.12.31.04.PM.mov

@edwinjosechittilappilly
Collaborator

Currently closing the issue. Please let us know if you face any further issues.

@kiracyc

kiracyc commented Nov 25, 2024

This bug is occurring again.
image

@kiracyc

kiracyc commented Nov 25, 2024

Could you please help fix it?

@paritoshebbal-mfs

paritoshebbal-mfs commented Nov 29, 2024

Facing the same error (just installed Langflow v1.1.1 using Docker). Also verified that Ollama is running.
image

image

@byronichero

Same issue; I've tried everything. I have a client presentation tomorrow to extol the virtues of local model usage, but there's no chance now.

@jkovar

jkovar commented Dec 6, 2024

Ran into the same thing. It looks like networking is not set up properly in the Docker image. I did not have time to debug it; however, as a workaround, using the default Docker DNS name should work. If you replace the base URL with http://host.docker.internal:11434 it should work, at least it did for me.

@OctAg0nO

OctAg0nO commented Dec 7, 2024

+1

@byronichero

byronichero commented Dec 11, 2024 via email

@tschadha

tschadha commented Jan 9, 2025

Is there any update on this? I just installed Langflow and am having the same issues with Ollama.

@tschadha

tschadha commented Jan 9, 2025

I followed the instructions:

Clone the LangFlow repository:

git clone https://github.com/langflow-ai/langflow.git

Navigate to the docker_example directory:

cd langflow/docker_example

Run the Docker Compose file:

docker compose up

LangFlow will now be accessible at http://localhost:7860/.

It installed, but it gives an error when one tries to add Ollama to the flow.

@tschadha

tschadha commented Jan 9, 2025

Basically the same error as above, in red:

Error while updating the component.

@jordanrfrazier
Collaborator

@carlosrcoelho to prioritize

@edwinjosechittilappilly
Collaborator

@carlosrcoelho do you want me to pick it up along with the other bugs related to Ollama with agents?

@tschadha Does this occur only in Docker instances of Langflow?

@tschadha

tschadha commented Jan 9, 2025

This is the only Langflow instance I have running. I'm having other issues installing on a Mac Mini M4 Pro, but the Docker setup worked, though it is throwing this error.

Thanks for looking into this.

@tschadha

tschadha commented Jan 9, 2025

Here is the screenshot:
Screenshot 2025-01-08 at 9 26 45 PM

@tschadha

tschadha commented Jan 9, 2025

Ollama is running, and there are multiple models:
Screenshot 2025-01-08 at 9 30 13 PM

@tschadha

tschadha commented Jan 9, 2025

@carlosrcoelho / @edwinjosechittilappilly

Any ETA for this fix?

@tschadha

I solved the issue. Here is what one needs to do (please capture this in your documentation for others):

  1. Had to run Ollama within Docker
  2. Then specify the following URL from within Langflow: http://host.docker.internal:11434

Basically, when running Langflow within Docker, it can't reach any process outside of Docker.

Please document this so others don't waste time. I spent hours on this issue.

@tzelalouzeir

I'm not running Ollama in Docker, nor Langflow, but I still see only one model (llama3.2), even though I downloaded phi-4 and others.

@starazan

starazan commented Jan 27, 2025

Same issue here:

I run langflow:latest
I run Ollama

Whenever I click the refresh button on the Langflow Ollama component, it doesn't do anything.

It works with Langflow 1.0.19.

![Image](https://github.com/user-attachments/assets/0ead38a0-962e-417a-a5bb-4fd1368c165c)

@tzelalouzeir

If you want to use an Ollama model but can't see it in the dropdown menu, you can make a simple change in the Python code: just edit the DropdownInput inside ChatOllamaComponent. Here is an example using deepseek-r1 and phi4:

class ChatOllamaComponent(LCModelComponent):
    # SAME PRECEDING CODE

    # DropdownInput(
    #     name="model_name",
    #     display_name="Model Name",
    #     options=[],
    #     info="Refer to https://ollama.com/library for more models.",
    #     refresh_button=True,
    #     real_time_refresh=True,
    # ),
    DropdownInput(
        name="model_name",
        display_name="Model Name",
        options=["deepseek-r1:1.5b", "phi4:latest"],  # hardcode the models you have pulled locally
        info="Set manually",
    ),

    # SAME FOLLOWING CODE
