
How to use Gemini with txtai #843

Closed · igorlima opened this issue Dec 26, 2024 · 6 comments

@igorlima (Contributor) commented Dec 26, 2024:

I've been exploring the possibilities of using Google Gemini with txtai, but I haven't found any references to Gemini in the documentation yet.

Is there a way to embed text in txtai using Gemini? The documentation references other LLMs, but Gemini seems missing.

Here's a snippet of what I've attempted using the litellm method, though I haven't had any success so far:

import os
os.environ["GEMINI_API_KEY"] = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

from txtai import LLM
llm = LLM("gemini/gemini-pro", method="litellm")
llm("Where is one place you'd go in Washington, DC?")

from txtai import Embeddings
embeddings = Embeddings(path="litellm/gemini/gemini-pro")
data = [
  "US tops 5 million confirmed virus cases",
  "Canada's last fully intact ice shelf has suddenly collapsed, forming a Manhattan-sized iceberg",
]
embeddings.index(data)

I'd appreciate any guidance if anyone has insights or knows of documentation or examples on using Gemini with txtai.

@davidmezzetti (Member) commented:

Hello, first off thank you for the kind words!

What kind of error are you receiving?

@igorlima (Contributor, Author) commented Dec 26, 2024:

Before jumping into the error I'm facing, let me share my setup. I use a Docker container running Python 3.12.2 on macOS.

  • here's a quick peek at my setup:
    • I start the Docker container with this command:
      docker run \
        --name python-learning --rm \
        --mount src=`realpath .`,target=/home/local,type=bind \
        --workdir /home/local \
        -it python:3.12.2 bash
    • Since the Docker container is Linux-based, I needed to install libsndfile (see the links in the requirements.txt comments below). Here's a handy command to get it installed:
      apt update
      apt install libsndfile1
      
  • a Python script (main.py):

    I've been playing around with the following Python script (main.py) to test the Gemini LLM model. Here's a snippet:

    import os
    os.environ["GEMINI_API_KEY"] = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
    
    from txtai import LLM
    llm = LLM("gemini/gemini-pro", method="litellm")
    llm("Where is one place you'd go in Washington, DC?")
    
    from txtai import Embeddings
    embeddings = Embeddings(path="litellm/gemini/gemini-pro")
    
    data = [
      "US tops 5 million confirmed virus cases",
      "Canada's last fully intact ice shelf has suddenly collapsed, forming a Manhattan-sized iceberg",
      "Beijing mobilises invasion craft along coast as Taiwan tensions escalate",
      "The National Park Service warns against sacrificing slower friends in a bear attack",
      "Maine man wins $1M from $25 lottery ticket",
      "Make huge profits without work, earn up to $100,000 a day"
    ]
    
    embeddings.index(data)
    print("%-20s %s" % ("Query", "Best Match"))
    print("-" * 50)
    for query in ("feel good story", "climate change", "public health story", "war", "wildlife", "asia", "lucky", "dishonest junk"):
      uid = embeddings.search(query, 1)[0][0]
      print("%-20s %s" % (query, data[uid]))
    • requirements.txt
      # python3 -m venv my-env
      # source my-env/bin/activate
      #
      # pip3 show txtai
      # pip3 index versions txtai
      # pip3 index versions txtai | grep -E "[(][0-9]+([.][0-9]+)+[)]"
      # pip3 list
      #
      # pip3 install --no-cache --upgrade-strategy eager -I txtai==8.1.0
      #
      # pip3 install -r requirements.txt
      #
      txtai==8.1.0
      model2vec==0.3.3
      pudb==2024.1.3
      
      # For macOS, there's a runtime dependency on `libomp`.
      # Run `brew install libomp` in this case.
      #
      # For Linux, there's a runtime dependency on `libsndfile`.
      # https://github.com/bastibe/python-soundfile#installation
      # https://github.com/libsndfile/libsndfile/issues/297
      # ```
      # apt update
      # apt search libsndfile
      # apt install libsndfile1
      # apt install libsndfile1-dev
      # ```
      txtai[pipeline]

After setting everything up and running python3 main.py, I ran into this error:

  • the full error log
    Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
    LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.
    
    Traceback (most recent call last):
      File "/usr/local/lib/python3.12/site-packages/litellm/main.py", line 2262, in completion
        response = vertex_chat_completion.completion(  # type: ignore
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.12/site-packages/litellm/llms/vertex_ai/gemini/vertex_and_google_ai_studio_gemini.py", line 1228, in completion
        data = sync_transform_request_body(**transform_request_params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.12/site-packages/litellm/llms/vertex_ai/gemini/transformation.py", line 379, in sync_transform_request_body
        return _transform_request_body(
               ^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.12/site-packages/litellm/llms/vertex_ai/gemini/transformation.py", line 342, in _transform_request_body
        raise e
      File "/usr/local/lib/python3.12/site-packages/litellm/llms/vertex_ai/gemini/transformation.py", line 307, in _transform_request_body
        content = litellm.GoogleAIStudioGeminiConfig()._transform_messages(
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.12/site-packages/litellm/llms/gemini/chat/transformation.py", line 131, in _transform_messages
        return _gemini_convert_messages_with_history(messages=messages)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.12/site-packages/litellm/llms/vertex_ai/gemini/transformation.py", line 262, in _gemini_convert_messages_with_history
        raise e
      File "/usr/local/lib/python3.12/site-packages/litellm/llms/vertex_ai/gemini/transformation.py", line 253, in _gemini_convert_messages_with_history
        raise Exception(
    Exception: Invalid Message passed in - {'content': "Where is one place you'd go in Washington, DC?", 'role': 'prompt'}. File an issue https://github.com/BerriAI/litellm/issues
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/home/local/python/2024a11m24d-txtai-embeddings/main-gemini.py", line 32, in <module>
        llm("Where is one place you'd go in Washington, DC?")
      File "/usr/local/lib/python3.12/site-packages/txtai/pipeline/llm/llm.py", line 63, in __call__
        return self.generator(text, maxlength, stream, stop, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.12/site-packages/txtai/pipeline/llm/generation.py", line 54, in __call__
        results = self.execute(texts, maxlength, stream, stop, **kwargs)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.12/site-packages/txtai/pipeline/llm/generation.py", line 86, in execute
        return list(self.stream(texts, maxlength, stream, stop, **kwargs))
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.12/site-packages/txtai/pipeline/llm/litellm.py", line 60, in stream
        result = api.completion(
                 ^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.12/site-packages/litellm/utils.py", line 987, in wrapper
        raise e
      File "/usr/local/lib/python3.12/site-packages/litellm/utils.py", line 868, in wrapper
        result = original_function(*args, **kwargs)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.12/site-packages/litellm/main.py", line 3012, in completion
        raise exception_type(
              ^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2146, in exception_type
        raise e
      File "/usr/local/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2122, in exception_type
        raise APIConnectionError(
    litellm.exceptions.APIConnectionError: litellm.APIConnectionError: Invalid Message passed in - {'content': "Where is one place you'd go in Washington, DC?", 'role': 'prompt'}. File an issue https://github.com/BerriAI/litellm/issues

In short, it outputs the following error message:

Exception: Invalid Message passed in - {'content': "Where is one place you'd go in Washington, DC?", 'role': 'prompt'}. File an issue https://github.com/BerriAI/litellm/issues

@davidmezzetti (Member) commented:

Looks like you're running into the same issue as this: #841

The original LLM pipeline was designed to work with raw prompts (i.e. with chat templates manually applied). Over time, chat messages have become the default. The issue above added a new defaultrole parameter to address this. At some point soon, the switch will be flipped to make defaultrole="user" the default, and issues like this will go away.

In the meantime, here are two options.

  1. Pass chat messages instead of strings to the LLM pipeline.
     llm([{"role": "user", "content": "Where is one place you'd go in Washington, DC?"}])
  2. Install txtai from GitHub and set the defaultrole.
     llm("Where is one place you'd go in Washington, DC?", defaultrole="user")

As for the Embeddings, all of the supported models are documented here: https://docs.litellm.ai/docs/embedding/supported_embedding

I'm not exactly sure if Gemini models generate embeddings and/or if they are supported by LiteLLM though.
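
One quick way to verify is to call LiteLLM's embedding API directly; if the provider/model pair is supported, the call returns vectors. Here is a minimal sketch, assuming the Gemini embedding model name listed in the LiteLLM docs:

  import os
  import litellm

  os.environ["GEMINI_API_KEY"] = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

  # litellm.embedding returns an OpenAI-format EmbeddingResponse:
  # response.data is a list of {"embedding": [...], ...} entries
  response = litellm.embedding(model="gemini/text-embedding-004", input=["hello world"])
  print(len(response.data[0]["embedding"]))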

@igorlima (Contributor, Author) commented:

Absolutely; thank you so much for the helpful hint! 🙏 🕺

The resource shared in the previous comment was a game-changer. It successfully guided me through running the Gemini LLM pipeline and generating embeddings using Gemini. Here's a snippet of the code that worked seamlessly with the Gemini LLM model:

  • Gemini code snippet
    import os
    os.environ["GEMINI_API_KEY"] = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
    
    from txtai import LLM
    # https://neuml.github.io/txtai/install/
    # https://neuml.github.io/txtai/pipeline/text/llm/#example
    # https://ai.google.dev/gemini-api/docs/models/gemini
    llm = LLM("gemini/gemini-pro", method="litellm")
    # print(llm("Where is one place you'd go in Washington, DC?", defaultrole="user"))
    print(llm([{"role": "user", "content": "Where is one place you'd go in Washington, DC?"}]))
    
    from txtai import Embeddings
    # https://neuml.github.io/txtai/embeddings/configuration/vectors/#method
    # https://docs.litellm.ai/docs/providers/gemini
    # https://github.com/BerriAI/litellm/tree/12c4e7e695edb07d403dd14fc768a736638bd3d1/litellm/llms/vertex_ai
    # https://github.com/BerriAI/litellm/blob/e19bb55e3b4c6a858b6e364302ebbf6633a51de5/model_prices_and_context_window.json#L2625
    embeddings = Embeddings(path="gemini/text-embedding-004", method="litellm")
    
    # works with a list, dataset or generator
    data = [
      "US tops 5 million confirmed virus cases",
      "Canada's last fully intact ice shelf has suddenly collapsed, forming a Manhattan-sized iceberg",
      "Beijing mobilises invasion craft along coast as Taiwan tensions escalate",
      "The National Park Service warns against sacrificing slower friends in a bear attack",
      "Maine man wins $1M from $25 lottery ticket",
      "Make huge profits without work, earn up to $100,000 a day"
    ]
    
    # create an index for the list of text
    embeddings.index(data)
    print("%-20s %s" % ("Query", "Best Match"))
    print("-" * 50)
    # run an embeddings search for each query
    for query in ("feel good story", "climate change", "public health story", "war", "wildlife", "asia", "lucky", "dishonest junk"):
      # extract uid of first result
      # search result format: (uid, score)
      uid = embeddings.search(query, 1)[0][0]
      # print text
      print("%-20s %s" % (query, data[uid]))


Additionally, I've prepared four more samples for other models: VertexAI, Mistral, Cohere, and AWS Bedrock. Anyone searching for examples of these models can find them below.

  • other LLM models
    • VertexAI code snippet
      import os
      os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "application_default_credentials.json"
      
      import litellm
      # https://docs.litellm.ai/docs/embedding/supported_embedding#usage---embedding
      # https://docs.litellm.ai/docs/providers/vertex
      litellm.vertex_project = "hardy-device-38811" # Your Project ID
      litellm.vertex_location = "us-central1"  # proj location
      
      from txtai import LLM
      # https://neuml.github.io/txtai/install/
      # https://neuml.github.io/txtai/pipeline/text/llm/#example
      llm = LLM("vertex_ai/gemini-pro", method="litellm")
      
      # print(llm("Where is one place you'd go in Washington, DC?", defaultrole="user"))
      print(llm([{"role": "user", "content": "Where is one place you'd go in Washington, DC?"}]))
      
      from txtai import Embeddings
      # https://neuml.github.io/txtai/embeddings/configuration/vectors/#method
      # https://docs.litellm.ai/docs/providers/vertex
      # embeddings = Embeddings(path="text-embedding-004", method="litellm")
      embeddings = Embeddings(path="vertex_ai/text-embedding-004", method="litellm")
      
      # works with a list, dataset or generator
      data = [
        "US tops 5 million confirmed virus cases",
        "Canada's last fully intact ice shelf has suddenly collapsed, forming a Manhattan-sized iceberg",
        "Beijing mobilises invasion craft along coast as Taiwan tensions escalate",
        "The National Park Service warns against sacrificing slower friends in a bear attack",
        "Maine man wins $1M from $25 lottery ticket",
        "Make huge profits without work, earn up to $100,000 a day"
      ]
      
      # create an index for the list of text
      embeddings.index(data)
      print("%-20s %s" % ("Query", "Best Match"))
      print("-" * 50)
      # run an embeddings search for each query
      for query in ("feel good story", "climate change", 
          "public health story", "war", "wildlife", "asia",
          "lucky", "dishonest junk"):
        # extract uid of first result
        # search result format: (uid, score)
        uid = embeddings.search(query, 1)[0][0]
        # print text
        print("%-20s %s" % (query, data[uid]))
    • Mistral code snippet
      import os
      os.environ["MISTRAL_API_KEY"] = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
      
      from txtai import LLM
      # https://neuml.github.io/txtai/install/
      # https://neuml.github.io/txtai/pipeline/text/llm/#example
      llm = LLM("mistral/mistral-tiny", method="litellm")
      # print(llm("Where is one place you'd go in Washington, DC?", defaultrole="user"))
      print(llm([{"role": "user", "content": "Where is one place you'd go in Washington, DC?"}]))
      
      import litellm
      litellm.set_verbose=False
      
      from txtai import Embeddings
      # https://neuml.github.io/txtai/embeddings/configuration/vectors/#method
      # https://docs.litellm.ai/docs/providers/mistral
      embeddings = Embeddings(path="mistral/mistral-embed", method="litellm")
      
      # works with a list, dataset or generator
      data = [
        "US tops 5 million confirmed virus cases",
        "Canada's last fully intact ice shelf has suddenly collapsed, forming a Manhattan-sized iceberg",
        "Beijing mobilises invasion craft along coast as Taiwan tensions escalate",
        "The National Park Service warns against sacrificing slower friends in a bear attack",
        "Maine man wins $1M from $25 lottery ticket",
        "Make huge profits without work, earn up to $100,000 a day"
      ]
      
      # create an index for the list of text
      embeddings.index(data)
      print("%-20s %s" % ("Query", "Best Match"))
      print("-" * 50)
      # run an embeddings search for each query
      for query in ("feel good story", "climate change", 
          "public health story", "war", "wildlife", "asia",
          "lucky", "dishonest junk"):
        # extract uid of first result
        # search result format: (uid, score)
        uid = embeddings.search(query, 1)[0][0]
        # print text
        print("%-20s %s" % (query, data[uid]))
    • Cohere code snippet
      import os
      os.environ["COHERE_API_KEY"] = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
      
      from txtai import LLM
      # https://neuml.github.io/txtai/install/
      # https://neuml.github.io/txtai/pipeline/text/llm/#example
      llm = LLM("command-r", method="litellm")
      
      # print(llm("Where is one place you'd go in Washington, DC?", defaultrole="user"))
      print(llm([{"role": "user", "content": "Where is one place you'd go in Washington, DC?"}]))
      
      
      from txtai import Embeddings
      # https://neuml.github.io/txtai/embeddings/configuration/vectors/#method
      # https://docs.litellm.ai/docs/providers/cohere
      embeddings = Embeddings(path="cohere/embed-english-v3.0", method="litellm")
      
      # works with a list, dataset or generator
      data = [
        "US tops 5 million confirmed virus cases",
        "Canada's last fully intact ice shelf has suddenly collapsed, forming a Manhattan-sized iceberg",
        "Beijing mobilises invasion craft along coast as Taiwan tensions escalate",
        "The National Park Service warns against sacrificing slower friends in a bear attack",
        "Maine man wins $1M from $25 lottery ticket",
        "Make huge profits without work, earn up to $100,000 a day"
      ]
      
      # create an index for the list of text
      embeddings.index(data)
      print("%-20s %s" % ("Query", "Best Match"))
      print("-" * 50)
      # run an embeddings search for each query
      for query in ("feel good story", "climate change", 
          "public health story", "war", "wildlife", "asia",
          "lucky", "dishonest junk"):
        # extract uid of first result
        # search result format: (uid, score)
        uid = embeddings.search(query, 1)[0][0]
        # print text
        print("%-20s %s" % (query, data[uid]))
    • AWS Bedrock code snippet
      import os
      os.environ["AWS_ACCESS_KEY_ID"] = "xxxxxxxxxxxxxxxxxxxx"  # Access key
      os.environ["AWS_SECRET_ACCESS_KEY"] = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" # Secret access key
      os.environ["AWS_REGION_NAME"] = "us-west-2" # us-east-1, us-east-2, us-west-1, us-west-2
      
      from txtai import LLM
      # https://neuml.github.io/txtai/install/
      # https://neuml.github.io/txtai/pipeline/text/llm/#example
      llm = LLM("bedrock/amazon.titan-text-lite-v1", method="litellm")
      
      # print(llm("Where is one place you'd go in Washington, DC?", defaultrole="user"))
      print(llm([{"role": "user", "content": "Where is one place you'd go in Washington, DC?"}]))
      
      
      from txtai import Embeddings
      # https://neuml.github.io/txtai/embeddings/configuration/vectors/#method
      # https://docs.litellm.ai/docs/providers/bedrock
      embeddings = Embeddings(path="amazon.titan-embed-text-v1", method="litellm")
      
      # works with a list, dataset or generator
      data = [
        "US tops 5 million confirmed virus cases",
        "Canada's last fully intact ice shelf has suddenly collapsed, forming a Manhattan-sized iceberg",
        "Beijing mobilises invasion craft along coast as Taiwan tensions escalate",
        "The National Park Service warns against sacrificing slower friends in a bear attack",
        "Maine man wins $1M from $25 lottery ticket",
        "Make huge profits without work, earn up to $100,000 a day"
      ]
      
      # create an index for the list of text
      embeddings.index(data)
      print("%-20s %s" % ("Query", "Best Match"))
      print("-" * 50)
      # run an embeddings search for each query
      for query in ("feel good story", "climate change", 
          "public health story", "war", "wildlife", "asia",
          "lucky", "dishonest junk"):
        # extract uid of first result
        # search result format: (uid, score)
        uid = embeddings.search(query, 1)[0][0]
        # print text
        print("%-20s %s" % (query, data[uid]))
    • requirements.txt
      # python3 -m venv my-env
      # source my-env/bin/activate
      #
      # pip3 show txtai
      # pip3 index versions txtai
      # pip3 index versions txtai | grep -E "[(][0-9]+([.][0-9]+)+[)]"
      # pip3 list
      #
      # pip3 install --no-cache --upgrade-strategy eager -I txtai==8.1.0
      #
      # pip3 install -r requirements.txt
      #
      txtai==8.1.0
      model2vec==0.3.3
      pudb==2024.1.3
      
      # For macOS, there's a runtime dependency on `libomp`.
      # Run `brew install libomp` in this case.
      #
      # For Linux, there's a runtime dependency on `libsndfile`.
      # https://github.com/bastibe/python-soundfile#installation
      # https://github.com/libsndfile/libsndfile/issues/297
      # ```
      # apt update
      # apt search libsndfile
      # apt install libsndfile1
      # apt install libsndfile1-dev
      # ```
      txtai[pipeline]
      
      # to use Vertex AI
      # https://github.com/BerriAI/litellm/issues/5483
      google-cloud-aiplatform==1.75.0
      
      # to use AWS Bedrock
      # https://docs.litellm.ai/docs/providers/bedrock
      boto3==1.35.88

Once again, thank you for your support and guidance! 🙌 🎉

@davidmezzetti (Member) commented:

I'm glad this worked!

One minor thing: you shouldn't need method='litellm', but it doesn't hurt either.
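
In other words, something like this should also work (a minimal sketch; txtai infers the LiteLLM backend from the provider-prefixed model string):

  from txtai import LLM, Embeddings

  # No method="litellm" needed: txtai detects the LiteLLM backend
  # from the "provider/model" path string
  llm = LLM("gemini/gemini-pro")
  embeddings = Embeddings(path="gemini/text-embedding-004")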

I appreciate you documenting how to do this with all the model endpoints you did, thank you!

@igorlima (Contributor, Author) commented:

This issue has not only helped improve the documentation for the litellm library but also created an opportunity to enhance the documentation here.

Below is a proposed PR to add these embedding usages in the documentation.
