GEMINI-1.5-PRO Main Day-1 support🧵 #2881
See also: https://ai.google.dev/models/gemini |
To determine whether you are seeing a limitation related to your key, use this test: if you have a key that is good for Gemini Pro 1.0 but not 1.5, you will get a response for this request:
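(The exact request from this comment was not captured above; the following is a minimal sketch of that kind of key test via LiteLLM, with model strings and env var per the LiteLLM gemini docs.)

```python
import os
import litellm

os.environ["GEMINI_API_KEY"] = "your-key-here"  # placeholder

# If your key covers Gemini Pro 1.0 but not 1.5, the first call should
# succeed and the second should fail with an auth/permission error.
for model in ("gemini/gemini-pro", "gemini/gemini-1.5-pro-latest"):
    try:
        resp = litellm.completion(
            model=model,
            messages=[{"role": "user", "content": "ping"}],
        )
        print(model, "->", resp.choices[0].message.content[:40])
    except Exception as e:
        print(model, "-> failed:", type(e).__name__)
```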
|
cc: @Manouchehri you might have some context on this - #2841 |
@jpshack-at-palomar Will also update our exception here to point to this ticket. Thanks for this! |
I think you're right too. That said, Gemini's naming does seem to barely follow any convention at times, so I wouldn't be shocked if they change the naming when it goes into public preview. 😅 |
got it working if you force the update of the `google-generativeai` package
LLM response formatted:

Saying Hi from LiteLLM: Code Options

Here are a few ways to write code for saying hi from LiteLLM, depending on your desired output and programming language:

Python:

```python
print("Hi from LiteLLM!")
```

JavaScript:

```javascript
console.log("Hi from LiteLLM!");
```

C++:

```cpp
#include <iostream>

int main() {
    std::cout << "Hi from LiteLLM!" << std::endl;
    return 0;
}
```

Java:

```java
public class Main {
    public static void main(String[] args) {
        System.out.println("Hi from LiteLLM!");
    }
}
```

Using a Function:

```python
def say_hi():
    print("Hi from LiteLLM!")

say_hi()
```

This code defines a function called `say_hi` and then calls it.

Adding User Input:

```python
name = input("What is your name? ")
print(f"Hi {name}, from LiteLLM!")
```

This code asks the user for their name and then includes it in the greeting.

Choosing the Right Code:

Remember to choose the code that best suits your specific needs and programming language. |
I just received access to gemini/gemini-1.5-pro-latest this evening and can confirm @jameshiggie's result with the following versions:
I will open a PR for a correction to https://docs.litellm.ai/docs/providers/gemini to show the model name as `gemini/gemini-1.5-pro-latest`. |
Testing on my end as well - I believe the new genai module also supports system instructions |
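For reference, system instructions in the upstream SDK look roughly like this (a sketch assuming a recent `google-generativeai` release; this is the Google SDK, not LiteLLM code):

```python
import google.generativeai as genai

genai.configure(api_key="your-key-here")  # placeholder

# system_instruction is the new-SDK feature referenced above
model = genai.GenerativeModel(
    model_name="gemini-1.5-pro-latest",
    system_instruction="You are a terse assistant.",
)
print(model.generate_content("Say hi from LiteLLM").text)
```
|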
Pinning this thread for anyone else opening issues on the new gemini updates - will be easier to consolidate discussion here. |
Is there a plan to get async streaming working for "Google AI Studio - Gemini" as well? In testing it's great, but for our use case we need streaming :) |
Hey @jameshiggie this should already be working. Do you see an error? |
running the example case:
fails :( output:
|
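For anyone else hitting this, a minimal async-streaming sketch with LiteLLM (not the exact example case from this thread - the model name and env var are assumptions):

```python
import asyncio
import os

import litellm

os.environ["GEMINI_API_KEY"] = "your-key-here"  # placeholder

async def main():
    response = await litellm.acompletion(
        model="gemini/gemini-1.5-pro-latest",
        messages=[{"role": "user", "content": "Say hi from LiteLLM"}],
        stream=True,
    )
    # acompletion(stream=True) yields OpenAI-style chunks
    async for chunk in response:
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="", flush=True)

asyncio.run(main())
```
|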
ok - i'll work on repro'ing + push a fix @jameshiggie |
thanks! 🔥 |
After leaving the example case, I tried it out in a dev version of our product and it works! :D Using the same versions for both though... :S
A little disappointed that the streaming from Google is very chunky - I'm guessing due to safety screening. But that's easy to work around with some stream buffering on our side (see the sketch below). Thanks for helping with all of this again :) |
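One way to smooth chunky upstream output on the client side (pure illustration; the piece size and delay are arbitrary):

```python
import asyncio

async def smooth_stream(upstream, piece_size=16, delay=0.02):
    """Re-yield large upstream text chunks as smaller pieces for smoother UI updates."""
    async for chunk in upstream:
        for i in range(0, len(chunk), piece_size):
            yield chunk[i : i + piece_size]
            await asyncio.sleep(delay)
```
|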
@jameshiggie so this is a no-op? |
The example case still fails in a fresh venv with the same error as above; not sure why.
The chunking and yield behavior are slightly different between the two. It would be good to have some example code like this in the litellm docs that people can run quickly to test. |
I'm seeing a similar issue - it works on CI/CD but throws the responseiterator issue locally. Not sure why. I see a similar hang when I use the raw Google SDK, so I wonder if it's something in my environment. I'm able to get it working via curl, though. Will investigate and aim to have a better solution here by tomorrow @jameshiggie |
ok thanks! 🏋️ |
Sorry that I made a new issue (#2963) just for supporting the system message for Gemini 1.5 Pro. Also, I can confirm that the system message is supported by the playground UI in Google AI Studio, but unfortunately I can't find API documentation for how to pass the system message. |
Is |
I'm using |
@CXwudi system message is already supported for gemini - google ai studio - see litellm/litellm/llms/gemini.py, line 146, at commit 6e934cb
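For anyone landing here, a minimal sketch of what that looks like from the caller's side (the model name is an assumption):

```python
import litellm

response = litellm.completion(
    model="gemini/gemini-1.5-pro-latest",
    messages=[
        {"role": "system", "content": "You are a terse assistant."},
        {"role": "user", "content": "Say hi from LiteLLM"},
    ],
)
print(response.choices[0].message.content)
```
|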
@aleclarson
@aleclarson Curious - how do you use our model_prices json? |
Pending items:
|
@krrishdholakia My PR which adds LiteLLM support to Aider needs it for various metadata (most importantly, which models are supported). See here: https://github.com/paul-gauthier/aider/pull/549/files#diff-da3f6418cba825fc2eac007d80f318784be5cf8f0f9a27433e2693338ca4c8b9R114 |
@krrishdholakia You may want to merge #2964 before starting on Vertex AI system message. |
@aleclarson got it. The right way to check is by provider. For example, any model on Together AI can be called via litellm - it might not be in the map (which is used for tracking price and context window for popular models). You can check which providers we support via the reference at line 466 in cd834e9.
|
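A sketch of that provider-based check (`litellm.provider_list` and `litellm.get_llm_provider` are the public helpers I believe this refers to; treat the exact return shape as an assumption):

```python
import litellm

# Enumerate supported providers rather than relying on the model-price map.
print("gemini" in litellm.provider_list)

# Resolve the provider for an arbitrary model string.
model, provider, _, _ = litellm.get_llm_provider("gemini/gemini-1.5-pro-latest")
print(model, provider)
```
|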
@krrishdholakia Good to know. Looks like the |
The purpose is to avoid needing to upgrade litellm to get new models. Would welcome any improvement here. Related issue: #411 (comment) |
The code is correct, but it looks like we need to upgrade the `google-generativeai` dependency. |
I am using vertex_ai/gemini-1.5-pro-preview-0409, which should support function calls. However, using it with the proxy never returns a valid function call and crashes. It works well with other models. Any idea what I am doing wrong?

litellm | 17:11:27 - LiteLLM Router:DEBUG: router.py:1184 - Traceback
Traceback (most recent call last): |
Just added - should be fixed now @CXwudi. This was caused because we run a check on whether the gemini model supports vision. |
@demux79 tested on my end with our function calling test -
Works fine. I suspect this is an issue with the endpoint returning a weird response (maybe None?). If you're able to repro with |
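For reference, a minimal repro sketch in the OpenAI-style tools format LiteLLM accepts (the tool schema here is illustrative, not the reporter's actual function):

```python
import litellm

tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name"},
            },
            "required": ["location"],
        },
    },
}]

response = litellm.completion(
    model="vertex_ai/gemini-1.5-pro-preview-0409",
    messages=[{"role": "user", "content": "What's the weather in Boston?"}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)
```
|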
@krrishdholakia Thanks. Indeed, Gemini returns None for my function call. With a simple function call it works.
GPT-4 and Opus handle my more complicated function call quite well. It seems Gemini just isn't quite up to the same standard then ;)
|
For some reason, it is still not fixed for me. The last working version I tried is |
The problem is now solved with #3186, which is working in |
What happened?
This is a placeholder for others who have this issue. There is likely no bug that needs to be fixed in LiteLLM, but we won't know until more people have access to the `gemini-1.5-pro` API. There is some evidence that the model will actually be provided as `gemini-1.5-pro-latest`. Note that this issue DOES NOT relate to the Vertex API, which is a different API and LiteLLM provider.

Source code:
See https://docs.litellm.ai/docs/providers/gemini#pre-requisites
Versions
Exception:
See the upstream issue here: google-gemini/generative-ai-python#227