google-common [feature]: Support JSON Mode for Gemini #5071 #5190
Conversation
Overall looks great! Thank you for the update!
Some thoughts below.
Two additional questions:
- Do we know if this also works on the AI Studio API?
- What happens if we include the setting on a model that doesn't support it?
```ts
 * text/plain: (default) Text output.
 * application/json: JSON response in the candidates.
 */
responseMimeType?: "text/plain" | "application/json"
```
Good to see the comment!
I would suggest making the two strings a separate type and using that type here and down in GeminiGenerationConfig.
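A sketch of this suggestion might look like the following, using the GoogleAIResponseMimeType name that shows up later in the PR's imports (the exact interface shape is taken from the diff above):

```typescript
// Shared union type, declared once and reused in both interfaces.
export type GoogleAIResponseMimeType = "text/plain" | "application/json";

export interface GeminiGenerationConfig {
  temperature?: number;
  topP?: number;
  topK?: number;
  responseMimeType?: GoogleAIResponseMimeType;
}

const config: GeminiGenerationConfig = { responseMimeType: "application/json" };
console.log(config.responseMimeType);
```

This way, adding a future MIME type (or narrowing the set per model) only touches one declaration.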
Definitely, I will.
```diff
@@ -218,6 +227,7 @@ export interface GeminiGenerationConfig {
   temperature?: number;
   topP?: number;
   topK?: number;
+  responseMimeType?: "text/plain" | "application/json"
```
Another place to use the type I suggested above.
```ts
options?.responseMimeType ??
  params?.responseMimeType ??
  target?.responseMimeType ??
  "text/plain";
```
Rather than put the default here, we tend to have the defaults specified in the class that implements it (i.e. GoogleBaseLLM and ChatGoogleBase).
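The pattern being suggested might look like this sketch (the class name comes from the comment above, but the constructor shape is simplified for illustration):

```typescript
type GoogleAIResponseMimeType = "text/plain" | "application/json";

// Sketch: the default lives on the implementing class, not at the lookup site.
class GoogleBaseLLM {
  // Declared once, with its default, on the class.
  responseMimeType: GoogleAIResponseMimeType = "text/plain";

  constructor(fields?: { responseMimeType?: GoogleAIResponseMimeType }) {
    if (fields?.responseMimeType) {
      this.responseMimeType = fields.responseMimeType;
    }
  }
}

const llm = new GoogleBaseLLM();
console.log(llm.responseMimeType); // "text/plain"
```

The request-building code can then fall back to `this.responseMimeType` instead of a string literal, so the default is defined in exactly one place per class.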
```ts
const authOptions: MockClientAuthInfo = {
  record,
  projectId,
  resultFile: "llm-8-mock.json",
```
I don't see the llm-8-mock.json file checked in.
my bad, will push it asap
```diff
@@ -521,4 +521,30 @@ describe("Mock Google LLM", () => {
     expect(responseArray).toHaveLength(3);
     console.log("record", JSON.stringify(record, null, 2));
   });
+
+  test("8: streamGenerateContent - streaming - json responseMimeType", async () => {
```
Can we also get a test to make sure the request contains the responseMimeType setting in the generation config?
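A minimal sketch of such a check, with `buildGenerationConfig` standing in as a hypothetical placeholder for however the PR actually assembles the request body:

```typescript
// Hypothetical stand-in for the request-building logic under test.
function buildGenerationConfig(options?: { responseMimeType?: string }) {
  return {
    generationConfig: {
      responseMimeType: options?.responseMimeType ?? "text/plain",
    },
  };
}

// The assertion being asked for: the setting survives into the request body.
const body = buildGenerationConfig({ responseMimeType: "application/json" });
if (body.generationConfig.responseMimeType !== "application/json") {
  throw new Error("responseMimeType missing from generationConfig");
}
console.log("request includes responseMimeType");
```

In the actual test suite this would inspect the recorded mock request rather than calling a helper directly.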
just committed all the suggested changes :)
```diff
@@ -14,7 +14,7 @@ import {
   GoogleAIModelParams,
   GoogleAISafetySetting,
   GooglePlatformType,
-  GeminiContent,
+  GeminiContent, GoogleAIResponseMimeType,
```
Make sure you run `yarn format`.
@JackFener does your fix / commit mean that the table in Model Overview here should say that JSON mode IS supported? https://python.langchain.com/v0.2/docs/integrations/chat/google_generative_ai/
Fixes #5071
Added the option to choose responseMimeType when calling Google AI models.
Officially available since Gemini 1.5 Pro (but also tested successfully on Gemini 1.0 Pro), it lets you have the LLM's output formatted as a JSON object.
The model.stream() method still returns a string containing the JSON. I would like to add automatic parsing of the string, but Gemini does not always return correctly formatted JSON.
I also don't like having the .call() method return two different types.
So for now the user still has to JSON.parse(response) to get an object back.
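The caller-side pattern described above can be sketched as follows, with a hard-coded string standing in for the model's response content (the commented-out setup shows roughly how the new option would be passed; class name assumed):

```typescript
// const model = new ChatGoogleBase({ responseMimeType: "application/json" });
// const response = await model.invoke("List three colors as a JSON object.");

// Stand-in for response.content returned by a JSON-mode model.
const raw = '{"colors": ["red", "green", "blue"]}';

// Gemini does not guarantee well-formed JSON, so guard the parse.
let parsed: { colors: string[] } | null = null;
try {
  parsed = JSON.parse(raw);
} catch {
  parsed = null;
}
console.log(parsed?.colors.length); // 3
```

A null check after the parse covers the malformed-output case the PR description warns about.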
I would be happy to work on the other Google-related tasks as well.
Not very active on X, but you can find me at @Giacomo_fava or on LinkedIn at @giacomofavaai. Not here to self-promote, just linking my handle as the PR comment template suggests.