
Gemini as model client with autogen 0.5 #6258

@sonnylaskar

Description


What happened?

Describe the bug
I am following https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/tutorial/models.html#gemini-experimental, but the documentation appears to be incomplete: it does not explain how to configure a Gemini model that the client does not recognize as an OpenAI model.

To Reproduce

from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(
    model="gemini-2.0-flash-lite",
    api_key="<>",
)

This raises:

..._model_info.py", line 314, in get_info
    raise ValueError("model_info is required when model name is not a valid OpenAI model")
ValueError: model_info is required when model name is not a valid OpenAI model

How do I get this to work with Gemini?
I am using v0.5.1.
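For reference, a sketch of a possible workaround, assuming (based on the error message) that the client accepts a model_info argument describing the model's capabilities. The capability flags below are assumptions and should be adjusted to match the actual model; the import is guarded since autogen-ext may not be installed:

```python
# Sketch: "gemini-2.0-flash-lite" is not a known OpenAI model name, so the
# client reportedly needs an explicit model_info describing its capabilities.
# All flag values here are assumptions, not confirmed documentation.
model_info = {
    "vision": True,
    "function_calling": True,
    "json_output": True,
    "family": "unknown",
    "structured_output": True,
}

try:
    from autogen_ext.models.openai import OpenAIChatCompletionClient

    model_client = OpenAIChatCompletionClient(
        model="gemini-2.0-flash-lite",
        api_key="<>",
        model_info=model_info,
    )
except ImportError:
    # autogen-ext is not installed in this environment; the dict above
    # still shows the shape of the configuration being suggested.
    model_client = None
```

If this works, the same model_info pattern should apply to any non-OpenAI model served through the OpenAI-compatible client.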

Which packages was the bug in?

Python Extensions (autogen-ext)

AutoGen library version.

Python 0.5.1

Other library version.

No response

Model used

gemini-2.0-flash-lite

Model provider

Google Gemini

Other model provider

No response

Python version

3.12

.NET version

None

Operating system

None
