
Commit 08f252c

Gemini 2.0 (#1773)
* Update llms.mdx (Gemini 2.0)
  - Add Gemini 2.0 Flash to the Gemini table.
  - Add links to the 2 hosting paths for Gemini in the Tip.
  - Change to lower-case model slugs instead of names, for user convenience.
  - Add https://artificialanalysis.ai/ as an alternate leaderboard.
  - Move Gemma to the "other" tab.

* Update llm.py (Gemini 2.0)
  - Add setting for the Gemini 2.0 context window to llm.py.

---------

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
1 parent 6ff669e commit 08f252c

File tree

2 files changed: +12 -6 lines changed

docs/concepts/llms.mdx (+11 -6)
@@ -29,7 +29,7 @@ Large Language Models (LLMs) are the core intelligence behind CrewAI agents. The
 
 ## Available Models and Their Capabilities
 
-Here's a detailed breakdown of supported models and their capabilities, you can compare performance at [lmarena.ai](https://lmarena.ai/):
+Here's a detailed breakdown of supported models and their capabilities, you can compare performance at [lmarena.ai](https://lmarena.ai/?leaderboard) and [artificialanalysis.ai](https://artificialanalysis.ai/):
 
 <Tabs>
 <Tab title="OpenAI">
@@ -121,12 +121,18 @@ Here's a detailed breakdown of supported models and their capabilities, you can
 <Tab title="Gemini">
 | Model | Context Window | Best For |
 |-------|---------------|-----------|
-| Gemini 1.5 Flash | 1M tokens | Balanced multimodal model, good for most tasks |
-| Gemini 1.5 Flash 8B | 1M tokens | Fastest, most cost-efficient, good for high-frequency tasks |
-| Gemini 1.5 Pro | 2M tokens | Best performing, wide variety of reasoning tasks including logical reasoning, coding, and creative collaboration |
+| gemini-2.0-flash-exp | 1M tokens | Higher quality at faster speed, multimodal model, good for most tasks |
+| gemini-1.5-flash | 1M tokens | Balanced multimodal model, good for most tasks |
+| gemini-1.5-flash-8B | 1M tokens | Fastest, most cost-efficient, good for high-frequency tasks |
+| gemini-1.5-pro | 2M tokens | Best performing, wide variety of reasoning tasks including logical reasoning, coding, and creative collaboration |
 
 <Tip>
 Google's Gemini models are all multimodal, supporting audio, images, video and text, supporting context caching, json schema, function calling, etc.
+
+These models are available via API_KEY from
+[The Gemini API](https://ai.google.dev/gemini-api/docs) and also from
+[Google Cloud Vertex](https://cloud.google.com/vertex-ai/generative-ai/docs/migrate/migrate-google-ai) as part of the
+[Model Garden](https://cloud.google.com/vertex-ai/generative-ai/docs/model-garden/explore-models).
 </Tip>
 </Tab>
 <Tab title="Groq">
@@ -135,7 +141,6 @@ Here's a detailed breakdown of supported models and their capabilities, you can
 | Llama 3.1 70B/8B | 131,072 tokens | High-performance, large context tasks |
 | Llama 3.2 Series | 8,192 tokens | General-purpose tasks |
 | Mixtral 8x7B | 32,768 tokens | Balanced performance and context |
-| Gemma Series | 8,192 tokens | Efficient, smaller-scale tasks |
 
 <Tip>
 Groq is known for its fast inference speeds, making it suitable for real-time applications.
@@ -146,7 +151,7 @@ Here's a detailed breakdown of supported models and their capabilities, you can
 |----------|---------------|--------------|
 | Deepseek Chat | 128,000 tokens | Specialized in technical discussions |
 | Claude 3 | Up to 200K tokens | Strong reasoning, code understanding |
-| Gemini | Varies by model | Multimodal capabilities |
+| Gemma Series | 8,192 tokens | Efficient, smaller-scale tasks |
 
 <Info>
 Provider selection should consider factors like:

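As a quick companion to the doc change above, here is a minimal usage sketch (not part of the commit) showing one way to point a CrewAI agent at a Gemini slug from the updated table. It assumes the Gemini API hosting path (an API key from the Gemini API rather than Vertex) and that CrewAI accepts a provider-prefixed slug such as `gemini/gemini-2.0-flash-exp`; the variable names and agent fields are illustrative only.

```python
# Minimal usage sketch (not from the commit): wiring a Gemini slug from the
# table above into a CrewAI agent. Assumes the Gemini API key path (not Vertex)
# and that a provider-prefixed slug like "gemini/gemini-2.0-flash-exp" is accepted.
import os

from crewai import LLM, Agent

os.environ["GEMINI_API_KEY"] = "<your-gemini-api-key>"  # from https://ai.google.dev/gemini-api/docs

gemini_llm = LLM(
    model="gemini/gemini-2.0-flash-exp",  # lower-case slug, as in the updated table
    temperature=0.2,
)

researcher = Agent(
    role="Researcher",
    goal="Summarize long technical documents",
    backstory="Relies on the 1M-token context window for large inputs.",
    llm=gemini_llm,
)
```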
src/crewai/llm.py (+1 -0)
@@ -44,6 +44,7 @@ def flush(self):
     "o1-preview": 128000,
     "o1-mini": 128000,
     # gemini
+    "gemini-2.0-flash": 1048576,
     "gemini-1.5-pro": 2097152,
     "gemini-1.5-flash": 1048576,
     "gemini-1.5-flash-8b": 1048576,

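The new value 1048576 is 1024 * 1024 tokens, i.e. the "1M tokens" window listed in the docs table (and 2097152 is the 2M window for gemini-1.5-pro). Below is a hedged sketch of how such a mapping can be consulted; `LLM_CONTEXT_WINDOW_SIZES` mirrors the dict being edited, while `get_context_window_size` and `DEFAULT_CONTEXT_WINDOW` are hypothetical helper names, not necessarily what crewai's llm.py actually defines.

```python
# Sketch: consulting a context-window table like the one edited in llm.py.
# LLM_CONTEXT_WINDOW_SIZES mirrors the dict above; get_context_window_size and
# DEFAULT_CONTEXT_WINDOW are illustrative names, not guaranteed crewai internals.
LLM_CONTEXT_WINDOW_SIZES = {
    "gemini-2.0-flash": 1048576,   # 1024 * 1024 tokens ("1M tokens" in the docs table)
    "gemini-1.5-pro": 2097152,     # 2M tokens
    "gemini-1.5-flash": 1048576,
    "gemini-1.5-flash-8b": 1048576,
}

DEFAULT_CONTEXT_WINDOW = 8192  # conservative fallback for unknown models


def get_context_window_size(model: str) -> int:
    """Return the window for the first table key found in the model string."""
    for key, size in LLM_CONTEXT_WINDOW_SIZES.items():
        if key in model:  # also matches provider-prefixed slugs like "gemini/gemini-2.0-flash"
            return size
    return DEFAULT_CONTEXT_WINDOW


if __name__ == "__main__":
    print(get_context_window_size("gemini/gemini-2.0-flash"))  # 1048576
```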