Switch to Groq LLM and fix AI sidebar focus#1192
Conversation
Walkthrough

This update switches AI-related backend components from OpenAI's GPT-4o model to Groq's "meta-llama/llama-4-maverick-17b-128e-instruct" model. It also adjusts frontend logic for the AI sidebar and AI chat editor focus, and simplifies bulk selection and error handling in the mail UI.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant MailUI
    participant AISidebar
    participant AIChatEditor
    User->>MailUI: Opens AI Sidebar
    MailUI->>AISidebar: Set aiSidebarOpen = true
    AISidebar->>AIChatEditor: useEffect triggers focus if aiSidebarOpen == 'true'
    AIChatEditor->>User: Editor is focused
```

```mermaid
sequenceDiagram
    participant Backend
    participant GroqModel
    Backend->>GroqModel: Request completion (streamText or object generation)
    GroqModel-->>Backend: Respond with meta-llama/llama-4-maverick-17b-128e-instruct output
```
Actionable comments posted: 1
🧹 Nitpick comments (1)
apps/server/src/trpc/routes/ai/search.ts (1)
11-11: Model migration implemented correctly. The migration to Groq's model is clean and consistent in this file.
Consider extracting the model name to a shared constant to avoid duplication across files and make future model changes easier:

```diff
+import { GROQ_MODEL } from '../../../lib/constants';
- model: groq('meta-llama/llama-4-maverick-17b-128e-instruct'),
+ model: groq(GROQ_MODEL),
```

Create apps/server/src/lib/constants.ts:

```ts
export const GROQ_MODEL = 'meta-llama/llama-4-maverick-17b-128e-instruct';
```
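If the shared-constant route is taken, the call sites in search.ts and chat.ts would all import the same id. A minimal runnable sketch of the constant plumbing follows; the groq() provider call itself needs the @ai-sdk/groq package and an API key, so only the id handling is shown, and the helper name isKnownGroqModel is illustrative, not part of the PR.

```typescript
// Hypothetical shared constant, following the review's suggested
// apps/server/src/lib/constants.ts layout; names are illustrative.
const GROQ_MODEL = 'meta-llama/llama-4-maverick-17b-128e-instruct';

// Small guard a call site could use to catch model-id typos early
// (not part of the PR, purely illustrative):
function isKnownGroqModel(id: string): boolean {
  const known = new Set<string>([GROQ_MODEL]);
  return known.has(id);
}
```

At a call site this would replace the string literal, e.g. model: groq(GROQ_MODEL).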
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (6)
- apps/mail/components/create/ai-chat.tsx (3 hunks)
- apps/mail/components/mail/mail.tsx (0 hunks)
- apps/mail/components/ui/ai-sidebar.tsx (1 hunks)
- apps/server/package.json (1 hunks)
- apps/server/src/routes/chat.ts (2 hunks)
- apps/server/src/trpc/routes/ai/search.ts (1 hunks)
💤 Files with no reviewable changes (1)
- apps/mail/components/mail/mail.tsx
🔇 Additional comments (8)
apps/mail/components/ui/ai-sidebar.tsx (1)
312-312: LGTM! Clean refactor of the toggleOpen function. The simplified one-liner maintains the same functionality while being more concise and readable. The logic correctly toggles the sidebar state by checking if open !== 'true'.

apps/mail/components/create/ai-chat.tsx (4)
29-29: Good decision to prevent sidebar state reset on thread clicks. Commenting out the setAiSidebarOpen calls prevents the sidebar from closing unexpectedly when users click on threads, which improves the user experience by maintaining the sidebar state.

Also applies to: 34-34
278-278: LGTM! Clean separation of concerns. Adding a separate aiSidebarOpen state variable for reading the sidebar state is a good approach that separates the concerns of reading vs. setting the sidebar state.
302-306: Excellent UX improvement with automatic editor focus. The useEffect correctly focuses the editor when the sidebar opens, which improves the user experience by making the chat immediately ready for input. The dependency array is properly configured with both aiSidebarOpen and editor.
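A minimal sketch of the focus-on-open logic this comment describes. The React/Tiptap wiring is shown only as a comment because it depends on the component's editor instance; the testable piece is the pure predicate. Names (shouldFocusEditor, editorReady) are illustrative, not the exact ones in ai-chat.tsx.

```typescript
// Pure predicate: should the editor receive focus right now?
// The sidebar flag is stored as the string 'true'/'false' (e.g. via a
// query-param state hook), so compare against the string, not a boolean.
function shouldFocusEditor(aiSidebarOpen: string, editorReady: boolean): boolean {
  return aiSidebarOpen === 'true' && editorReady;
}

// Inside the component (illustrative):
// useEffect(() => {
//   if (shouldFocusEditor(aiSidebarOpen, Boolean(editor))) editor.commands.focus();
// }, [aiSidebarOpen, editor]);
```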
1-515: Note: AI summary mentions model switching, but not evident in this file. The AI summary mentions switching from OpenAI's GPT-4o model to Groq's "meta-llama/llama-4-maverick-17b-128e-instruct" model, but the changes in this file are focused on UI sidebar management and editor focus. The model switching likely occurs in other files not included in this review.
Likely an incorrect or invalid review comment.
🧰 Tools
🪛 Biome (1.9.4)
[error] 354-354: Change to an optional chain.
Unsafe fix: Change to an optional chain.
(lint/complexity/useOptionalChain)
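For context, this is the pattern Biome's lint/complexity/useOptionalChain rule flags and the rewrite it suggests; the object shape here is made up for the example and is not the code at line 354.

```typescript
// Illustrative shape, not the actual type in ai-chat.tsx:
type MaybeEditor = { commands?: { focus?: () => string } } | null;

// Flagged pattern: manual truthiness checks chained with &&
function focusVerbose(e: MaybeEditor): string | undefined {
  return e && e.commands && e.commands.focus ? e.commands.focus() : undefined;
}

// The suggested fix: optional chaining, identical behavior, less noise
function focusChained(e: MaybeEditor): string | undefined {
  return e?.commands?.focus?.();
}
```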
apps/server/package.json (1)
23-23: Details
❓ Verification inconclusive
Dependency addition looks good.
The addition of the Groq SDK dependency aligns with the migration from OpenAI to Groq models in the codebase.
Verify that this package version is secure and compatible:
🏁 Script executed:
```shell
#!/bin/bash
# Check for security advisories and latest version info
npm audit @ai-sdk/groq@1.2.9
npm view @ai-sdk/groq versions --json | jq '.[-5:]'
```

Length of output: 541
🏁 Script executed:
```shell
#!/bin/bash
# Generate a lockfile and audit the @ai-sdk/groq dependency
npm install --package-lock-only
npm audit --json | jq '.vulnerabilities["@ai-sdk/groq"]'
```

Length of output: 1112
🏁 Script executed:
```shell
#!/bin/bash
# Verify that version 1.2.9 exists and inspect engine compatibility
npm view @ai-sdk/groq@1.2.9 version engines --json
```

Length of output: 116
Verify Node requirement and security for @ai-sdk/groq@1.2.9
- @ai-sdk/groq@1.2.9 requires Node >= 18 – please confirm your runtime meets this.
- We couldn’t run npm audit in this workspace setup. Before merging, run a full audit in your local environment with a proper lockfile, or review the npm advisory database for any known vulnerabilities in 1.2.9.

apps/server/src/routes/chat.ts (1)
27-27: Import addition looks good. The Groq import is properly added to support the model migration.
apps/server/src/trpc/routes/ai/search.ts (1)
3-4: Import changes look good. The import updates correctly reflect the migration from OpenAI to Groq SDK.
```diff
  const result = streamText({
- model: openai('gpt-4o'),
+ model: groq('meta-llama/llama-4-maverick-17b-128e-instruct'),
```
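Reduced to its essence, the change above swaps the model id passed to streamText. A runnable sketch of that swap follows; the actual streamText/groq calls need the AI SDK and an API key, so they appear only as a comment, and the helper name modelIdFor is hypothetical.

```typescript
// Before/after of the model swap in chat.ts, reduced to the id choice:
function modelIdFor(provider: 'openai' | 'groq'): string {
  return provider === 'groq'
    ? 'meta-llama/llama-4-maverick-17b-128e-instruct' // new model in this PR
    : 'gpt-4o'; // previous model
}

// Call site (illustrative):
// const result = streamText({ model: groq(modelIdFor('groq')), messages });
```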
💡 Verification agent
🧩 Analysis chain
Model migration implemented correctly, but incomplete.
The migration to Groq's model is implemented correctly in the ZeroAgent class. However, this creates an inconsistent state in the codebase.
This file still has OpenAI model usage at lines 131 and 445 in the buildGmailSearchQuery implementations. Consider:
- Complete the migration by updating all remaining OpenAI usages to Groq
- Document the rationale if different models should be used for different operations
- Extract the model name to a constant to avoid repetition
```diff
+const GROQ_MODEL = 'meta-llama/llama-4-maverick-17b-128e-instruct';
- model: groq('meta-llama/llama-4-maverick-17b-128e-instruct'),
+ model: groq(GROQ_MODEL),
```

Verify all OpenAI model usages in the codebase to ensure consistent migration:
🏁 Script executed:
```shell
#!/bin/bash
# Search for remaining OpenAI model usage patterns
rg -A 2 -B 2 "openai\('.*'\)" apps/server/src/
rg -A 2 -B 2 "gpt-4o" apps/server/src/
```

Length of output: 2675
Complete Groq Model Migration Across the Codebase
The migration to Groq’s model in ZeroAgent is correct, but there are still OpenAI calls that need updating:
• apps/server/src/trpc/routes/ai/compose.ts
– model: openai('gpt-4o-mini')
– model: openai('gpt-4o')
• apps/server/src/routes/chat.ts
– model: openai('gpt-4o') in both generateText and generateObject calls
Recommendations:
- Extract the Groq model name to a constant: const GROQ_MODEL = 'meta-llama/llama-4-maverick-17b-128e-instruct';
- Replace all openai('…') usages with groq(GROQ_MODEL) (or a variant; if you intentionally need different models, document that).
- Add/update a comment in each file explaining why Groq is used instead of OpenAI.
Example diff (adapt for each occurrence):
```diff
+ const GROQ_MODEL = 'meta-llama/llama-4-maverick-17b-128e-instruct';
- model: openai('gpt-4o'),
+ model: groq(GROQ_MODEL),
```

🤖 Prompt for AI Agents
In apps/server/src/routes/chat.ts at line 61 and also at lines 131 and 445, the
code still uses OpenAI models in the buildGmailSearchQuery implementations,
causing inconsistency after migrating to Groq models. To fix this, extract the
Groq model name 'meta-llama/llama-4-maverick-17b-128e-instruct' into a constant
at the top of the file, replace all openai('gpt-4o') usages with
groq(GROQ_MODEL), and add a comment explaining why Groq is used instead of
OpenAI for clarity and consistency.

Switch to Groq LLM and Improve AI Sidebar Focus
This PR switches the AI model from OpenAI's GPT-4o to Groq's Llama-4-Maverick for both the ZeroAgent and search query generation. It also improves the AI sidebar UX by automatically focusing the editor when the sidebar is opened.
Type of Change
Areas Affected
Testing Done
Checklist
Additional Notes
The PR includes:
Summary by CodeRabbit
New Features
Refactor
Chores