Conversation
Summary of Changes

Hello @heidi-dang, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly expands the application's capabilities by integrating Google's Gemini API, offering users a choice of large language models for assistant chat. The changes involve implementing a new Python client for Gemini, adapting the chat session to conditionally use either Claude or Gemini, and updating the user interface and documentation to reflect this new option. Concurrently, the PR introduces robust deployment tooling, including Traefik integration and a simplified deployment script, which enhances the overall developer experience and operational stability.
Code Review
This pull request introduces Gemini API integration for the assistant chat feature, a valuable addition. The implementation includes a new Gemini client, backend logic to switch between Claude and Gemini providers, and corresponding UI updates. While the overall structure is good, I've identified a couple of critical issues. The default Gemini base URL is incorrect in both the documentation and the client code, which will prevent the feature from working out-of-the-box. Additionally, the client incorrectly processes the streaming response from the API. I've provided specific suggestions to address these problems. I also noted a minor opportunity to simplify the code by removing a redundant parameter. The inclusion of a development roadmap and deployment scripts is a great enhancement for the project.
```markdown
### Optional: Gemini API (assistant chat only)
- `GEMINI_API_KEY` (required)
- `GEMINI_MODEL` (optional, default `gemini-1.5-flash`)
- `GEMINI_BASE_URL` (optional, default `https://generativelanguage.googleapis.com/v1beta/openai`)
```
The default GEMINI_BASE_URL is incorrect. According to Google's documentation for the OpenAI compatibility layer, the base URL should be https://generativelanguage.googleapis.com/v1beta. The /openai suffix is not part of the official endpoint and will cause connection errors with the default configuration.
```diff
- - `GEMINI_BASE_URL` (optional, default `https://generativelanguage.googleapis.com/v1beta/openai`)
+ - `GEMINI_BASE_URL` (optional, default `https://generativelanguage.googleapis.com/v1beta`)
```
```python
from openai import AsyncOpenAI

# Default OpenAI-compatible base URL for Gemini
DEFAULT_GEMINI_BASE_URL = "https://generativelanguage.googleapis.com/v1beta/openai"
```
The DEFAULT_GEMINI_BASE_URL is incorrect. The correct base URL for the Gemini API's OpenAI compatibility layer is https://generativelanguage.googleapis.com/v1beta. Using the current URL will result in failed API requests.
| DEFAULT_GEMINI_BASE_URL = "https://generativelanguage.googleapis.com/v1beta/openai" | |
| DEFAULT_GEMINI_BASE_URL = "https://generativelanguage.googleapis.com/v1beta" |
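For context, here is a minimal sketch of how a client would typically be pointed at the OpenAI-compatible endpoint. The fallback wiring below is an assumption for illustration, not the PR's actual code, and it uses the default this review suggests:

```python
import os
from openai import AsyncOpenAI

# Default suggested in this review; the PR's quoted code appends /openai.
DEFAULT_GEMINI_BASE_URL = "https://generativelanguage.googleapis.com/v1beta"

# Assumed wiring: GEMINI_API_KEY is required, GEMINI_BASE_URL is optional
# and falls back to the default above.
client = AsyncOpenAI(
    api_key=os.environ["GEMINI_API_KEY"],
    base_url=os.getenv("GEMINI_BASE_URL", DEFAULT_GEMINI_BASE_URL),
)
```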
```python
if delta and delta.content:
    # delta.content is a list of content parts
    for part in delta.content:
        text = getattr(part, "text", None) or part.get("text") if isinstance(part, dict) else None
        if text:
            yield text
```
The logic for processing the streaming response is incorrect. For a streaming chat completion with the openai library, delta.content is a string containing the next chunk of text, not a list of parts. The current implementation iterates over the characters of this string but fails to extract any text, which will result in no output from the stream. This should be simplified to directly yield the content.
```python
if delta and delta.content:
    # The content from the streaming delta is a string, not a list of parts.
    yield delta.content
```
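As a point of reference, this is roughly how streaming deltas arrive from the openai client; the credentials, model name, and message below are placeholders, not values from this PR:

```python
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI(api_key="...", base_url="...")  # placeholder credentials

async def demo() -> None:
    stream = await client.chat.completions.create(
        model="gemini-1.5-flash",  # placeholder model name
        messages=[{"role": "user", "content": "Hello"}],
        stream=True,
    )
    async for chunk in stream:
        delta = chunk.choices[0].delta
        # Each chunk's delta.content is an incremental string (or None),
        # not a list of parts, so it can be yielded or printed directly.
        if delta and delta.content:
            print(delta.content, end="")

asyncio.run(demo())
```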
```python
async for text in stream_chat(
    message,
    system_prompt=self._system_prompt,
    model=os.getenv("GEMINI_MODEL"),
):
```
The model parameter passed to stream_chat is redundant. The stream_chat function already handles retrieving the model from the GEMINI_MODEL environment variable and applying a default. Removing this parameter simplifies the code and avoids duplicating the model resolution logic.
```python
async for text in stream_chat(
    message,
    system_prompt=self._system_prompt,
):
```
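To illustrate the fallback the comment describes (explicit argument, then the `GEMINI_MODEL` environment variable, then a default), here is a small sketch; the function and constant names are hypothetical:

```python
import os

DEFAULT_GEMINI_MODEL = "gemini-1.5-flash"  # hypothetical constant name

def resolve_model(model: str | None = None) -> str:
    # Explicit argument wins, then the env var, then the documented default.
    return model or os.getenv("GEMINI_MODEL") or DEFAULT_GEMINI_MODEL
```

Since `stream_chat` already applies this resolution internally, passing `model=os.getenv("GEMINI_MODEL")` at the call site only duplicates the middle step.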
## Summary
- Add Gemini API integration and related UI changes
- Merge main into this branch to include stable workflows and repo guards

## Local checks
- `python3 -m compileall -q .`
- `npm -C ui run lint`
- `npm -C ui run build`