fix: security vulnerability in server #401
Conversation
…SON-RPC API

- Implemented a new method to retrieve usage data from the Codex app-server, providing real-time data and improving reliability.
- Updated the fetchUsageData method to prioritize app-server data over fallback methods.
- Added detailed logging for better traceability and debugging.
- Removed unused methods related to OpenAI API usage and Codex CLI requests, streamlining the service.

These changes enhance the functionality and robustness of the CodexUsageService, ensuring accurate usage statistics retrieval.
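A minimal sketch of how such a JSON-RPC request to a stdio-based app-server might be framed. The envelope shape follows JSON-RPC 2.0, but the method name `getUsage` and the newline-delimited framing are illustrative assumptions, not the actual Codex protocol:

```typescript
// Hypothetical JSON-RPC 2.0 request envelope for a stdio-based app-server.
// The method name "getUsage" is an assumption for illustration.
interface JsonRpcRequest {
  jsonrpc: '2.0';
  id: number;
  method: string;
  params?: Record<string, unknown>;
}

let nextId = 0;

function buildRequest(method: string, params?: Record<string, unknown>): JsonRpcRequest {
  return { jsonrpc: '2.0', id: ++nextId, method, ...(params ? { params } : {}) };
}

// Each request is serialized as one newline-delimited JSON line for stdin.
function serialize(req: JsonRpcRequest): string {
  return JSON.stringify(req) + '\n';
}

const req = buildRequest('getUsage');
console.log(serialize(req).trim()); // prints the single request line
```

Responses would come back on stdout the same way, matched to the request by `id`.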
- Deleted the AI profile management feature, including all associated views, hooks, and types.
- Updated settings and navigation components to remove references to AI profiles.
- Adjusted local storage and settings synchronization logic to reflect the removal of AI profiles.
- Cleaned up tests and utility functions that were dependent on the AI profile feature.

These changes streamline the application by eliminating unused functionality, improving maintainability and reducing complexity.
refactor: remove AI profile functionality and related components
…gement

- Bumped version numbers for @automaker/server and @automaker/ui to 0.9.0 in package-lock.json.
- Introduced CodexAppServerService and CodexModelCacheService to manage communication with the Codex CLI's app-server and cache model data.
- Updated CodexUsageService to utilize app-server for fetching usage data.
- Enhanced Codex routes to support fetching available models and integrated model caching.
- Improved UI components to dynamically load and display Codex models, including error handling and loading states.
- Added new API methods for fetching Codex models and integrated them into the app store for state management.

These changes improve the overall functionality and user experience of the Codex integration, ensuring efficient model management and data retrieval.
- Eliminated CodexCreditsSnapshot interface and related logic from CodexUsageService and UI components.
- Updated CodexUsageSection to display only plan type, removing credits information for a cleaner interface.
- Streamlined Codex usage formatting functions by removing unused credit formatting logic.

These changes simplify the Codex usage management by focusing on plan types, enhancing clarity and maintainability.
Move .codex/config.toml to .gitignore to prevent accidental commits of API keys. The file will remain local to each user's setup. Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
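The corresponding `.gitignore` entry would be a single path line (taken from the commit message); the comment is illustrative:

```
# Local Codex CLI config; may contain API keys
.codex/config.toml
```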
- Add error logging to CodexProvider auth check instead of silent failure
- Fix cachedAt timestamp to return actual cache time instead of request time
- Replace misleading hardcoded rate limit values (100) with sentinel value (-1)
- Fix unused parameter warning in codex routes

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
feat: improve codex plan and usage detection
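The sentinel-value change above can be sketched as follows; the constant name and formatting function are illustrative assumptions, not the project's actual code:

```typescript
// -1 is a sentinel meaning "the backend did not report this value",
// replacing a misleading hardcoded 100. Names here are illustrative.
const RATE_LIMIT_UNKNOWN = -1;

function formatRateLimit(remaining: number): string {
  return remaining === RATE_LIMIT_UNKNOWN ? 'unknown' : String(remaining);
}

console.log(formatRateLimit(42));                 // prints "42"
console.log(formatRateLimit(RATE_LIMIT_UNKNOWN)); // prints "unknown"
```

A sentinel keeps the API shape unchanged while letting the UI distinguish "no data" from a real quota of 100.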
Caution: Review failed. The pull request is closed.

📝 Walkthrough

This PR introduces Codex model caching and retrieval infrastructure while comprehensively removing the AI Profiles feature. New services handle app-server communication via JSON-RPC and disk-based model caching. Simultaneously, all AI profile UI components, settings, store management, and related keyboard shortcuts are eliminated across the codebase.

Changes

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant UI as UI / App
    participant Server as Automaker Server
    participant Cache as Model Cache
    participant AppServer as Codex App-Server
    UI->>Server: GET /codex/models
    activate Server
    Server->>Cache: getModels()
    activate Cache
    alt Cache is fresh
        Cache-->>Server: return cached models
    else Cache expired/missing
        Cache->>AppServer: spawn process & send JSON-RPC
        activate AppServer
        AppServer-->>Cache: models data
        deactivate AppServer
        Cache->>Cache: transform & persist to disk
        Cache-->>Server: return fresh models
    end
    deactivate Cache
    Server-->>UI: models with metadata
    deactivate Server
```
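The alt branch in the diagram (fresh vs. expired cache) can be sketched as a TTL check on the on-disk cache file; the TTL value and use of file mtime are assumptions, not the actual CodexModelCacheService logic:

```typescript
import * as fs from 'node:fs';

// Sketch of the cache-freshness decision from the diagram above.
// The 24-hour TTL and mtime-based check are illustrative assumptions.
const CACHE_TTL_MS = 24 * 60 * 60 * 1000;

function isCacheFresh(cachePath: string, now: number = Date.now()): boolean {
  if (!fs.existsSync(cachePath)) return false; // "Cache missing" branch
  const { mtimeMs } = fs.statSync(cachePath);
  return now - mtimeMs < CACHE_TTL_MS; // false -> "Cache expired" branch
}
```

When this returns false, the service would spawn the app-server, fetch fresh models over JSON-RPC, and rewrite the cache file before responding.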
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
📒 Files selected for processing (56)
Summary of Changes

This pull request delivers a critical security update by upgrading a core SDK dependency, addressing a high-severity vulnerability. Concurrently, it introduces a significant architectural overhaul to the Codex CLI integration, transitioning to a more robust and reliable JSON-RPC app-server communication layer for model and usage data. This refactoring enhances stability and enables dynamic model fetching. A major functional change is the complete removal of the 'AI Profiles' feature, simplifying the application's model management approach.
Code Review
This pull request is a significant refactoring that improves the architecture for Codex integration by using the app-server JSON-RPC API. The new services for handling the app server and caching models are well-designed. However, the PR description is misleading as it omits the complete removal of the "AI Profiles" feature, a major change. My review focuses on a functional regression where a hardcoded model is used instead of user-configured defaults, and a point of brittleness in the new model tier inference logic. Despite these issues, the overall direction is a solid improvement.
```typescript
model: 'opus',
thinkingLevel: 'none' as const,
```
When creating a feature from a GitHub issue, the model is now hardcoded to 'opus' and thinkingLevel to 'none'. This is a regression in functionality, as it previously used the user's default AI profile. To restore flexibility, consider using the default model configured for the featureGenerationModel phase from the application settings (phaseModels). This would respect the user's model preferences for feature generation tasks. You'll need to pull phaseModels from the useAppStore hook to implement this.
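The reviewer's suggestion could look roughly like this. The `PhaseModels` shape, the `featureGenerationModel` key, and the fallback value mirror the review comment but are assumptions about the app's actual settings types:

```typescript
// Hypothetical settings shape; field names follow the review comment
// but are assumptions, not the project's real types.
interface PhaseModelConfig {
  model: string;
  thinkingLevel: 'none' | 'low' | 'high';
}

interface PhaseModels {
  featureGenerationModel?: PhaseModelConfig;
}

// The previously hardcoded values become a last-resort fallback only.
const FALLBACK: PhaseModelConfig = { model: 'opus', thinkingLevel: 'none' };

// Prefer the user's configured model for the feature-generation phase.
function resolveFeatureModel(phaseModels: PhaseModels | undefined): PhaseModelConfig {
  return phaseModels?.featureGenerationModel ?? FALLBACK;
}
```

In the component, `phaseModels` would be pulled from the `useAppStore` hook and passed in, so issue-created features respect the user's configured defaults.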
```typescript
private inferTier(modelId: string): 'premium' | 'standard' | 'basic' {
  if (modelId.includes('max') || modelId.includes('gpt-5.2-codex')) {
    return 'premium';
  }
  if (modelId.includes('mini')) {
    return 'basic';
  }
  return 'standard';
}
```
The inferTier method relies on string matching (.includes()) on the model ID to determine the tier. This approach is brittle and may break if Codex model naming conventions change in the future. If the API doesn't provide this information directly, consider adding a comment to highlight this dependency on naming conventions for future maintenance.
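One way to reduce the brittleness the reviewer flags is an explicit per-model lookup table, keeping the substring heuristic only as a documented fallback. The table entries here are illustrative, not the real Codex model catalog:

```typescript
type Tier = 'premium' | 'standard' | 'basic';

// Explicit per-model mapping; entries are illustrative examples only.
const KNOWN_TIERS: Record<string, Tier> = {
  'gpt-5.2-codex': 'premium',
  'gpt-5-codex-mini': 'basic',
};

// NOTE: the substring heuristic below depends on Codex naming conventions
// and may break if they change; prefer extending the table above.
function inferTier(modelId: string): Tier {
  const known = KNOWN_TIERS[modelId];
  if (known) return known;
  if (modelId.includes('max') || modelId.includes('gpt-5.2-codex')) return 'premium';
  if (modelId.includes('mini')) return 'basic';
  return 'standard';
}
```

Unknown IDs still resolve via the heuristic, so new models degrade gracefully instead of failing.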
Summary by CodeRabbit
New Features
Removed Features
Chores