Improve roadmap visualization #1514
Conversation
Summary by CodeRabbit
Walkthrough

The changes refactor the About page's roadmap section to display recent project milestones dynamically, using data from the backend via GraphQL. Backend GraphQL nodes and queries were updated to expose milestone `body` and `progress` fields. Frontend components, types, queries, and tests were modified to consume and render this enhanced milestone data in a visually structured manner.
Assessment against linked issues
Suggested reviewers
📜 Recent review details

Configuration used: .coderabbit.yaml

📒 Files selected for processing (2)
🚧 Files skipped from review as they are similar to previous changes (2)
⏰ Context from checks skipped due to timeout of 90000ms (4)
Actionable comments posted: 1
🧹 Nitpick comments (1)
frontend/src/app/about/page.tsx (1)
162-194: Great implementation of dynamic milestone visualization, with some suggestions for improvement.

The refactor from static roadmap to dynamic milestone cards is well executed, with proper styling, progress visualization, and responsive design. However, consider these improvements:
Suggestions for enhancement:
- Empty state handling: Add a fallback when no milestones are available:
```diff
 <div className="grid gap-4">
+  {projectMetadata.recentMilestones.length === 0 ? (
+    <p className="text-gray-500 dark:text-gray-400">No recent milestones available.</p>
+  ) : (
   {projectMetadata.recentMilestones.map((milestone, index) => (
     // ... existing card content
   ))}
+  )}
 </div>
```
- Progress bar accessibility: Add ARIA attributes for screen readers:
```diff
 <div className="h-2 flex-1 rounded-full bg-gray-300 dark:bg-gray-600">
   <div
     className="h-full rounded-full bg-blue-400"
     style={{ width: `${milestone.progress}%` }}
+    role="progressbar"
+    aria-valuenow={milestone.progress}
+    aria-valuemin={0}
+    aria-valuemax={100}
+    aria-label={`${milestone.title} progress`}
   />
 </div>
```
- Mobile responsiveness: Consider adjusting the alternating layout for mobile:
```diff
 className={`flex items-center gap-4 overflow-hidden rounded-lg bg-gray-200 p-6 dark:bg-gray-700 ${
-  index % 2 === 0 ? '' : 'ml-8'
+  index % 2 === 0 ? '' : 'md:ml-8'
 }`}
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- backend/apps/github/graphql/nodes/milestone.py (1 hunks)
- backend/tests/apps/github/graphql/nodes/milestone_test.py (1 hunks)
- frontend/src/app/about/page.tsx (2 hunks)
- frontend/src/server/queries/projectQueries.ts (1 hunks)
- frontend/src/types/project.ts (1 hunks)
- frontend/src/utils/aboutData.ts (0 hunks)
💤 Files with no reviewable changes (1)
- frontend/src/utils/aboutData.ts
🧰 Additional context used
🧬 Code Graph Analysis (1)
backend/apps/github/graphql/nodes/milestone.py (1)
backend/apps/github/models/milestone.py (2)
- Meta (17-20)
- Milestone (10-107)
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: Run frontend e2e tests
- GitHub Check: Run backend tests
- GitHub Check: Run frontend unit tests
- GitHub Check: CodeQL (javascript-typescript)
🔇 Additional comments (7)
backend/tests/apps/github/graphql/nodes/milestone_test.py (1)
20-20: LGTM! Test correctly updated for new schema fields.

The test properly includes the new `body` and `progress` fields in the expected fields set, ensuring the GraphQL schema changes are correctly validated.

Also applies to: 25-25
frontend/src/types/project.ts (1)
39-39: LGTM! Type definitions correctly updated for new milestone fields.

The addition of `body: string` and `progress: number` properties properly aligns with the backend GraphQL schema changes and provides type safety for the enhanced milestone data.

Also applies to: 43-43
frontend/src/server/queries/projectQueries.ts (1)
106-111: LGTM! GraphQL query correctly structured for milestone data.

The `recentMilestones` query properly fetches the required fields (`title`, `url`, `body`, `progress`) needed for the dynamic roadmap visualization in the frontend.

frontend/src/app/about/page.tsx (1)
21-21: LGTM! Import correctly updated for dynamic data approach.

The removal of the static `roadmap` import aligns with the shift to using dynamic milestone data from the GraphQL query.

backend/apps/github/graphql/nodes/milestone.py (3)
13-13: LGTM: Progress field addition is appropriate.

The addition of the `progress` field as `graphene.Float()` is correctly implemented for representing milestone completion percentage.
21-21: LGTM: Body field exposure is correct.

Adding `"body"` to the Meta fields tuple properly exposes the milestone description field from the underlying model.
40-42: LGTM: Repository name resolver is correctly implemented.

The `resolve_repository_name` method logic is sound and follows the same pattern as the existing `resolve_organization_name` method.
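For context, a `progress` float like the one exposed here is typically derived from a milestone's issue counts. A minimal sketch, assuming progress is the ratio of closed issues to total issues (the actual model property may compute it differently):

```python
def milestone_progress(closed_issues: int, open_issues: int) -> float:
    """Completion percentage (0-100) for a milestone.

    Returns 0.0 when the milestone has no issues, avoiding division by zero.
    """
    total = closed_issues + open_issues
    if total == 0:
        return 0.0
    return round(closed_issues / total * 100, 2)
```

Whatever the backend formula is, the frontend only needs the invariant that the value is a float in the 0-100 range, since it is interpolated directly into a CSS `width` percentage.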
- Add total_members_count field to Workspace model and update Slack sync command
- Update pnpm-lock.yaml
- Update member details page title
- Update member details page layout
- Update CONTRIBUTING.md
- Update CONTRIBUTING.md formatting
- Add workspace stats to the stats node and query
- Update frontend
- Refactore conterData
- Update frontend tests
- Fix spelling
- Apply make-check
- Update frontend tests
- Use forEach instead of map
- Apply coderabbitai suggestion
- Update code

Co-authored-by: Arkadii Yakovets <arkadii.yakovets@owasp.org>
Caution
Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments. If you are seeing this consistently it is likely a permissions issue. Please check "Moderation" -> "Code review limits" under your organization settings.
Actionable comments posted: 5
🛑 Comments failed to post (5)
frontend/__tests__/e2e/pages/About.spec.ts (1)
73-77: 💡 Verification agent
🧩 Analysis chain
Enhanced test coverage looks good, but verify progress text format.
The updated test provides much better coverage by validating individual milestone properties instead of generic list presence. However, ensure that the progress text format `Progress: ${milestone.progress}` matches exactly what's rendered in the actual component.
🏁 Script executed:
```shell
#!/bin/bash
# Description: Verify the progress text format used in About page component
# Expected: Find the exact format used for displaying milestone progress

# Search for progress display patterns in the About page component
rg -A 3 -B 3 "Progress.*milestone\.progress|progress.*%" frontend/src/app/about/
```

Length of output: 1111
Include percent sign in progress assertion
The component renders the progress with a trailing “%” (e.g. `Progress: 50%`), so the test should assert against that exact string.

Locations to update:
- frontend/tests/e2e/pages/About.spec.ts (around lines 73–77)
Suggested diff:
```diff
-for (const milestone of mockAboutData.project.recentMilestones) {
-  await expect(page.getByText(milestone.title)).toBeVisible()
-  await expect(page.getByText(milestone.body)).toBeVisible()
-  await expect(page.getByText(`Progress: ${milestone.progress}`)).toBeVisible()
-}
+for (const milestone of mockAboutData.project.recentMilestones) {
+  await expect(page.getByText(milestone.title)).toBeVisible()
+  await expect(page.getByText(milestone.body)).toBeVisible()
+  await expect(page.getByText(`Progress: ${milestone.progress}%`)).toBeVisible()
+}
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```ts
for (const milestone of mockAboutData.project.recentMilestones) {
  await expect(page.getByText(milestone.title)).toBeVisible()
  await expect(page.getByText(milestone.body)).toBeVisible()
  await expect(page.getByText(`Progress: ${milestone.progress}%`)).toBeVisible()
}
```

🤖 Prompt for AI Agents
In frontend/__tests__/e2e/pages/About.spec.ts around lines 73 to 77, the test assertion for milestone progress text is missing the trailing percent sign (%) that the component actually renders. Update the assertion to expect the string formatted as `Progress: ${milestone.progress}%` to match the exact output in the UI.

frontend/__tests__/unit/pages/About.test.tsx (1)
263-274: ⚠️ Potential issue

Enhanced test coverage with potential format inconsistency.
The improved roadmap test provides excellent coverage of individual milestone properties. However, there's an inconsistency: this test expects `Progress: ${milestone.progress}%` (line 272) while the e2e test expects `Progress: ${milestone.progress}` without the percent sign.

Please verify and align the progress text format between unit and e2e tests:
- Unit test (line 272): `Progress: ${milestone.progress}%`
- E2E test (About.spec.ts line 76): `Progress: ${milestone.progress}`

One of these formats needs to be corrected to match the actual component rendering.
🤖 Prompt for AI Agents
In frontend/__tests__/unit/pages/About.test.tsx around lines 263 to 274, the unit test expects the progress text to include a percent sign (`Progress: ${milestone.progress}%`), but the e2e test expects it without the percent sign. Review the actual component rendering to determine the correct format and update either the unit test or the e2e test to ensure both tests expect the same progress text format, either with or without the percent sign, for consistency.

backend/apps/common/utils.py (1)
104-115: 🛠️ Refactor suggestion
Add input validation for edge cases.
The `round_down` function implementation is correct for positive integers, but consider adding validation for edge cases:
- Division by zero: when `base = 0`, the modulo operation will raise a `ZeroDivisionError`
- Negative values: the function behavior with negative inputs may be unexpected
Consider adding input validation:
```diff
 def round_down(value: int, base: int) -> int:
     """Round down the stats to the nearest base.

     Args:
         value: The value to round down.
         base: The base to round down to.

     Returns:
         int: The rounded down value.

+    Raises:
+        ValueError: If base is zero or negative.
     """
+    if base <= 0:
+        raise ValueError("Base must be a positive integer")
     return value - (value % base)
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
def round_down(value: int, base: int) -> int:
    """Round down the stats to the nearest base.

    Args:
        value: The value to round down.
        base: The base to round down to.

    Returns:
        int: The rounded down value.

    Raises:
        ValueError: If base is zero or negative.
    """
    if base <= 0:
        raise ValueError("Base must be a positive integer")
    return value - (value % base)
```

🤖 Prompt for AI Agents
In backend/apps/common/utils.py around lines 104 to 115, the round_down function lacks input validation for edge cases such as base being zero, which causes a ZeroDivisionError, and handling of negative values which may produce unexpected results. Add checks to raise appropriate exceptions if base is zero and define clear behavior or raise errors for negative value or base inputs to ensure the function handles these edge cases safely and predictably.

backend/apps/slack/models/workspace.py (1)
25-33: 🛠️ Refactor suggestion
Improve robustness of the default workspace method.
The current implementation has potential issues:
- No null safety: `objects.first()` returns `None` if no workspaces exist
- Unpredictable ordering: without `order_by()`, the "first" workspace is database-dependent

Consider this more robust implementation:
```diff
 @staticmethod
-def get_default_workspace() -> "Workspace":
+def get_default_workspace() -> "Workspace | None":
     """Get the default workspace.

     Returns:
-        Workspace: The default workspace.
+        Workspace | None: The default workspace, or None if no workspaces exist.
     """
-    return Workspace.objects.first()
+    return Workspace.objects.order_by('created_at').first()
```

Or if a workspace must always exist, add validation:
```diff
 @staticmethod
 def get_default_workspace() -> "Workspace":
     """Get the default workspace.

     Returns:
         Workspace: The default workspace.

+    Raises:
+        Workspace.DoesNotExist: If no workspaces exist.
     """
-    return Workspace.objects.first()
+    workspace = Workspace.objects.order_by('created_at').first()
+    if workspace is None:
+        raise Workspace.DoesNotExist("No workspaces found")
+    return workspace
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
@staticmethod
def get_default_workspace() -> "Workspace | None":
    """Get the default workspace.

    Returns:
        Workspace | None: The default workspace, or None if no workspaces exist.
    """
    return Workspace.objects.order_by('created_at').first()
```

🤖 Prompt for AI Agents
In backend/apps/slack/models/workspace.py around lines 25 to 33, the get_default_workspace method lacks null safety and does not specify ordering, which can lead to unpredictable results or None returns. To fix this, modify the method to include an explicit order_by clause to ensure consistent ordering and add a check to raise an exception or handle the case when no workspace exists, ensuring the method never returns None unexpectedly.

frontend/__tests__/unit/pages/Home.test.tsx (1)
275-281: ⚠️ Potential issue

Fix the async testing pattern: setTimeout inside waitFor is incorrect.
The current implementation uses `setTimeout` inside `waitFor`, which is problematic because `waitFor` expects a condition that will eventually resolve, but `setTimeout` doesn't return a promise that `waitFor` can wait for. This could lead to flaky tests or false positives.

Replace the setTimeout approach with a proper async testing pattern:
```diff
-await waitFor(() => {
-  headers.forEach((header) => expect(screen.getByText(header)).toBeInTheDocument())
-  // Wait for 2 seconds
-  setTimeout(() => {
-    Object.values(stats).forEach((value) =>
-      expect(screen.getByText(`${millify(value)}+`)).toBeInTheDocument()
-    )
-  }, 2000)
-})
+await waitFor(() => {
+  headers.forEach((header) => expect(screen.getByText(header)).toBeInTheDocument())
+})
+
+// Wait for animated counters to finish
+await waitFor(() => {
+  Object.values(stats).forEach((value) =>
+    expect(screen.getByText(`${millify(value)}+`)).toBeInTheDocument()
+  )
+}, { timeout: 3000 })
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```ts
await waitFor(() => {
  headers.forEach((header) =>
    expect(screen.getByText(header)).toBeInTheDocument()
  )
})

// Wait for animated counters to finish
await waitFor(() => {
  Object.values(stats).forEach((value) =>
    expect(screen.getByText(`${millify(value)}+`)).toBeInTheDocument()
  )
}, { timeout: 3000 })
```

🤖 Prompt for AI Agents
In frontend/__tests__/unit/pages/Home.test.tsx around lines 275 to 281, the test uses setTimeout inside waitFor, which is incorrect because waitFor expects a condition to be awaited, not a timer. Remove the setTimeout and instead use waitFor with an async callback that checks for the presence of the expected elements directly, allowing the test to properly wait for the UI updates without relying on arbitrary delays.
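The distinction the comment draws (waitFor re-runs its callback until it passes, while setTimeout merely schedules work that nothing awaits) can be sketched outside the Testing Library API. A minimal Python analogue of the retry loop, using a hypothetical `wait_for` helper rather than any real testing framework:

```python
import time

def wait_for(condition, timeout: float = 3.0, interval: float = 0.05) -> None:
    """Re-evaluate `condition` until it returns truthy, or raise after `timeout` seconds.

    Mirrors the retry semantics of Testing Library's waitFor: the callback is
    executed repeatedly and actually awaited, unlike a callback handed to
    setTimeout, which fires after the enclosing check has already passed.
    """
    deadline = time.monotonic() + timeout
    while True:
        if condition():
            return
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within timeout")
        time.sleep(interval)
```

The key property is that a failing condition delays the test rather than silently passing, which is exactly what the setTimeout version loses.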
kasya left a comment
Love this!
A couple of requests ⬇️ :
Actionable comments posted: 3
🧹 Nitpick comments (1)
frontend/src/app/about/page.tsx (1)
168-170: Add optional chaining for safer data access.

Consider adding optional chaining to handle potential missing milestone data gracefully.
```diff
-{[...projectMetadata.recentMilestones]
+{[...(projectMetadata.recentMilestones || [])]
   .sort((a, b) => (a.title > b.title ? 1 : -1))
   .map((milestone, index) => (
```

Also consider adding default values for milestone properties:
```diff
-<h3 className="mb-2 text-xl font-semibold text-blue-400">
-  {milestone.title}
+<h3 className="mb-2 text-xl font-semibold text-blue-400">
+  {milestone?.title || 'Untitled Milestone'}
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- frontend/src/app/about/page.tsx (3 hunks)
- frontend/src/server/queries/projectQueries.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- frontend/src/server/queries/projectQueries.ts
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: Run frontend unit tests
- GitHub Check: Run backend tests
- GitHub Check: Run frontend e2e tests
- GitHub Check: CodeQL (javascript-typescript)
🔇 Additional comments (3)
frontend/src/app/about/page.tsx (3)
4-6: LGTM! Icon imports align with requirements.

The new FontAwesome icons provide clear visual distinction for milestone status states and align with the past review feedback.
13-14: Component imports look good.

The FontAwesome and Tooltip imports are properly added to support the new milestone visualization.
167-211: Excellent implementation of past review feedback, with minor alignment consideration.

The refactored roadmap successfully addresses the previous review comments:
- ✅ Removed progress bars and percentages
- ✅ Added appropriate icons (user-gear for in-progress, check for completed, clock for not started)
- ✅ Implemented tooltips with clear status text
However, consider whether the current flex layout provides the consistent alignment mentioned in past review comments. The current implementation uses `flex items-center gap-4`, which should provide good alignment, but you may want to verify this addresses kasya's alignment concerns.
arkid15r left a comment
LGTM
Resolves #1361
I preferred to add the milestone description in this case. I also removed a lot of unnecessary frontend code.