
Conversation


@ahmedxgouda ahmedxgouda commented May 24, 2025

Resolves #1361

I preferred to add the milestone description in this case. I also removed a lot of unnecessary frontend code.


coderabbitai bot commented May 24, 2025

Summary by CodeRabbit

  • New Features

    • The About page now displays a dynamic roadmap section with milestone cards, including milestone titles, descriptions, clickable links, and visual progress indicators.
    • Milestone progress is shown as a percentage and represented with icons and tooltips for status (Completed, In Progress, Not Started).
  • Bug Fixes

    • Roadmap tests updated to verify the presence of milestone titles and descriptions, ensuring accurate display.
  • Refactor

    • Simplified data flow in the Members page by removing unnecessary formatting of issues, pull requests, releases, and milestones.
  • Chores

    • Updated test data and queries to support recent milestone fields and progress indicators.
      """

Summary by CodeRabbit

  • New Features

    • The About page now displays recent project milestones as detailed cards, including milestone titles, descriptions, and progress bars.
    • Milestone progress is shown as a percentage, providing a clearer view of project advancement.
  • Improvements

    • Milestone information on member detail pages now includes descriptive text and progress.
    • Enhanced test coverage and mock data to reflect the new milestone fields and presentation.
  • Bug Fixes

    • Updated tests to accurately verify the rendering of milestone details on the About page.

Walkthrough

The changes refactor the About page's roadmap section to display recent project milestones dynamically, using data from the backend via GraphQL. Backend GraphQL nodes and queries were updated to expose milestone body and progress fields. Frontend components, types, queries, and tests were modified to consume and render this enhanced milestone data in a visually structured manner.

Changes

  • backend/apps/github/graphql/nodes/milestone.py, backend/tests/apps/github/graphql/nodes/milestone_test.py — Added progress (Float) and body fields to MilestoneNode; implemented a resolver for progress; updated tests to expect the new fields.
  • frontend/src/app/about/page.tsx, frontend/src/utils/aboutData.ts — Refactored the roadmap section to render dynamic milestone cards from backend data; removed the static roadmap array and its usage.
  • frontend/src/server/queries/projectQueries.ts, frontend/src/types/project.ts — Extended the GraphQL query and types to include recentMilestones with title, url, body, and progress.
  • frontend/__tests__/unit/data/mockAboutData.ts, frontend/__tests__/unit/pages/About.test.tsx, frontend/__tests__/e2e/pages/About.spec.ts — Updated mocks and tests to use and verify the new milestone data structure and rendering; switched tests from static to data-driven checks.
  • frontend/src/app/members/[memberKey]/page.tsx — Removed useMemo formatting; passed raw milestone data directly to components without author enrichment.

Assessment against linked issues

  • Refactor the About page roadmap section for better structure and visual appeal using dynamic milestone data (#1361)
  • Extend the backend to expose milestone body and progress for use in the roadmap component (#1361)
  • Update frontend types, queries, and tests to handle the new milestone fields and rendering (#1361)

Suggested reviewers

  • kasya
  • arkid15r



📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0ca30dc and 61b0282.

📒 Files selected for processing (2)
  • frontend/__tests__/unit/pages/About.test.tsx (20 hunks)
  • frontend/src/app/about/page.tsx (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • frontend/src/app/about/page.tsx
  • frontend/__tests__/unit/pages/About.test.tsx
⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: Run frontend e2e tests
  • GitHub Check: Run frontend unit tests
  • GitHub Check: Run backend tests
  • GitHub Check: CodeQL (javascript-typescript)



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
frontend/src/app/about/page.tsx (1)

162-194: Great implementation of dynamic milestone visualization with some suggestions for improvement.

The refactor from static roadmap to dynamic milestone cards is well-executed with proper styling, progress visualization, and responsive design. However, consider these improvements:

Suggestions for enhancement:

  1. Empty state handling: Add a fallback when no milestones are available:
<div className="grid gap-4">
+ {projectMetadata.recentMilestones.length === 0 ? (
+   <p className="text-gray-500 dark:text-gray-400">No recent milestones available.</p>
+ ) : (
-   {projectMetadata.recentMilestones.map((milestone, index) => (
-     // ... existing card content
-   ))}
+   projectMetadata.recentMilestones.map((milestone, index) => (
+     // ... existing card content
+   ))
+ )}
</div>
  2. Progress bar accessibility: Add ARIA attributes for screen readers:
<div className="h-2 flex-1 rounded-full bg-gray-300 dark:bg-gray-600">
  <div
    className="h-full rounded-full bg-blue-400"
    style={{ width: `${milestone.progress}%` }}
+   role="progressbar"
+   aria-valuenow={milestone.progress}
+   aria-valuemin={0}
+   aria-valuemax={100}
+   aria-label={`${milestone.title} progress`}
  />
</div>
  3. Mobile responsiveness: Consider adjusting the alternating layout for mobile:
className={`flex items-center gap-4 overflow-hidden rounded-lg bg-gray-200 p-6 dark:bg-gray-700 ${
- index % 2 === 0 ? '' : 'ml-8'
+ index % 2 === 0 ? '' : 'md:ml-8'
}`}
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 017fbaa and 0f7e79e.

📒 Files selected for processing (6)
  • backend/apps/github/graphql/nodes/milestone.py (1 hunks)
  • backend/tests/apps/github/graphql/nodes/milestone_test.py (1 hunks)
  • frontend/src/app/about/page.tsx (2 hunks)
  • frontend/src/server/queries/projectQueries.ts (1 hunks)
  • frontend/src/types/project.ts (1 hunks)
  • frontend/src/utils/aboutData.ts (0 hunks)
💤 Files with no reviewable changes (1)
  • frontend/src/utils/aboutData.ts
🧰 Additional context used
🧬 Code Graph Analysis (1)
backend/apps/github/graphql/nodes/milestone.py (1)
backend/apps/github/models/milestone.py (2)
  • Meta (17-20)
  • Milestone (10-107)
⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: Run frontend e2e tests
  • GitHub Check: Run backend tests
  • GitHub Check: Run frontend unit tests
  • GitHub Check: CodeQL (javascript-typescript)
🔇 Additional comments (7)
backend/tests/apps/github/graphql/nodes/milestone_test.py (1)

20-20: LGTM! Test correctly updated for new schema fields.

The test properly includes the new body and progress fields in the expected fields set, ensuring the GraphQL schema changes are correctly validated.

Also applies to: 25-25

frontend/src/types/project.ts (1)

39-39: LGTM! Type definitions correctly updated for new milestone fields.

The addition of body: string and progress: number properties properly aligns with the backend GraphQL schema changes and provides type safety for the enhanced milestone data.

Also applies to: 43-43

frontend/src/server/queries/projectQueries.ts (1)

106-111: LGTM! GraphQL query correctly structured for milestone data.

The recentMilestones query properly fetches the required fields (title, url, body, progress) needed for the dynamic roadmap visualization in the frontend.

frontend/src/app/about/page.tsx (1)

21-21: LGTM! Import correctly updated for dynamic data approach.

The removal of the static roadmap import aligns with the shift to using dynamic milestone data from the GraphQL query.

backend/apps/github/graphql/nodes/milestone.py (3)

13-13: LGTM: Progress field addition is appropriate.

The addition of the progress field as graphene.Float() is correctly implemented for representing milestone completion percentage.


21-21: LGTM: Body field exposure is correct.

Adding "body" to the Meta fields tuple properly exposes the milestone description field from the underlying model.


40-42: LGTM: Repository name resolver is correctly implemented.

The resolve_repository_name method logic is sound and follows the same pattern as the existing resolve_organization_name method.
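
As background on the `progress` field itself: a resolver of this shape typically reports the share of closed issues over all issues on the milestone. The following is a minimal standalone sketch of that calculation; the parameter names `open_issues_count` and `closed_issues_count` are assumptions for illustration, not the project's actual model attributes.

```python
def milestone_progress(open_issues_count: int, closed_issues_count: int) -> float:
    """Return milestone completion as a percentage of closed issues over all issues."""
    total = open_issues_count + closed_issues_count
    if total == 0:
        # No issues attached yet: treat the milestone as not started.
        return 0.0
    return round(closed_issues_count / total * 100, 2)

print(milestone_progress(5, 15))  # → 75.0
print(milestone_progress(0, 0))   # → 0.0
```

Guarding the zero-issue case matters here because a freshly created milestone would otherwise divide by zero in the resolver.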

ahmedxgouda and others added 2 commits May 24, 2025 13:04
* Add total_members_count field to Workspace model and update Slack sync command

* Update pnpm-lock.yaml

* Update member details page title

* Update member details page layout

* Update CONTRIBUTING.md

* Update CONTRIBUTING.md formatting

* Add workspace stats to the stats node and query

* Update frontend

* Refactor counterData

* Update frontend tests

* Fix spelling

* Apply make-check

* Update frontend tests

* Use forEach instead of map

* Apply coderabbitai suggestion

* Update code

---------

Co-authored-by: Arkadii Yakovets <arkadii.yakovets@owasp.org>
@github-actions github-actions bot added backend and removed nestbot labels May 24, 2025

@coderabbitai coderabbitai bot left a comment


Caution

Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments. If you are seeing this consistently it is likely a permissions issue. Please check "Moderation" -> "Code review limits" under your organization settings.

Actionable comments posted: 5

🛑 Comments failed to post (5)
frontend/__tests__/e2e/pages/About.spec.ts (1)

73-77: 💡 Verification agent

🧩 Analysis chain

Enhanced test coverage looks good, but verify progress text format.

The updated test provides much better coverage by validating individual milestone properties instead of generic list presence. However, ensure that the progress text format Progress: ${milestone.progress} matches exactly what's rendered in the actual component.


🏁 Script executed:

#!/bin/bash
# Description: Verify the progress text format used in About page component
# Expected: Find the exact format used for displaying milestone progress

# Search for progress display patterns in the About page component
rg -A 3 -B 3 "Progress.*milestone\.progress|progress.*%" frontend/src/app/about/

Length of output: 1111


Include percent sign in progress assertion

The component renders the progress with a trailing “%” (e.g. Progress: 50%), so the test should assert against that exact string.

Locations to update:

  • frontend/__tests__/e2e/pages/About.spec.ts (around lines 73–77)

Suggested diff:

-   for (const milestone of mockAboutData.project.recentMilestones) {
-     await expect(page.getByText(milestone.title)).toBeVisible()
-     await expect(page.getByText(milestone.body)).toBeVisible()
-     await expect(page.getByText(`Progress: ${milestone.progress}`)).toBeVisible()
-   }
+   for (const milestone of mockAboutData.project.recentMilestones) {
+     await expect(page.getByText(milestone.title)).toBeVisible()
+     await expect(page.getByText(milestone.body)).toBeVisible()
+     await expect(page.getByText(`Progress: ${milestone.progress}%`)).toBeVisible()
+   }
🤖 Prompt for AI Agents
In frontend/__tests__/e2e/pages/About.spec.ts around lines 73 to 77, the test
assertion for milestone progress text is missing the trailing percent sign (%)
that the component actually renders. Update the assertion to expect the string
formatted as `Progress: ${milestone.progress}%` to match the exact output in the
UI.
frontend/__tests__/unit/pages/About.test.tsx (1)

263-274: ⚠️ Potential issue

Enhanced test coverage with potential format inconsistency.

The improved roadmap test provides excellent coverage of individual milestone properties. However, there's an inconsistency: this test expects Progress: ${milestone.progress}% (line 272) while the e2e test expects Progress: ${milestone.progress} without the percent sign.

Please verify and align the progress text format between unit and e2e tests:

  • Unit test (line 272): Progress: ${milestone.progress}%
  • E2E test (About.spec.ts line 76): Progress: ${milestone.progress}

One of these formats needs to be corrected to match the actual component rendering.

🤖 Prompt for AI Agents
In frontend/__tests__/unit/pages/About.test.tsx around lines 263 to 274, the
unit test expects the progress text to include a percent sign (`Progress:
${milestone.progress}%`), but the e2e test expects it without the percent sign.
Review the actual component rendering to determine the correct format and update
either the unit test or the e2e test to ensure both tests expect the same
progress text format, either with or without the percent sign, for consistency.
backend/apps/common/utils.py (1)

104-115: 🛠️ Refactor suggestion

Add input validation for edge cases.

The round_down function implementation is correct for positive integers, but consider adding validation for edge cases:

  1. Division by zero: When base = 0, the modulo operation will raise a ZeroDivisionError
  2. Negative values: The function behavior with negative inputs may be unexpected

Consider adding input validation:

def round_down(value: int, base: int) -> int:
    """Round down the stats to the nearest base.

    Args:
        value: The value to round down.
        base: The base to round down to.

    Returns:
        int: The rounded down value.

+    Raises:
+        ValueError: If base is zero or negative.
    """
+    if base <= 0:
+        raise ValueError("Base must be a positive integer")
    return value - (value % base)
🤖 Prompt for AI Agents
In backend/apps/common/utils.py around lines 104 to 115, the round_down function
lacks input validation for edge cases such as base being zero, which causes a
ZeroDivisionError, and handling of negative values which may produce unexpected
results. Add checks to raise appropriate exceptions if base is zero and define
clear behavior or raise errors for negative value or base inputs to ensure the
function handles these edge cases safely and predictably.
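
For reference, a self-contained version of the helper with the suggested validation applied — this is a sketch of the suggestion above, not necessarily the repository's final code:

```python
def round_down(value: int, base: int) -> int:
    """Round value down to the nearest multiple of base.

    Raises:
        ValueError: If base is zero or negative.
    """
    if base <= 0:
        raise ValueError("Base must be a positive integer")
    return value - (value % base)

print(round_down(1337, 100))  # → 1300
print(round_down(1337, 50))   # → 1300
print(round_down(200, 100))   # → 200
```

With the guard in place, a `base` of zero fails with a clear `ValueError` instead of a bare `ZeroDivisionError` from the modulo.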
backend/apps/slack/models/workspace.py (1)

25-33: 🛠️ Refactor suggestion

Improve robustness of the default workspace method.

The current implementation has potential issues:

  1. No null safety: objects.first() returns None if no workspaces exist
  2. Unpredictable ordering: Without order_by(), the "first" workspace is database-dependent

Consider this more robust implementation:

 @staticmethod
-def get_default_workspace() -> "Workspace":
+def get_default_workspace() -> "Workspace | None":
     """Get the default workspace.

     Returns:
-        Workspace: The default workspace.
+        Workspace | None: The default workspace, or None if no workspaces exist.

     """
-    return Workspace.objects.first()
+    return Workspace.objects.order_by('created_at').first()

Or if a workspace must always exist, add validation:

 @staticmethod
 def get_default_workspace() -> "Workspace":
     """Get the default workspace.

     Returns:
         Workspace: The default workspace.

+    Raises:
+        Workspace.DoesNotExist: If no workspaces exist.
     """
-    return Workspace.objects.first()
+    workspace = Workspace.objects.order_by('created_at').first()
+    if workspace is None:
+        raise Workspace.DoesNotExist("No workspaces found")
+    return workspace
🤖 Prompt for AI Agents
In backend/apps/slack/models/workspace.py around lines 25 to 33, the
get_default_workspace method lacks null safety and does not specify ordering,
which can lead to unpredictable results or None returns. To fix this, modify the
method to include an explicit order_by clause to ensure consistent ordering and
add a check to raise an exception or handle the case when no workspace exists,
ensuring the method never returns None unexpectedly.
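
The ordering-plus-null-check idea can be sketched outside Django as well. Here the `Workspace` dataclass and `LookupError` are illustrative stand-ins for the actual model and `Workspace.DoesNotExist`:

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Workspace:
    name: str
    created_at: datetime


def get_default_workspace(workspaces: list[Workspace]) -> Workspace:
    """Pick the earliest-created workspace deterministically; fail loudly when none exist."""
    if not workspaces:
        raise LookupError("No workspaces found")
    return min(workspaces, key=lambda w: w.created_at)


workspaces = [
    Workspace("beta", datetime(2025, 3, 1)),
    Workspace("alpha", datetime(2024, 1, 1)),
]
print(get_default_workspace(workspaces).name)  # → alpha
```

The point is the same as in the review comment: without an explicit ordering, "first" depends on storage order, and without the emptiness check, callers silently receive `None`.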
frontend/__tests__/unit/pages/Home.test.tsx (1)

275-281: ⚠️ Potential issue

Fix the async testing pattern - setTimeout inside waitFor is incorrect.

The current implementation uses setTimeout inside waitFor, which is problematic because waitFor expects a condition that will eventually resolve, but setTimeout doesn't return a promise that waitFor can wait for. This could lead to flaky tests or false positives.

Replace the setTimeout approach with a proper async testing pattern:

-    await waitFor(() => {
-      headers.forEach((header) => expect(screen.getByText(header)).toBeInTheDocument())
-      // Wait for 2 seconds
-      setTimeout(() => {
-        Object.values(stats).forEach((value) =>
-          expect(screen.getByText(`${millify(value)}+`)).toBeInTheDocument()
-        )
-      }, 2000)
-    })
+    await waitFor(() => {
+      headers.forEach((header) => expect(screen.getByText(header)).toBeInTheDocument())
+    })
+    
+    // Wait for animated counters to finish
+    await waitFor(() => {
+      Object.values(stats).forEach((value) =>
+        expect(screen.getByText(`${millify(value)}+`)).toBeInTheDocument()
+      )
+    }, { timeout: 3000 })
🤖 Prompt for AI Agents
In frontend/__tests__/unit/pages/Home.test.tsx around lines 275 to 281, the test
uses setTimeout inside waitFor, which is incorrect because waitFor expects a
condition to be awaited, not a timer. Remove the setTimeout and instead use
waitFor with an async callback that checks for the presence of the expected
elements directly, allowing the test to properly wait for the UI updates without
relying on arbitrary delays.

@ahmedxgouda ahmedxgouda marked this pull request as ready for review May 24, 2025 10:35

@kasya kasya left a comment


Love this!
A couple of requests ⬇️ :

@github-actions github-actions bot added the docker Pull requests that update Docker code label May 24, 2025
@github-actions github-actions bot removed the docker Pull requests that update Docker code label May 24, 2025
@ahmedxgouda ahmedxgouda requested a review from kasya May 24, 2025 21:23

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (1)
frontend/src/app/about/page.tsx (1)

168-170: Add optional chaining for safer data access.

Consider adding optional chaining to handle potential missing milestone data gracefully.

- {[...projectMetadata.recentMilestones]
+ {[...(projectMetadata.recentMilestones || [])]
    .sort((a, b) => (a.title > b.title ? 1 : -1))
    .map((milestone, index) => (

Also consider adding default values for milestone properties:

- <h3 className="mb-2 text-xl font-semibold text-blue-400">
-   {milestone.title}
+ <h3 className="mb-2 text-xl font-semibold text-blue-400">
+   {milestone?.title || 'Untitled Milestone'}
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4660c46 and 0ca30dc.

📒 Files selected for processing (2)
  • frontend/src/app/about/page.tsx (3 hunks)
  • frontend/src/server/queries/projectQueries.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • frontend/src/server/queries/projectQueries.ts
⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: Run frontend unit tests
  • GitHub Check: Run backend tests
  • GitHub Check: Run frontend e2e tests
  • GitHub Check: CodeQL (javascript-typescript)
🔇 Additional comments (3)
frontend/src/app/about/page.tsx (3)

4-6: LGTM! Icon imports align with requirements.

The new FontAwesome icons provide clear visual distinction for milestone status states and align with the past review feedback.


13-14: Component imports look good.

The FontAwesome and Tooltip imports are properly added to support the new milestone visualization.


167-211: Excellent implementation of past review feedback, with minor alignment consideration.

The refactored roadmap successfully addresses the previous review comments:

  • ✅ Removed progress bars and percentages
  • ✅ Added appropriate icons (user-gear for in-progress, check for completed, clock for not started)
  • ✅ Implemented tooltips with clear status text

However, consider if the current flex layout provides the consistent alignment mentioned in past review comments. The current implementation uses flex items-center gap-4 which should provide good alignment, but you may want to verify this addresses kasya's alignment concerns.


@arkid15r arkid15r left a comment


LGTM

@arkid15r arkid15r enabled auto-merge May 24, 2025 23:51
@sonarqubecloud

Quality Gate failed

Failed conditions
13.9% Duplication on New Code (required ≤ 3%)

See analysis details on SonarQube Cloud

@arkid15r arkid15r added this pull request to the merge queue May 25, 2025
Merged via the queue into OWASP:main with commit 8b4cb61 May 25, 2025
22 of 23 checks passed
@ahmedxgouda ahmedxgouda deleted the feature/implement-roadmap branch June 15, 2025 07:35


Development

Successfully merging this pull request may close these issues.

Refactor the Roadmap Section on About Page to Enhance Visual Appeal and Structure

3 participants