
Allow matching empty frames in quality checks #8652

Merged
merged 28 commits on Nov 11, 2024

Conversation


@zhiltsov-max (Contributor) commented Nov 6, 2024

Motivation and context

Depends on #8634

Added a quality check option to consider two frames matching if neither the GT job nor the regular job has any annotations on that frame. This affects quality metrics and total counts in reports, but confusion matrices stay unchanged. It makes it possible to use both positive and negative validation frames.
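The option's effect on per-frame accuracy can be sketched with a hypothetical helper (all names here are illustrative, not CVAT's actual API):

```python
def frame_accuracy(gt_anns, job_anns, valid_count, total_count, match_empty_frames=False):
    """Sketch of how an empty-frame option could affect per-frame accuracy.

    gt_anns / job_anns are the annotation lists for one frame; valid_count and
    total_count are the matched / total annotation counts from the comparison.
    """
    if match_empty_frames and not gt_anns and not job_anns:
        # Treat the empty frame as one virtual matching annotation
        valid_count, total_count = 1, 1
    if total_count == 0:
        return 0.0
    return valid_count / total_count

# An empty frame counts as a perfect match only when the option is enabled
assert frame_accuracy([], [], 0, 0, match_empty_frames=True) == 1.0
assert frame_accuracy([], [], 0, 0, match_empty_frames=False) == 0.0
```

With the option disabled, an empty frame contributes zero accuracy; with it enabled, the frame contributes a perfect score, which is what shifts the totals without touching the confusion matrix.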

How has this been tested?

Unit tests

Checklist

  • I submit my changes into the develop branch
  • I have created a changelog fragment
  • I have updated the documentation accordingly
  • I have added tests to cover my changes
  • I have linked related issues (see GitHub docs)
  • I have increased versions of npm packages if it is necessary
    (cvat-canvas,
    cvat-core,
    cvat-data and
    cvat-ui)

License

  • I submit my code changes under the same MIT License that covers the project.
    Feel free to contact the maintainers if that's a concern.

Summary by CodeRabbit

Release Notes

  • New Features

    • Introduced a quality setting for comparing point groups without bounding boxes.
    • Added an option to consider empty frames as matching in quality checks.
    • Enhanced quality settings form with new options for matchEmptyFrames and useBboxSizeForPoints.
  • Bug Fixes

    • Corrected property name from peoject_id to project_id in the API quality reports filter.
  • Documentation

    • Updated API schemas to include new quality settings properties.

These changes improve the flexibility and accuracy of quality assessments within the application.


coderabbitai bot commented Nov 6, 2024

Important

Review skipped

Auto incremental reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

Walkthrough

The changes enhance the quality settings in the CVAT system, adding options for comparing point groups without bounding boxes and for including empty frames in quality checks. New properties were added to models, serializers, and UI forms to support these features. Migrations update the database schema, tests were extended to validate the new settings, and the OpenAPI schema was updated to reflect the changes.

Changes

File Change Summary
changelog.d/20241101_140759_mzhiltso_compare_point_groups_in_image_space.md Added new quality setting for comparing point groups without bounding boxes.
changelog.d/20241106_170626_mzhiltso_match_empty_frames.md Introduced quality check option for matching empty frames.
cvat-core/src/quality-settings.ts Added properties #useBboxSizeForPoints and #matchEmptyFrames with getters and setters; updated toJSON method.
cvat-core/src/server-response-types.ts Added properties use_bbox_size_for_points and match_empty_frames to SerializedQualitySettingsData; corrected peoject_id to project_id.
cvat-ui/src/components/quality-control/quality-control-page.tsx Integrated new settings matchEmptyFrames and useBboxSizeForPoints into quality settings configuration.
cvat-ui/src/components/quality-control/task-quality/quality-settings-form.tsx Added new properties to initialValues and introduced new form items for quality settings.
cvat/apps/quality_control/migrations/0004_qualitysettings_use_bbox_size_for_points.py Added use_bbox_size_for_points field to qualitysettings model with default value True.
cvat/apps/quality_control/migrations/0005_qualitysettings_match_empty.py Added match_empty_frames field to qualitysettings model with default value False.
cvat/apps/quality_control/models.py Updated QualitySettings class to include new fields use_bbox_size_for_points and match_empty_frames.
cvat/apps/quality_control/quality_reports.py Added new parameters to ComparisonParameters for quality checks; updated report generation logic for empty frames.
cvat/apps/quality_control/serializers.py Added new fields use_bbox_size_for_points and match_empty_frames to QualitySettingsSerializer.
cvat/schema.yml Updated OpenAPI schema to include new properties in QualitySettings and related schemas.
tests/python/rest_api/test_quality_control.py Enhanced tests to include new parameters for quality settings in assertions.
tests/python/shared/assets/quality_settings.json Updated results array to include new properties match_empty_frames and use_bbox_size_for_points for all objects.

Poem

In the fields where quality blooms,
New settings dance, dispelling glooms.
Empty frames now find their place,
Bounding boxes, a gentle grace.
With each change, our work refined,
In harmony, our goals aligned! 🐇✨




coderabbitai bot left a comment


Actionable comments posted: 10

🧹 Outside diff range and nitpick comments (14)
changelog.d/20241106_170626_mzhiltso_match_empty_frames.md (1)

1-4: Enhance the changelog entry with more details.

While the entry follows the correct format, it would be more helpful to users if it included additional context and impact details.

Consider expanding the entry like this:

 ### Added
 
-A quality check option to consider empty frames matching
-(<https://github.com/cvat-ai/cvat/pull/8652>)
+- Added a quality check option to consider empty frames as matching in validation
+  - Allows including both positive and negative validation frames in quality assessment
+  - Affects quality metrics reporting without changing confusion matrices
+  - Requires changes from PR #8634
+  (<https://github.com/cvat-ai/cvat/pull/8652>)
cvat/apps/quality_control/serializers.py (1)

103-103: Add default value for use_bbox_size_for_points.

While match_empty_frames has a default value set to False, the use_bbox_size_for_points field is missing a default value. Consider adding one for consistency and to ensure predictable behavior.

 extra_kwargs.setdefault("match_empty_frames", {}).setdefault("default", False)
+extra_kwargs.setdefault("use_bbox_size_for_points", {}).setdefault("default", False)
cvat-core/src/quality-settings.ts (1)

47-47: Consider adding default values for new properties.

The constructor assumes initialData will always contain use_bbox_size_for_points and match_empty_frames. Consider adding default values to handle cases where these properties might be undefined.

-this.#useBboxSizeForPoints = initialData.use_bbox_size_for_points;
+this.#useBboxSizeForPoints = initialData.use_bbox_size_for_points ?? true;
-this.#matchEmptyFrames = initialData.match_empty_frames;
+this.#matchEmptyFrames = initialData.match_empty_frames ?? false;

Also applies to: 58-58

cvat-core/src/server-response-types.ts (1)

Line range hint 289-295: Fix typo in property name: peoject_id → project_id

There's a typo in the APIQualityReportsFilter interface where peoject_id should be project_id. This needs to be fixed as it's a breaking change that could cause runtime issues.

Apply this diff to fix the typo:

 export interface APIQualityReportsFilter extends APICommonFilterParams {
     parent_id?: number;
-    peoject_id?: number;
+    project_id?: number;
     task_id?: number;
     job_id?: number;
     target?: string;
 }
tests/python/shared/assets/quality_settings.json (1)

17-17: Maintain consistent property ordering across configurations.

The placement of the new properties (match_empty_frames and use_bbox_size_for_points) varies across different configuration objects. While this doesn't affect functionality, consistent property ordering improves maintainability and readability.

Consider standardizing the property order across all configuration objects, for example:

  1. Keep core settings together (thresholds, comparison flags)
  2. Group related properties (e.g., all matching-related properties)
  3. Place metadata fields (id, task_id) consistently at the start or end

Also applies to: 24-24, 25-25, 38-38, 45-45, 46-46, 59-59, 66-66, 67-67, 80-80, 87-87, 88-88, 101-101, 108-108, 109-109, 122-122, 129-129, 130-130, 143-143, 150-150, 151-151, 164-164, 171-171, 172-172, 185-185, 192-192, 193-193, 206-206, 213-213, 214-214, 227-227, 234-234, 235-235, 248-248, 255-255, 256-256, 269-269, 276-276, 277-277, 290-290, 297-297, 298-298, 311-311, 318-318, 319-319, 332-332, 339-339, 340-340, 353-353, 360-360, 361-361, 374-374, 381-381, 382-382, 395-395, 402-402, 403-403, 416-416, 423-423, 424-424, 437-437, 444-444, 445-445, 458-458, 465-465, 466-466, 479-479, 486-486, 487-487, 500-500, 507-507, 508-508

cvat-ui/src/components/quality-control/task-quality/quality-settings-form.tsx (1)

76-77: LGTM with a minor suggestion for consistency

The UI implementation for the new settings is well-structured and follows the established patterns. The tooltip integration and form layout are consistent with other sections.

Consider adding a descriptive label prop to the Form.Items for better accessibility, matching the pattern used in other numeric input fields in the form.

 <Form.Item
     name='compareAttributes'
+    label='Compare attributes'
     valuePropName='checked'
     rules={[{ required: true }]}
 >

 <Form.Item
     name='matchEmptyFrames'
+    label='Match empty frames'
     valuePropName='checked'
     rules={[{ required: true }]}
 >

Also applies to: 180-203

tests/python/shared/assets/cvat_db/data.json (1)

Line range hint 18191-18200: Consider using a test data generator.

The same configuration block is repeated multiple times in this test data file. To improve maintainability and reduce the risk of inconsistencies, consider:

  1. Creating a test data generator function
  2. Using parameterized test cases instead of duplicating the configuration blocks

This would make the test data more maintainable and easier to update when new configuration options are added.

Also applies to: 18215-18224, 18239-18248, 18263-18272, 18287-18296, 18311-18320, 18335-18344, 18359-18368, 18383-18392, 18407-18416, 18431-18440, 18455-18464, 18479-18488, 18503-18512, 18527-18536, 18551-18560, 18575-18584, 18599-18608, 18623-18632, 18647-18656, 18671-18680, 18695-18704, 18719-18728

cvat/apps/quality_control/quality_reports.py (7)

968-971: Ensure consistent default values for use_bbox_size_for_points.

The default value of use_bbox_size_for_points in the constructor is True, which should align with the default in ComparisonParameters.

Verify that default values are consistent across the codebase to prevent unexpected behavior.


1311-1321: Optimize scale calculation for point matching.

The current implementation recalculates scale redundantly. Consider simplifying the conditional logic to improve readability and performance.

Refactor the code as follows:

 if self.use_bbox_size_for_points and dm.ops.bbox_iou(a_bbox, b_bbox) > 0:
     bbox = dm.ops.mean_bbox([a_bbox, b_bbox])
     scale = bbox[2] * bbox[3]
 else:
     scale = img_h * img_w
     if dm.ops.bbox_iou(a_bbox, b_bbox) <= 0:
         # Early exit for non-overlapping bboxes
         return 0
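The scale selection under discussion can be illustrated with a self-contained sketch (a plain-Python IoU replaces datumaro's `dm.ops.bbox_iou`, and the mean-bbox step is simplified to an element-wise average; names and structure are assumptions, not CVAT's code):

```python
def point_match_scale(a_bbox, b_bbox, img_w, img_h, use_bbox_size_for_points=True):
    """Choose the normalization scale for OKS-style point matching.

    Boxes are (x, y, w, h). When the setting is on and the group boxes
    overlap, the scale comes from the averaged box area; otherwise it
    falls back to the full image area.
    """
    def iou(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
        iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
        inter = ix * iy
        union = aw * ah + bw * bh - inter
        return inter / union if union else 0.0

    if use_bbox_size_for_points and iou(a_bbox, b_bbox) > 0:
        # average the two boxes coordinate-wise and use the resulting area
        w = (a_bbox[2] + b_bbox[2]) / 2
        h = (a_bbox[3] + b_bbox[3]) / 2
        return w * h
    return img_w * img_h

# Overlapping group boxes -> bbox-based scale; otherwise image-area scale
assert point_match_scale((0, 0, 10, 10), (5, 5, 10, 10), 100, 100) == 100.0
assert point_match_scale((0, 0, 10, 10), (50, 50, 10, 10), 100, 100) == 10000
```

Computing the IoU once and branching on it, as the refactor suggests, avoids evaluating the overlap test twice per candidate pair.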

Line range hint 2031-2055: Ensure variables are initialized before use.

Variables like valid_annotations_count, missing_annotations_count, and others might not be initialized if certain conditions are not met.

Confirm that all variables are properly initialized to prevent UnboundLocalError.


2074-2087: Remove redundant method _generate_frame_annotations_summary.

The method _generate_frame_annotations_summary duplicates the functionality of _compute_annotation_summary, leading to unnecessary code duplication.

Consider removing _generate_frame_annotations_summary and directly using _compute_annotation_summary where needed.


Line range hint 2088-2168: Handle division by zero in mean IoU calculation.

When calculating the mean IoU, there is a possibility of division by zero if mean_ious is empty.

Add a check to prevent division by zero:

 if mean_ious:
     annotation_components.shape.mean_iou = np.mean(mean_ious)
 else:
     annotation_components.shape.mean_iou = 0.0
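The hazard is easy to reproduce: `np.mean` of an empty sequence returns `nan` (with a `RuntimeWarning`) rather than raising, so without the guard a `nan` would flow silently into the report:

```python
import warnings
import numpy as np

mean_ious = []  # no shape pairs were matched in this report

with warnings.catch_warnings():
    warnings.simplefilter("ignore")  # np.mean([]) emits a RuntimeWarning
    unguarded = np.mean(mean_ious)

assert np.isnan(unguarded)  # this nan would leak into the report

# The guarded form suggested above
mean_iou = float(np.mean(mean_ious)) if mean_ious else 0.0
assert mean_iou == 0.0
```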

1547-1549: Consider defaulting included_ann_types to a fixed set.

Currently, included_ann_types is set using set(self.included_ann_types) - {dm.AnnotationType.mask}. This might exclude masks unintentionally.

Review whether masks should be deliberately excluded or if this exclusion should be configurable.


Line range hint 1980-2087: Avoid modifying counts directly when matching empty frames.

Modifying counts like valid_labels_count directly might lead to inconsistencies in summary statistics.

Consider encapsulating the logic for handling empty frames within a dedicated method to maintain data integrity.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between a56e94b and 01d5a07.

📒 Files selected for processing (15)
  • changelog.d/20241101_140759_mzhiltso_compare_point_groups_in_image_space.md (1 hunks)
  • changelog.d/20241106_170626_mzhiltso_match_empty_frames.md (1 hunks)
  • cvat-core/src/quality-settings.ts (8 hunks)
  • cvat-core/src/server-response-types.ts (2 hunks)
  • cvat-ui/src/components/quality-control/quality-control-page.tsx (1 hunks)
  • cvat-ui/src/components/quality-control/task-quality/quality-settings-form.tsx (5 hunks)
  • cvat/apps/quality_control/migrations/0004_qualitysettings_use_bbox_size_for_points.py (1 hunks)
  • cvat/apps/quality_control/migrations/0005_qualitysettings_match_empty.py (1 hunks)
  • cvat/apps/quality_control/models.py (2 hunks)
  • cvat/apps/quality_control/quality_reports.py (13 hunks)
  • cvat/apps/quality_control/serializers.py (4 hunks)
  • cvat/schema.yml (4 hunks)
  • tests/python/rest_api/test_quality_control.py (2 hunks)
  • tests/python/shared/assets/cvat_db/data.json (48 hunks)
  • tests/python/shared/assets/quality_settings.json (24 hunks)
🔥 Files not summarized due to errors (1)
  • tests/python/shared/assets/cvat_db/data.json: Error: Server error: no LLM provider could handle the message
🔇 Additional comments (22)
cvat/apps/quality_control/migrations/0005_qualitysettings_match_empty.py (2)

8-10: LGTM! Dependencies are correctly specified.

The migration properly depends on 0004_qualitysettings_use_bbox_size_for_points, ensuring correct sequential application of database changes.


13-17: Verify integration with related quality control components.

The field addition looks good. The default value of False ensures backward compatibility with existing quality settings.

Let's verify the integration with related components:

✅ Verification successful

Integration with quality control components is properly implemented

The field match_empty_frames is well integrated across the quality control system:

  • Defined in the model with correct default value
  • Properly serialized with documentation
  • Implemented in quality reports logic for empty frame matching
  • Covered by tests verifying the impact on metrics
  • Used consistently in the codebase for its intended purpose
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the integration of match_empty_frames across the codebase

# Test 1: Check if the field is properly defined in the model
echo "Checking model definition..."
rg "match_empty_frames" "cvat/apps/quality_control/models.py"

# Test 2: Verify serializer implementation
echo "Checking serializer implementation..."
rg "match_empty_frames" "cvat/apps/quality_control/serializers.py"

# Test 3: Look for corresponding tests
echo "Checking test coverage..."
rg "match_empty_frames" -g "test_*.py"

# Test 4: Verify API schema updates
echo "Checking API schema..."
rg "match_empty_frames" -g "*.yaml"

Length of output: 858


Script:

#!/bin/bash
# Let's check the test file content and API implementation for better context

# Check test implementation details
echo "Checking test implementation details..."
rg -A 5 -B 5 "match_empty_frames" "tests/python/rest_api/test_quality_control.py"

# Check if there are any API views/viewsets handling this field
echo "Checking API implementation..."
rg -l "QualitySettings" "cvat/apps/quality_control/"

# Check the actual implementation of quality control logic
echo "Checking quality control implementation..."
rg "match_empty_frames" "cvat/apps/quality_control/" -g "!migrations/*"

Length of output: 2596

cvat/apps/quality_control/models.py (1)

223-224: Add documentation for empty frame matching.

The new match_empty_frames field implements the PR objective correctly with a safe default value, but lacks documentation explaining its purpose and implications for quality metrics.

Add a docstring:

-    match_empty_frames = models.BooleanField(default=False)
+    match_empty_frames = models.BooleanField(
+        default=False,
+        help_text="If True, considers empty frames as matching in quality metrics"
+    )

Let's verify the integration with quality check logic:

✅ Verification successful

Add docstring for empty frame matching and consider updating help text.

The field is well integrated with quality check logic in quality_reports.py and has proper test coverage, but could benefit from improved documentation:

-    match_empty_frames = models.BooleanField(default=False)
+    match_empty_frames = models.BooleanField(
+        default=False,
+        help_text="""
+            Consider empty frames (frames without annotations) as matching in quality metrics.
+            When enabled:
+            - Empty frames from both GT and DS will have accuracy=1
+            - Virtual annotations are added to empty frames in comparison results
+            When disabled:
+            - Empty frames have accuracy=0
+        """
+    )
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for usage of match_empty_frames in quality check implementation
rg -l "match_empty_frames" --type python

Length of output: 71


Script:

#!/bin/bash
# Search for usage of match_empty_frames without file type restriction
rg -l "match_empty_frames"

Length of output: 450


Script:

#!/bin/bash
# Search for quality check implementation files
fd "quality.*\.py$"

Length of output: 429


Script:

#!/bin/bash
# Search for the actual implementation context
rg -A 5 -B 5 "match_empty_frames"

Length of output: 55339

cvat/apps/quality_control/serializers.py (2)

84-84: LGTM: Fields properly added to the serializer.

The new fields use_bbox_size_for_points and match_empty_frames are correctly integrated into the serializer's fields list, following the established naming conventions.

Also applies to: 95-95


125-131: LGTM: Well-documented field descriptions.

The help text for both new fields is comprehensive and clear:

  • use_bbox_size_for_points: Clearly explains the point matching area calculation and its use cases.
  • match_empty_frames: Thoroughly documents the impact on accuracy metrics and the virtual annotations behavior.

Also applies to: 160-165

cvat-core/src/quality-settings.ts (3)

25-25: LGTM: Property declarations follow TypeScript best practices.

The new private properties are well-typed and follow consistent naming conventions.

Also applies to: 36-36


86-92: LGTM: Clean getter/setter implementation.

The implementation follows the established pattern in the codebase.


198-204: LGTM: Clean getter/setter implementation.

The implementation follows the established pattern in the codebase.

cvat-core/src/server-response-types.ts (1)

250-250: LGTM: New quality settings properties look good!

The new optional boolean properties use_bbox_size_for_points and match_empty_frames are well-placed and properly typed. They align with the PR objectives for enhancing quality checks.

Also applies to: 261-261

cvat-ui/src/components/quality-control/quality-control-page.tsx (3)

226-226: LGTM: Empty frames matching setting added

The addition of matchEmptyFrames aligns with the PR objective to allow matching empty frames in quality checks. The implementation correctly preserves the setting during the save operation.


226-229: Well-structured implementation of new quality settings

The integration of new settings (matchEmptyFrames and useBboxSizeForPoints) follows good practices:

  1. Consistent with existing settings management
  2. Proper error handling and loading states
  3. User feedback through notifications
  4. Clean integration with the existing save operation

229-229: Verify the impact of bounding box size setting for points

The addition of useBboxSizeForPoints setting looks correct, but let's verify its usage in related components.

✅ Verification successful

Let me gather more information about how this setting affects point comparisons by looking for the actual implementation.


Let me try one more search to find the implementation details.


Let me try one final search to find the quality comparison implementation.


Based on the gathered information, I can now provide a final response:

The useBboxSizeForPoints setting is properly integrated in the codebase

The setting is correctly implemented in the quality settings system:

  • Defined in the core quality settings type system (cvat-core/src/server-response-types.ts)
  • Properly handled in the quality settings class (cvat-core/src/quality-settings.ts)
  • Correctly propagated through the UI components

While the exact point comparison logic implementation isn't directly visible, the setting is properly integrated into the quality control infrastructure, following the same pattern as other quality control settings.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check where this setting is used and how it affects point comparisons
# Look for related UI components and tests
rg -g '*.{tsx,ts}' -A 3 'useBboxSizeForPoints'

# Look for any point comparison logic that might use this setting
ast-grep --pattern 'function $FUNC_NAME($PARAMS) {
  $$$
  point$$$
  $$$
}'

Length of output: 3654


Script:

#!/bin/bash
# Look for point comparison logic in quality-related files
rg -g '*.{ts,tsx}' -B 3 -A 10 'comparePoints|pointComparison|compareAnnotations.*point'

# Search for files that might contain point comparison implementation
fd -e ts -e tsx -e js | grep -i 'quality.*compar\|compar.*point'

Length of output: 156


Script:

#!/bin/bash
# Look for point comparison implementation in quality-related files
rg -g '*.{ts,tsx}' -B 3 -A 10 'point.*comparison|compare.*point|distance.*point' cvat-core/

# Search for files containing point-related quality checks
fd . cvat-core/ -t f -e ts -e tsx | xargs rg -l 'point.*quality|quality.*point'

Length of output: 11165


Script:

#!/bin/bash
# Look for quality comparison implementation
rg -g '*.{ts,tsx}' -B 5 -A 10 'compareQuality|qualityComparison|compareAnnotation' cvat-core/src/

# Search specifically in quality settings related files
rg -g '*.{ts,tsx}' -B 5 -A 10 'use_bbox_size_for_points' cvat-core/src/

Length of output: 4427

cvat-ui/src/components/quality-control/task-quality/quality-settings-form.tsx (2)

36-36: LGTM: Initial values setup is correct

The new boolean properties are properly initialized from the settings object, maintaining consistency with the existing pattern.

Also applies to: 39-39


96-99: Verify the impact of bbox size setting on point comparison

The UI implementation for the point comparison section is well-structured and consistent with other sections. However, since this setting affects point comparison without bounding boxes, we should verify its integration with the quality metrics calculation.

Also applies to: 284-307

✅ Verification successful

Point comparison setting is properly integrated with quality metrics

The implementation is well-integrated across the codebase:

  • The setting is properly defined in quality settings types and models
  • It's correctly implemented in the quality reports calculation logic where it affects the scale used for point comparison:
    • When disabled: uses image dimensions (img_h * img_w)
    • When enabled: uses bounding box dimensions (bbox[2] * bbox[3])
  • The setting is covered by quality control tests that verify its impact on metrics
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for related quality metric calculation code that uses this setting

# Look for files that might handle point comparison logic
echo "Searching for point comparison implementation files..."
rg -l "point.*compar|bbox.*size.*point" --type ts --type py

# Look for test files that validate this functionality
echo "Searching for related test files..."
rg -l "test.*point.*comparison|test.*bbox.*size" --type ts --type py

Length of output: 732


Script:

#!/bin/bash
# Let's examine the implementation details in key files

# Check quality settings implementation
echo "=== Quality Settings Implementation ==="
rg -A 10 "useBboxSizeForPoints|use_bbox_size_for_points" cvat-core/src/quality-settings.ts

# Check server response types
echo -e "\n=== Server Response Types ==="
rg -A 10 "useBboxSizeForPoints|use_bbox_size_for_points" cvat-core/src/server-response-types.ts

# Check quality control implementation
echo -e "\n=== Quality Control Implementation ==="
rg -A 10 "use_bbox_size_for_points" cvat/apps/quality_control/quality_reports.py

# Check test coverage
echo -e "\n=== Test Coverage ==="
rg -A 10 "use_bbox_size_for_points" tests/python/rest_api/test_quality_control.py

Length of output: 5937

cvat/schema.yml (2)

9664-9671: LGTM! Well-structured property definition for point group comparison

The use_bbox_size_for_points property is well-defined with appropriate type and clear documentation explaining its purpose and behavior in point group comparisons.


10216-10222: LGTM! Clear property definition for empty frame handling

The match_empty_frames property is well-defined with appropriate type, default value, and clear documentation explaining how it affects quality metrics for empty frames.

tests/python/shared/assets/cvat_db/data.json (1)

18167-18170: LGTM! Configuration structure looks correct.

The quality check configuration includes:

  1. Point comparison settings with appropriate defaults
  2. New match_empty_frames field defaulting to false, which aligns with the PR objective of making empty frame matching optional

Also applies to: 18176-18179

tests/python/rest_api/test_quality_control.py (2)

1214-1215: Parameters added for comprehensive testing

The parameters "use_bbox_size_for_points" and "match_empty_frames" have been added to the list of parameters in the test_settings_affect_metrics method. This inclusion ensures that the impact of these settings on the metrics is thoroughly tested.


1242-1247: Verify the assertion logic for 'match_empty_frames'

In the test_settings_affect_metrics method, when parameter == "match_empty_frames", the assertion checks if the valid_count has changed. For other parameters, it checks if the conflict_count has changed. Please verify that this conditional logic accurately reflects the expected behavior of the parameters and that all possible effects on the report summary are covered.
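The conditional assertion being questioned can be sketched as follows (the summary keys and helper are illustrative, not the test's exact code):

```python
def setting_changed_report(default_summary, new_summary, parameter):
    """Mirror of the branch under review: match_empty_frames turns empty
    frames into valid matches, so it is expected to move valid_count;
    the other settings change which annotations conflict, so they are
    expected to move conflict_count.
    """
    if parameter == "match_empty_frames":
        return default_summary["valid_count"] != new_summary["valid_count"]
    return default_summary["conflict_count"] != new_summary["conflict_count"]

assert setting_changed_report(
    {"valid_count": 3, "conflict_count": 2},
    {"valid_count": 5, "conflict_count": 2},
    "match_empty_frames",
)
assert setting_changed_report(
    {"valid_count": 3, "conflict_count": 2},
    {"valid_count": 3, "conflict_count": 4},
    "iou_threshold",
)
```

The reviewer's question amounts to whether enabling `match_empty_frames` could also change `conflict_count` (or other summary fields) in some datasets, in which case the branch would under-test the setting.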

cvat/apps/quality_control/quality_reports.py (3)

1980-1987: ⚠️ Potential issue

Adjust indentation in if block.

There is incorrect indentation starting from line 1985, causing the code to execute incorrectly.

Apply this diff to fix the indentation:

 if self.settings.match_empty_frames and not gt_item.annotations and not ds_item.annotations:
     # Add virtual annotations for empty frames
     valid_labels_count = 1
     total_labels_count = 1

     valid_shapes_count = 1
     total_shapes_count = 1
     ds_shapes_count = 1
     gt_shapes_count = 1

Likely invalid or redundant comment.


982-984: ⚠️ Potential issue

Assign compare_line_orientation parameter correctly.

The compare_line_orientation parameter is hardcoded as False, which might override intended behavior.

Update the assignment to use the parameter passed to the constructor:

 def __init__(
     self,
     categories: dm.CategoriesInfo,
     *,
     included_ann_types: Optional[List[dm.AnnotationType]] = None,
     return_distances: bool = False,
     iou_threshold: float = 0.5,
     oks_sigma: float = 0.09,
     use_bbox_size_for_points: bool = True,
-    compare_line_orientation: bool = False,
+    compare_line_orientation: bool = compare_line_orientation,
     line_torso_radius: float = 0.01,
     panoptic_comparison: bool = False,
 ):
     super().__init__(iou_threshold=iou_threshold)
     self.categories = categories
     self._skeleton_info = {}
     self.included_ann_types = included_ann_types
     self.return_distances = return_distances

     self.oks_sigma = oks_sigma
     self.use_bbox_size_for_points = use_bbox_size_for_points
     self.compare_line_orientation = compare_line_orientation

Likely invalid or redundant comment.
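Note that the suggested diff above is not valid Python: a parameter's default value cannot reference the parameter itself. If the intent is to stop hardcoding the value, the fix is simply to keep an ordinary default and use the argument in the assignment. A minimal sketch with a simplified class, not the actual comparator:

```python
class DistanceComparator:
    """Simplified stand-in for the comparator discussed above."""

    def __init__(
        self,
        *,
        iou_threshold: float = 0.5,
        compare_line_orientation: bool = False,  # ordinary default, overridable by callers
    ):
        self.iou_threshold = iou_threshold
        # Use the argument as passed, rather than a hardcoded literal
        self.compare_line_orientation = compare_line_orientation
```

With this shape, `DistanceComparator(compare_line_orientation=True)` behaves as callers expect, while the default remains False.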


Line range hint 2139-2168: Correct the frame share calculation.

The frame share is calculated using self._ds_data_provider.job_data.rel_range, which may not accurately represent the total number of frames.

Run the following script to verify that frame_share is calculated correctly across different scenarios:

cvat/apps/quality_control/models.py (outdated, resolved)
cvat-core/src/quality-settings.ts (outdated, resolved)
cvat/apps/quality_control/quality_reports.py (outdated, resolved)
Comment on lines +242 to +246
rows: np.ndarray
precision: np.ndarray
recall: np.ndarray
accuracy: np.ndarray
jaccard_index: Optional[np.ndarray]
Contributor

🛠️ Refactor suggestion

Initialize optional field jaccard_index in ConfusionMatrix.

The jaccard_index field is optional but may cause issues if not properly initialized, especially when old serialized instances are used.

Consider setting a default value in the constructor:

 @define(kw_only=True)
 class ConfusionMatrix(_Serializable):
     labels: List[str]
     rows: np.ndarray
     precision: np.ndarray
     recall: np.ndarray
     accuracy: np.ndarray
-    jaccard_index: Optional[np.ndarray]
+    jaccard_index: Optional[np.ndarray] = None
📝 Committable suggestion

 rows: np.ndarray
 precision: np.ndarray
 recall: np.ndarray
 accuracy: np.ndarray
-jaccard_index: Optional[np.ndarray]
+jaccard_index: Optional[np.ndarray] = None
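The behavior this suggestion aims for can be illustrated with a plain dataclass. The real class uses attrs' @define(kw_only=True), which handles field defaults the same way; the field types here are simplified stand-ins for the numpy arrays.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ConfusionMatrix:
    # Simplified stand-in: the real fields are numpy arrays
    labels: List[str]
    jaccard_index: Optional[List[float]] = None  # default lets older payloads omit it

# Older serialized instances without the field still construct cleanly
old_style = ConfusionMatrix(labels=["cat", "dog"])
new_style = ConfusionMatrix(labels=["cat", "dog"], jaccard_index=[0.8, 0.6])
```

Without the `= None` default, deserializing an old instance that lacks the field would fail at construction time.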

cvat/apps/quality_control/quality_reports.py (resolved)
cvat/apps/quality_control/quality_reports.py (outdated, resolved)
@@ -1972,8 +1979,16 @@ def _find_closest_unmatched_shape(shape: dm.Annotation):
gt_label_idx = label_id_map[gt_ann.label] if gt_ann else self._UNMATCHED_IDX
confusion_matrix[ds_label_idx, gt_label_idx] += 1

if self.settings.match_empty_frames and not gt_item.annotations and not ds_item.annotations:
# Add virtual annotations for empty frames
Contributor

What's the purpose of these virtual annotations?

Contributor Author

@zhiltsov-max zhiltsov-max Nov 7, 2024

Basically, to avoid breaking the math and code, and to get the expected values for accuracy, precision, and recall here and in the aggregated reports (job report, task report). For instance, if there are both empty and non-empty frames, this helps produce correct metrics in an aggregated report. It's not totally nonsense, as an empty annotation can be considered a frame annotation by itself.

Contributor

I see, but it seems like bad UX. You choose to match empty frames, and you inexplicably get fake totals. Moreover, if I understand the code correctly, with this change the totals no longer match the confusion matrix, which seems like it could cause more confusion and math errors down the line.

Is this solution really that much less disruptive than fudging the metric formulas to return 1 instead of 0?

It's not totally nonsense, as an empty annotation can be considered a frame annotation by itself.

This is a valid point, but you're not implementing that consistently. In your implementation, this "empty annotation" only appears when both GT and DS frames are empty. To do this consistently, you'd need to increase the GT count whenever the GT frame is empty, and the DS count whenever the DS frame is empty (and the valid count when both are).

You could also resolve the inconsistency between the confusion matrix and the totals by adding another row/column to the matrix specifically for these "empty annotations".

This would resolve the consistency issues, and slightly improve the UX issue, since you could see in the report where the extra annotation is coming from. But frankly, it still seems easier to me to just fudge the metrics.
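
The two options being weighed here, injecting a virtual matched annotation versus special-casing the metric formula, can be contrasted in a few lines of arithmetic. This is an illustration of the discussion, not the report code:

```python
def accuracy(valid: int, total: int) -> float:
    """Plain accuracy; an empty total falls back to 0, as in the current behavior."""
    return valid / total if total else 0.0

def fudged_accuracy(valid: int, total: int) -> float:
    """Alternative raised in the review: an empty-vs-empty frame is perfect by definition."""
    if total == 0:
        return 1.0
    return valid / total

# PR's approach: an empty frame contributes one virtual matched annotation,
# so its per-frame accuracy is 1/1 and the aggregate counts shift accordingly.
virtual_frame_accuracy = accuracy(1, 1)

# Fudged approach: counts stay at zero, only the formula changes.
fudged_frame_accuracy = fudged_accuracy(0, 0)
```

Both give an empty frame a per-frame accuracy of 1.0; the difference shows up in aggregation, where the virtual annotations also inflate the valid and total counts, while the fudged formula leaves the counts (and the confusion matrix) untouched.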

@codecov-commenter

codecov-commenter commented Nov 8, 2024

Codecov Report

Attention: Patch coverage is 25.45455% with 41 lines in your changes missing coverage. Please review.

Project coverage is 74.25%. Comparing base (a6fd1e5) to head (b42daa3).

Additional details and impacted files
@@           Coverage Diff            @@
##           develop    #8652   +/-   ##
========================================
  Coverage    74.25%   74.25%           
========================================
  Files          401      401           
  Lines        43465    43502   +37     
  Branches      3950     3950           
========================================
+ Hits         32273    32302   +29     
- Misses       11192    11200    +8     
Components Coverage Δ
cvat-ui 78.54% <20.00%> (+0.01%) ⬆️
cvat-server 70.58% <26.00%> (-0.01%) ⬇️

cvat/apps/quality_control/quality_reports.py (outdated, resolved)
cvat/apps/quality_control/quality_reports.py (resolved)
annotation_summary.valid_count += empty_frame_count
annotation_summary.total_count += empty_frame_count
annotation_summary.ds_count += empty_frame_count
annotation_summary.gt_count += empty_frame_count
Contributor

I'm not sure if this is really a useful thing to do here. Imagine a situation where there are 100 frames: frame 1 has 1 valid annotation and 2 total annotations, and every other frame is empty.

In this case, with match_empty_frames off:

  • frame 1 has accuracy 50%, others have 0%.
  • total accuracy is 50%.

With match_empty_frames on:

  • frame 1 has accuracy 50%, others have 100%.
  • total accuracy is (1+99)/(2+99) = 99%.

Do you think this jump in total accuracy would be expected by the user?

Contributor Author

It's hard to say what the expectations are; to me, the user just works in one of two modes: they either consider empty frames as annotated or not. But the second value seems more correct, so maybe something should be done for the case where the option is disabled.
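
The totals in the example above can be checked with plain arithmetic. This is a sketch of the aggregation, not the actual report code:

```python
# 100 frames: frame 1 has 1 valid out of 2 total annotations, frames 2-100 are empty.
empty_frames = 99

# match_empty_frames off: empty frames contribute nothing to the totals
accuracy_off = 1 / 2  # 50%

# match_empty_frames on: each empty frame adds one virtual matched annotation
valid_on = 1 + empty_frames   # 100
total_on = 2 + empty_frames   # 101
accuracy_on = valid_on / total_on  # roughly 99%
```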


sonarcloud bot commented Nov 8, 2024

@zhiltsov-max zhiltsov-max merged commit d315485 into develop Nov 11, 2024
34 checks passed
@zhiltsov-max zhiltsov-max deleted the zm/match-empty-frames branch November 11, 2024 12:20
@cvat-bot cvat-bot bot mentioned this pull request Nov 11, 2024