Merge release/v1.2.0 to develop #5265

Conversation
Walkthrough: This pull request introduces a new utility function, hasFrame, for checking whether a frame number falls within a set of buffered frame ranges.
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (3)
app/packages/looker/src/lookers/utils.ts (1)
3-7: Add input validation for parameters
The function looks clean and efficient. Consider adding input validation to handle edge cases.
```diff
 export const hasFrame = (buffers: Buffers, frameNumber: number) => {
+  if (!Array.isArray(buffers) || !buffers.length) return false;
+  if (typeof frameNumber !== 'number' || isNaN(frameNumber)) return false;
   return buffers.some(
     ([start, end]) => start <= frameNumber && frameNumber <= end
   );
 };
```
app/packages/looker/src/lookers/utils.test.ts (1)
5-19: Enhance test coverage with edge cases
While the current tests cover basic scenarios, consider adding tests for edge cases.
```diff
 describe("looker utilities", () => {
   it("determines frame availability given a buffer list", () => {
     const BUFFERS: Buffers = [
       [1, 3],
       [5, 25],
     ];
     for (const frameNumber of [1, 10, 25]) {
       expect(hasFrame(BUFFERS, frameNumber)).toBe(true);
     }
     for (const frameNumber of [0, 4, 26]) {
       expect(hasFrame(BUFFERS, frameNumber)).toBe(false);
     }
   });
+
+  it("handles edge cases", () => {
+    expect(hasFrame([], 1)).toBe(false);
+    expect(hasFrame([[1, 1]], 1)).toBe(true);
+    expect(hasFrame([[1, 3]], NaN)).toBe(false);
+  });
 });
```
app/packages/looker/src/lookers/video.ts (1)
Line range hint 26-399: Consider architectural improvements for better maintainability
The VideoLooker class has grown complex with multiple responsibilities. Consider:
- Breaking down into smaller components (e.g., separate frame management, playback control)
- Adding TypeDoc comments to document complex logic
- Creating interfaces for the main responsibilities
This will improve maintainability and make the code easier to understand.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
app/packages/looker/src/lookers/utils.test.ts (1 hunks)
app/packages/looker/src/lookers/utils.ts (1 hunks)
app/packages/looker/src/lookers/video.ts (2 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
app/packages/looker/src/lookers/utils.ts, utils.test.ts, and video.ts match the pattern **/*.{ts,tsx}: Review the Typescript and React code for conformity with best practices in React, Recoil, Graphql, and Typescript. Highlight any deviations.
🔇 Additional comments (1)
app/packages/looker/src/lookers/video.ts (1)
398-398
: LGTM: Good refactoring to use the utility function
The change improves code maintainability by utilizing the extracted utility function.
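For readers following along in Python, the interval-containment check performed by the hasFrame utility reviewed above, including the guards suggested in the review, can be sketched as follows. This is an illustrative transcription, not project code; the function and parameter names are chosen for this sketch only:

```python
import math


def has_frame(buffers, frame_number):
    """Return True if frame_number lies inside any [start, end] buffer."""
    # Guard against malformed inputs, mirroring the suggested validation
    if not isinstance(buffers, list) or not buffers:
        return False
    if not isinstance(frame_number, (int, float)) or (
        isinstance(frame_number, float) and math.isnan(frame_number)
    ):
        return False
    # A frame is available when any buffered interval contains it
    return any(start <= frame_number <= end for start, end in buffers)
```

The inclusive comparison `start <= frame_number <= end` matches the TypeScript implementation's `start <= frameNumber && frameNumber <= end`.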
Support on-disk instance segmentations in SDK
Actionable comments posted: 3
🧹 Outside diff range and nitpick comments (7)
fiftyone/utils/labels.py (2)
233-234: Simplify nested if statements for Heatmap
Combine the nested if statements into a single condition to improve readability. Refactor as:
```diff
-elif isinstance(label, fol.Heatmap):
-    if label.map is not None:
+elif isinstance(label, fol.Heatmap) and label.map is not None:
```
🧰 Tools
🪛 Ruff (0.8.2)
233-234: Use a single `if` statement instead of nested `if` statements; combine `if` statements using `and` (SIM102)
309-310: Simplify nested elif and if statements for Heatmap
Combine the conditions into a single elif statement for better clarity. Refactor as:
```diff
-elif isinstance(label, fol.Heatmap):
-    if label.map_path is not None:
+elif isinstance(label, fol.Heatmap) and label.map_path is not None:
```
🧰 Tools
🪛 Ruff (0.8.2)
309-310: Use a single `if` statement instead of nested `if` statements (SIM102)
fiftyone/utils/data/exporters.py (2)
2047-2050: Simplify if-else block using ternary operator
Consider using a ternary operator to simplify the assignment of _value. Refactor as:
```diff
-if key is not None:
-    _value = _d.get(key, None)
-else:
-    _value = _d
+_value = _d.get(key, None) if key is not None else _d
```
🧰 Tools
🪛 Ruff (0.8.2)
2047-2050: Use ternary operator `_value = _d.get(key, None) if key is not None else _d` instead of `if`-`else` block (SIM108)
2355-2358: Simplify if-else block using ternary operator
You can apply the same simplification here to make the code more concise. Refactor as:
```diff
-if key is not None:
-    _value = _d.get(key, None)
-else:
-    _value = _d
+_value = _d.get(key, None) if key is not None else _d
```
🧰 Tools
🪛 Ruff (0.8.2)
2355-2358: Use ternary operator `_value = _d.get(key, None) if key is not None else _d` instead of `if`-`else` block (SIM108)
fiftyone/utils/data/importers.py (1)
2168-2169: Simplify nested if statements using the and operator
You can combine the nested if statements into a single condition to enhance code clarity. Refactor as:
```diff
-if not os.path.isabs(value):
-    pydash.set_(sd, field_name, os.path.join(rel_dir, value))
+if value and not os.path.isabs(value):
+    pydash.set_(sd, field_name, os.path.join(rel_dir, value))
```
🧰 Tools
🪛 Ruff (0.8.2)
2168-2169: Use a single `if` statement instead of nested `if` statements; combine `if` statements using `and` (SIM102)
tests/unittests/import_export_tests.py (1)
2221-2229: LGTM! Consider adding docstrings to the test methods.
The new test methods for instance segmentation are well-structured and comprehensive. They test both standard and legacy FiftyOne dataset types. Consider adding docstrings to document the test methods' purpose and test cases.
fiftyone/core/collections.py (1)
Line range hint 10665-10710: Consider optimizing the blacklist handling logic
The blacklist filtering implementation can be simplified using a ternary operator:
```diff
-if etau.is_container(blacklist):
-    blacklist = set(blacklist)
-else:
-    blacklist = {blacklist}
+blacklist = set(blacklist) if etau.is_container(blacklist) else {blacklist}
```
Otherwise, the changes to add blacklist filtering capability look good.
🧰 Tools
🪛 Ruff (0.8.2)
10702-10705: Use ternary operator `blacklist = set(blacklist) if etau.is_container(blacklist) else {blacklist}` instead of `if`-`else` block (SIM108)
📒 Files selected for processing (9)
docs/source/user_guide/using_datasets.rst (3 hunks)
fiftyone/constants.py (1 hunks)
fiftyone/core/collections.py (4 hunks)
fiftyone/core/labels.py (2 hunks)
fiftyone/utils/data/exporters.py (5 hunks)
fiftyone/utils/data/importers.py (2 hunks)
fiftyone/utils/labels.py (10 hunks)
setup.py (1 hunks)
tests/unittests/import_export_tests.py (1 hunks)
🧰 Additional context used
🪛 Ruff (0.8.2)
fiftyone/utils/data/importers.py
2168-2169: Use a single `if` statement instead of nested `if` statements; combine `if` statements using `and` (SIM102)
fiftyone/core/collections.py
10702-10705: Use ternary operator `blacklist = set(blacklist) if etau.is_container(blacklist) else {blacklist}` instead of `if`-`else` block (SIM108)
fiftyone/utils/data/exporters.py
2047-2050: Use ternary operator `_value = _d.get(key, None) if key is not None else _d` instead of `if`-`else` block (SIM108)
2355-2358: Use ternary operator `_value = _d.get(key, None) if key is not None else _d` instead of `if`-`else` block (SIM108)
fiftyone/utils/labels.py
214-225: Combine `if` branches using logical `or` operator (SIM114)
233-234: Use a single `if` statement instead of nested `if` statements; combine `if` statements using `and` (SIM102)
288-299: Combine `if` branches using logical `or` operator (SIM114)
309-310: Use a single `if` statement instead of nested `if` statements (SIM102)
🔇 Additional comments (15)
fiftyone/constants.py (1)
45-45: Verify version compatibility implications
The compatibility range has been extended to support versions up to 1.4. Please ensure that:
- This change follows semantic versioning principles
- All features in the upcoming 1.3.x releases will maintain backward compatibility
- The documentation is updated to reflect this compatibility change
✅ Verification successful
Let me gather more information about version-related changes and documentation.
Version compatibility change is aligned with releases and usage
The version compatibility extension to 1.4 is justified based on the evidence:
- Release notes show FiftyOne Teams versions up to 1.4.5 are already released
- The compatibility range is used consistently in migrations and CLI for version checks
- A previous reversion in 1.3.1 shows careful consideration of backward compatibility
The change follows semantic versioning principles and aligns with existing releases. No issues found.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check if documentation reflects the new version compatibility
rg -g "*.md" -g "*.rst" "compatible.*version" .
Length of output: 844
Script:
#!/bin/bash
# Check setup.py for version information
rg "version|COMPATIBLE" setup.py
# Check for any version-related changes in documentation
rg "1\.[34]" docs/source/release-notes.rst
# Look for version compatibility checks in the codebase
ast-grep --pattern 'COMPATIBLE_VERSIONS'
Length of output: 1998
setup.py (1)
Line range hint 19-27
: LGTM: Version validation logic
The get_version() function correctly validates that any RELEASE_VERSION environment variable matches the package version.
fiftyone/utils/labels.py (2)
158-169
: Docstring Update for export_segmentations
The function docstring now correctly includes Detection
and Detections
as supported label types, reflecting the expanded functionality.
188-190
: Validation Now Includes Detection
and Detections
The validate_collection_label_fields
call now includes fol.Detection
and fol.Detections
in the tuple of supported types, ensuring proper validation of the in_field
.
fiftyone/utils/data/exporters.py (3)
15-17
: Add pydash
Import Statement
The import of pydash
is added to support enhanced data manipulation with safer access to nested attributes.
1897-1900
: Update log_collection
Method to Use blacklist
Parameter
The log_collection
method now uses the blacklist
parameter instead of include_filepath
, aligning with the updates made in fiftyone/core/collections.py
. This change enhances flexibility in media field filtering.
2034-2065
: Use pydash
for Safer Access to Nested Attributes
The refactored code uses pydash.get
and pydash.set_
for accessing and setting values in nested dictionaries, improving robustness when dealing with complex data structures.
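For readers unfamiliar with the pattern relied on above: `pydash.get` and `pydash.set_` provide safe dotted-path access into nested dicts. Their core behavior can be approximated in plain Python, as a sketch only; the helper names below are hypothetical and this is not the pydash implementation:

```python
def get_path(d, path, default=None):
    """Safely read a dotted path such as 'metadata.mask_path' from nested dicts."""
    cur = d
    for key in path.split("."):
        # Bail out with the default instead of raising on missing keys
        if not isinstance(cur, dict) or key not in cur:
            return default
        cur = cur[key]
    return cur


def set_path(d, path, value):
    """Set a dotted path, creating intermediate dicts as needed."""
    keys = path.split(".")
    cur = d
    for key in keys[:-1]:
        cur = cur.setdefault(key, {})
    cur[keys[-1]] = value
    return d
```

This tolerance for missing intermediate keys is what makes the pydash-based refactor safer than chained `[]` indexing when processing heterogeneous sample dicts.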
fiftyone/core/labels.py (2)
412-413
: Update mask_path
Documentation in Detection
Class
The documentation now specifies that mask_path
should point to a single-channel PNG image where non-zero values represent the instance's extent. This provides clearer guidance to users.
536-537
: Update Error Message in to_segmentation
Method
The error message now correctly indicates that either the mask
or mask_path
attribute must be populated to convert to a segmentation, enhancing the clarity of the exception.
fiftyone/utils/data/importers.py (2)
17-17
: Add pydash
Import Statement
The import of pydash
is added to utilize its methods for handling nested dictionary structures, which enhances code safety and readability.
2155-2191
: Refactor _parse_media_fields
to Use pydash
The _parse_media_fields
function now uses pydash.get
and pydash.set_
for accessing and setting values in dictionaries. This change improves the handling of nested media fields by providing safer and more robust dictionary operations.
tests/unittests/import_export_tests.py (1)
2233-2329
: LGTM! Comprehensive test implementation.
The helper method thoroughly tests all aspects of instance segmentation functionality including:
- In-database instance segmentations
- Conversion to on-disk format
- On-disk instance segmentations
- Media export options with various configurations
The assertions effectively verify the expected behavior in each case.
fiftyone/core/collections.py (1)
Line range hint 10715-10740
: LGTM! Well-structured media field parsing logic
The new _parse_media_field
method provides a robust way to parse and validate media field paths. The method properly handles:
- App media fields
- Frame fields
- List fields
- Field resolution and validation
docs/source/user_guide/using_datasets.rst (2)
Line range hint 2545-2611
: LGTM! Well-documented instance segmentation mask storage options.
The documentation clearly explains both approaches for storing instance segmentation masks (in-database or on-disk) and provides helpful implementation details about mask formats and requirements.
2620-2621
: LGTM! Good cross-reference to custom attributes.
The addition maintains documentation consistency by referencing the label customization capabilities.
```diff
@@ -13,7 +13,7 @@
 from setuptools import setup, find_packages


-VERSION = "1.2.0"
+VERSION = "1.3.0"
```
💡 Codebase verification
Version 1.3.0 update needs additional documentation
The version bump to 1.3.0 is reflected in setup.py, but there are missing elements that need attention:
- No CHANGELOG.md or release notes file was found in the root directory to document changes for version 1.3.0
- The version is not consistently reflected in configuration files (setup.cfg, pyproject.toml)
- While version 1.3.0 is referenced in docs/source/release-notes.rst for Teams, the main product's release notes need to be updated
Please:
- Create a CHANGELOG.md or update release-notes.rst to document changes in version 1.3.0
- Update version in all relevant configuration files
- Document any breaking changes or dependency updates
🔗 Analysis chain
Verify version consistency across the project
The version bump from 1.2.0 to 1.3.0 indicates a minor release. Please ensure that:
- All version references are updated consistently
- Release notes document the changes in 1.3.0
- All internal package dependencies support 1.3.0
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check for any remaining references to version 1.2.0
rg "1\.2\.0" .
# Check version references in package files
echo "Checking package.json files..."
fd package.json --exec cat {} \; | grep "version"
# Check version in documentation
echo "Checking version in docs..."
fd -e md -e rst --exec grep -l "version" {} \;
Length of output: 14183
Script:
#!/bin/bash
# Check for version references in Python files and configuration files
echo "Checking Python files for version references..."
rg "1\.3\.0|version.*1\.3\.0" .
# Check for changelog/release notes
echo "Checking for changelog or release notes..."
fd -e md -e rst CHANGELOG -e md -e rst RELEASE
fd -e md -e rst changelog -e md -e rst release
# Check setup.cfg and pyproject.toml
echo "Checking setup.cfg and pyproject.toml..."
fd "setup.cfg|pyproject.toml" --exec cat {} \;
Length of output: 5586
```diff
 if isinstance(label, fol.Segmentation):
     if label.mask_path is not None:
         del_path = label.mask_path if delete_images else None
         label.import_mask(update=update)
         if del_path:
             etau.delete_file(del_path)
-else:
+elif isinstance(label, fol.Detection):
     if label.mask_path is not None:
         del_path = label.mask_path if delete_images else None
         label.import_mask(update=update)
         if del_path:
             etau.delete_file(del_path)
```
🛠️ Refactor suggestion
Refactor redundant if branches for importing masks
Similar redundant if statements are present when importing masks for fol.Segmentation and fol.Detection. Consider combining them to reduce code duplication. Refactor as:
```diff
-if isinstance(label, fol.Segmentation):
-    if label.mask_path is not None:
-        del_path = label.mask_path if delete_images else None
-        label.import_mask(update=update)
-        if del_path:
-            etau.delete_file(del_path)
-elif isinstance(label, fol.Detection):
-    if label.mask_path is not None:
-        del_path = label.mask_path if delete_images else None
-        label.import_mask(update=update)
-        if del_path:
-            etau.delete_file(del_path)
+if isinstance(label, (fol.Segmentation, fol.Detection)) and label.mask_path is not None:
+    del_path = label.mask_path if delete_images else None
+    label.import_mask(update=update)
+    if del_path:
+        etau.delete_file(del_path)
```
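The tuple-based isinstance pattern recommended above can be demonstrated in isolation. The stand-in classes below are hypothetical, since fol is not imported here; they only mimic the mask_path attribute relevant to the check:

```python
class Segmentation:
    """Stand-in for fol.Segmentation (illustration only)."""

    def __init__(self, mask_path=None):
        self.mask_path = mask_path


class Detection:
    """Stand-in for fol.Detection (illustration only)."""

    def __init__(self, mask_path=None):
        self.mask_path = mask_path


def needs_mask_import(label):
    # isinstance() accepts a tuple of types, so one combined check
    # replaces the two duplicated branches; `and` short-circuits, so
    # label.mask_path is only read when the type matches
    return isinstance(label, (Segmentation, Detection)) and label.mask_path is not None
```

This is why the refactor is behavior-preserving: both original branches ran identical bodies, differing only in the type being matched.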
```python
if isinstance(label, fol.Segmentation):
    if label.mask is not None:
        outpath = filename_maker.get_output_path(
            image.filepath, output_ext=".png"
        )
        label.export_mask(outpath, update=update)
elif isinstance(label, fol.Detection):
    if label.mask is not None:
        outpath = filename_maker.get_output_path(
            image.filepath, output_ext=".png"
        )
        label.export_mask(outpath, update=update)
```
🛠️ Refactor suggestion
Refactor redundant if branches for exporting masks
The code repeats similar logic for exporting masks for fol.Segmentation, fol.Detection, and fol.Detections. Consider combining the if statements to reduce redundancy and improve readability. You can refactor the code as follows:
```diff
-if isinstance(label, fol.Segmentation):
-    if label.mask is not None:
-        outpath = filename_maker.get_output_path(
-            image.filepath, output_ext=".png"
-        )
-        label.export_mask(outpath, update=update)
-elif isinstance(label, fol.Detection):
-    if label.mask is not None:
-        outpath = filename_maker.get_output_path(
-            image.filepath, output_ext=".png"
-        )
-        label.export_mask(outpath, update=update)
-elif isinstance(label, fol.Detections):
-    for detection in label.detections:
-        if detection.mask is not None:
-            outpath = filename_maker.get_output_path(
-                image.filepath, output_ext=".png"
-            )
-            detection.export_mask(outpath, update=update)
+if isinstance(label, (fol.Segmentation, fol.Detection)) and label.mask is not None:
+    outpath = filename_maker.get_output_path(
+        image.filepath, output_ext=".png"
+    )
+    label.export_mask(outpath, update=update)
+elif isinstance(label, fol.Detections):
+    for detection in label.detections:
+        if detection.mask is not None:
+            outpath = filename_maker.get_output_path(
+                image.filepath, output_ext=".png"
+            )
+            detection.export_mask(outpath, update=update)
```
Committable suggestion skipped: line range outside the PR's diff.
* sort
* also here

---------

Co-authored-by: Justin Newberry <justin.newberry@srd.subaru.co.jp>
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (2)
.github/workflows/test.yml (2)
Line range hint 38-44: Document Windows exclusion rationale
The workflow excludes all Windows testing for Python 3.9, 3.10, and 3.11. Consider adding a comment explaining why these combinations are excluded to help future maintainers understand the reasoning.
```diff
       exclude:
+        # Windows tests are excluded for these Python versions because...
         - os: windows-latest
           python: "3.9"
         - os: windows-latest
           python: "3.10"
         - os: windows-latest
           python: "3.11"
```
Line range hint 108-112: Enhance isolated tests error handling
While running isolated tests even after failures is good practice, consider enhancing the error reporting and test isolation:
- Add timeout to prevent hanging tests
- Collect and aggregate test results for better reporting
```diff
       - name: Run isolated tests
         if: success() || failure()
         run: |
+          # Set timeout and collect results
+          export PYTEST_TIMEOUT=300
           find tests/isolated/ -name '*.py' -print0 | xargs -0 --verbose -n1 python tests/utils/pytest_wrapper.py --verbose
+            | tee isolated_tests.log
```
📒 Files selected for processing (4)
.github/workflows/test.yml (1 hunks)
fiftyone/__public__.py (1 hunks)
fiftyone/core/stages.py (1 hunks)
package/db/setup.py (2 hunks)
✅ Files skipped from review due to trivial changes (1)
- fiftyone/core/stages.py
🧰 Additional context used
🪛 Ruff (0.8.2)
fiftyone/__public__.py
226-226: .core.stages.Shuffle
imported but unused
Remove unused import
(F401)
🔇 Additional comments (4)
fiftyone/__public__.py (1)
226-226
: LGTM! Verify API documentation updates
The re-addition of Shuffle
to the public interface maintains API consistency with its reclassification in the core stages.
Let's verify if the API documentation needs updates:
✅ Verification successful
LGTM! No API documentation updates needed
The re-addition of Shuffle
to the public interface is consistent with its implementation in core/stages.py
as a core view stage. The class is properly documented with docstrings and examples, and its usage is demonstrated in unit tests. No additional API documentation updates are required.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check if API documentation needs updates for Shuffle stage
# Look for Shuffle documentation in docs
rg -A 5 "Shuffle.*stage" docs/
# Check if there are any examples using Shuffle
rg "\.shuffle\(" examples/
Length of output: 111
Script:
#!/bin/bash
# Let's try a broader search for Shuffle documentation and usage
# Search for Shuffle in all markdown files
rg -t md "Shuffle" .
# Search for shuffle-related code in Python files
rg "shuffle" -g "*.py" -A 3
# Look for Shuffle class definition to understand its documentation
ast-grep --pattern 'class Shuffle'
Length of output: 66070
Script:
#!/bin/bash
# Let's look for the Shuffle class definition and its documentation
ast-grep --pattern 'class Shuffle {
$$$
}'
# Search for Shuffle in Python files with more context
rg "class Shuffle" -g "*.py" -B 2 -A 10
Length of output: 631
package/db/setup.py (1)
127-128
: Verify MongoDB 8.0.4 compatibility and Ubuntu 24.04 support
The URLs have been updated to use MongoDB 8.0.4 with Ubuntu 24.04 (ubuntu2404). Please verify:
- MongoDB 8.0.4 compatibility with the current codebase
- Official support status for Ubuntu 24.04 (Noble Numbat)
✅ Verification successful
Let me check the specific version requirements and compatibility checks in those files.
Let me check the minimum MongoDB version requirement and any other version-specific configurations.
MongoDB 8.0.4 and Ubuntu 24.04 compatibility verified
The changes are compatible with the codebase for the following reasons:
- The minimum required MongoDB version is 4.4, and MongoDB 8.0.4 exceeds this requirement
- Ubuntu 24.04 support has been officially added as noted in the release notes
- The version upgrade follows the existing pattern in the codebase for platform-specific MongoDB distributions
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check for any MongoDB version-specific code or dependencies
rg -l "mongodb.*version|mongo.*version"
# Look for Ubuntu version checks or platform-specific code
rg -l "ubuntu.*24|noble|24\.04"
Length of output: 230
Script:
#!/bin/bash
# Check MongoDB version requirements and compatibility checks
rg -A 5 -B 5 "mongodb.*version|mongo.*version" install.bash package/db/tests/test_db.py fiftyone/core/odm/database.py
# Check Ubuntu 24.04 related changes and context
rg -A 5 -B 5 "ubuntu.*24|noble|24\.04" docs/source/release-notes.rst package/db/setup.py
# Check if there are any MongoDB version constraints in requirements
fd "requirements" -t f -x cat {}
Length of output: 4134
Script:
#!/bin/bash
# Check for minimum MongoDB version requirement
rg "MIN_MONGODB_VERSION" -A 2 -B 2
# Check requirements files content for MongoDB version constraints
cat requirements/prod.txt
# Look for any MongoDB configuration or version settings
rg "MONGODB.*VERSION" -A 2 -B 2
Length of output: 3546
.github/workflows/test.yml (2)
7-7
: Verify compatibility with ubuntu-latest
The switch from ubuntu-20.04
to ubuntu-latest
ensures access to the latest security updates and package versions. However, please verify that all app dependencies are compatible with the latest Ubuntu version to prevent potential CI issues.
✅ Verification successful
Ubuntu version change is consistent with repository patterns
The change to ubuntu-latest
in the test workflow aligns with the repository's existing practices. Most workflows already use ubuntu-latest
, including critical jobs like PR checks, builds, and deployments. Only specific jobs that may have version-sensitive dependencies (like documentation builds and GraphQL-related tasks) explicitly use ubuntu-20.04
. The test job doesn't appear to have such constraints, making this a safe improvement.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check for any OS-specific dependencies or version constraints
rg -l "ubuntu-20.04|ubuntu-latest" .github/workflows/
Length of output: 357
Script:
#!/bin/bash
# Check the current Ubuntu version usage across workflow files
rg "runs-on:\s*(ubuntu-20\.04|ubuntu-latest)" .github/workflows/ -A 2
Length of output: 3101
Line range hint 31-32: Verify if 'ubuntu-latest-m' is correct
The matrix includes ubuntu-latest-m, which seems unusual. Should this be ubuntu-latest? The -m suffix is not a standard GitHub Actions runner identifier.
```diff
@@ -175,7 +175,7 @@ def _get_download():
 MONGODB_BINARIES = ["mongod"]


-VERSION = "1.1.7"
+VERSION = "1.2.0"
```
💡 Codebase verification
Based on the gathered information, I can now provide the final response:
Version update to 1.2.0 is inconsistent with other package versions
The version update to "1.2.0" in package/db/setup.py appears to be inconsistent with other package versions in the codebase:
- setup.py has version "1.3.0"
- package/desktop/setup.py has version "0.35.0"
- package/graphql/setup.py has version "0.1.0"
This inconsistency could lead to versioning issues across the packages.
🔗 Analysis chain
Version update aligns with release branch
The version update to "1.2.0" correctly matches the release branch being merged.
Let me gather more information about version references in the codebase.
Let me search for any version-related files in the repository to ensure version consistency.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Verify version consistency across the project
rg -g '!package/db/setup.py' '"1.2.0"' || echo "No other occurrences found"
Length of output: 4959
Script:
#!/bin/bash
# Check for version references in setup.py files and package metadata
fd "setup.py|pyproject.toml|__init__.py" --exec cat {} \; | rg -i "version|release"
# Check for any version-related environment variables in the codebase
rg "RELEASE_VERSION|VERSION" -A 2 -B 2
Length of output: 74033
Script:
#!/bin/bash
# Check for version files and changelog
fd "version|VERSION|CHANGELOG" --type f --exec head -n 20 {} \;
# Check for package version in pyproject.toml or setup.cfg
fd "pyproject.toml|setup.cfg" --exec cat {} \;
Length of output: 1977
fix grayscale segmentation regression + RGB masks recoloring issue
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (1)
app/packages/looker/src/worker/canvas-decoder.test.ts (1)
25-32: Consider adding more edge cases for non-grayscale detection
While the current test case is valid, consider adding tests for:
- Maximum RGB values (255, 255, 255)
- Edge cases like (1, 0, 0) or (0, 0, 1)
📒 Files selected for processing (3)
app/packages/looker/src/worker/canvas-decoder.test.ts (1 hunks)
app/packages/looker/src/worker/canvas-decoder.ts (2 hunks)
app/packages/looker/src/worker/painter.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
app/packages/looker/src/worker/painter.ts (1)
Pattern `**/*.{ts,tsx}`: Review the Typescript and React code for conformity with best practices in React, Recoil, Graphql, and Typescript. Highlight any deviations.
app/packages/looker/src/worker/canvas-decoder.test.ts (1)
Pattern `**/*.{ts,tsx}`: Review the Typescript and React code for conformity with best practices in React, Recoil, Graphql, and Typescript. Highlight any deviations.
app/packages/looker/src/worker/canvas-decoder.ts (1)
Pattern `**/*.{ts,tsx}`: Review the Typescript and React code for conformity with best practices in React, Recoil, Graphql, and Typescript. Highlight any deviations.
🔇 Additional comments (6)
app/packages/looker/src/worker/canvas-decoder.test.ts (3)

4-8: LGTM! Well-structured test helper function. The createData utility function is well-implemented for converting pixel arrays to Uint8ClampedArray for testing purposes.

10-23: LGTM! Comprehensive test cases for grayscale detection. The test cases effectively validate:
- Perfect grayscale detection
- Alpha channel validation

34-42: LGTM! Well-designed performance test case. The test effectively validates the sampling behavior with a large dataset, ensuring the minimum sampling requirement (1%) is met.
app/packages/looker/src/worker/canvas-decoder.ts (2)

3-22: LGTM! Well-implemented grayscale detection with performance optimization. The implementation efficiently samples pixels to determine if an image is grayscale, with clear documentation and proper performance considerations.

53-56: LGTM! Clear and type-safe return statement. The return statement properly includes all necessary information about the image data.
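The sampling approach described above can be sketched as follows. This is a minimal illustration of the idea, not the actual implementation in canvas-decoder.ts: the sampling rate, stride calculation, and exact checks there may differ. It assumes RGBA input (4 bytes per pixel) and treats any color-channel mismatch or non-opaque alpha as non-grayscale.

```typescript
// Sketch of sampling-based grayscale detection (assumed shape; the real
// implementation lives in canvas-decoder.ts and may differ in detail).
// `data` is RGBA pixel data: 4 bytes per pixel.
const isGrayscale = (data: Uint8ClampedArray): boolean => {
  const numPixels = data.length / 4;
  // Sample roughly 1% of pixels, but always at least one.
  const sampleSize = Math.max(1, Math.floor(numPixels * 0.01));
  const stride = Math.max(1, Math.floor(numPixels / sampleSize));

  for (let p = 0; p < numPixels; p += stride) {
    const i = p * 4;
    const r = data[i];
    const g = data[i + 1];
    const b = data[i + 2];
    const a = data[i + 3];
    // Any channel mismatch, or non-opaque alpha, means not grayscale.
    if (r !== g || g !== b || a !== 255) {
      return false;
    }
  }
  return true;
};
```

Note that sampling trades certainty for speed: a mask whose non-gray pixels all fall between sampled positions would be misclassified, which is why the minimum sampling requirement matters. Also, pure white (255, 255, 255) satisfies r === g === b and is itself grayscale.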
app/packages/looker/src/worker/painter.ts (1)

281-288: 🛠️ Refactor suggestion — Enhance the guard clause robustness. While the guard clause correctly handles RGBA masks, consider adding additional checks:

```diff
 if (
   maskData.channels === 4 &&
-  targets.length === label.mask.image.byteLength
+  targets.length === label.mask.image.byteLength &&
+  targets.length % 4 === 0 && // Ensure proper RGBA alignment
+  label.mask.image.byteLength > 0 // Ensure valid image data
 )
```
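Pulled out as a standalone predicate, the strengthened guard might look like the sketch below. The helper name is hypothetical — painter.ts performs these checks inline on maskData, targets, and label.mask.image — but it makes each condition testable in isolation.

```typescript
// Hypothetical standalone version of the strengthened guard (for
// illustration only; the real checks are inline in painter.ts).
const isRecolorableRgbaMask = (
  channels: number,
  targetsLength: number,
  imageByteLength: number
): boolean =>
  channels === 4 && // RGBA mask
  targetsLength === imageByteLength && // buffers agree in size
  targetsLength % 4 === 0 && // proper RGBA alignment: 4 bytes per pixel
  imageByteLength > 0; // non-empty image data
```

Separating the predicate this way also documents each rejection reason, which the inline boolean expression leaves implicit.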
```ts
// for nongrayscale images, channel is guaranteed to be 4 (RGBA)
const channels = isGrayscale(imageData.data) ? 1 : 4;

// dispose
imageBitmap.close();

if (channels === 1) {
  // get rid of the G, B, and A channels, new buffer will be 1/4 the size
  const data = new Uint8ClampedArray(width * height);
  for (let i = 0; i < data.length; i++) {
    data[i] = imageData.data[i * 4];
  }
  imageData.data.set(data);
}
```
🛠️ Refactor suggestion

Add error handling for buffer manipulation. While the optimization for grayscale images is good, consider adding error handling (note that `channels` must then be declared with `let` rather than `const` for the fallback reassignment to compile):

```diff
 if (channels === 1) {
+  try {
     const data = new Uint8ClampedArray(width * height);
     for (let i = 0; i < data.length; i++) {
       data[i] = imageData.data[i * 4];
     }
     imageData.data.set(data);
+  } catch (error) {
+    console.error('Failed to optimize grayscale image:', error);
+    // Fallback to original RGBA data
+    channels = 4;
+  }
 }
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```ts
// for nongrayscale images, channel is guaranteed to be 4 (RGBA)
// (declared with let so the catch block can fall back to 4 channels)
let channels = isGrayscale(imageData.data) ? 1 : 4;

// dispose
imageBitmap.close();

if (channels === 1) {
  try {
    // get rid of the G, B, and A channels, new buffer will be 1/4 the size
    const data = new Uint8ClampedArray(width * height);
    for (let i = 0; i < data.length; i++) {
      data[i] = imageData.data[i * 4];
    }
    imageData.data.set(data);
  } catch (error) {
    console.error("Failed to optimize grayscale image:", error);
    // Fallback to original RGBA data
    channels = 4;
  }
}
```
Merge `release/v1.2.0` to `develop`
Summary by CodeRabbit
- New utility hasFrame for checking frame availability.
- New utility isGrayscale for detecting grayscale images.
- Improved mask handling in the Segmentation method.
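The hasFrame utility summarized above checks whether a frame number falls inside any buffered range. A minimal sketch of the idea, assuming Buffers is an array of inclusive [start, end] frame ranges (the type as used in the looker package):

```typescript
// Sketch of the hasFrame utility: Buffers is assumed to be a list of
// inclusive [start, end] frame ranges.
type Buffers = [number, number][];

const hasFrame = (buffers: Buffers, frameNumber: number): boolean =>
  buffers.some(([start, end]) => start <= frameNumber && frameNumber <= end);
```

Both endpoints are treated as inclusive, so a frame equal to a range's end still counts as buffered, and an empty buffer list reports no frames available.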