Aadi/device-fix #25
Conversation
* use accelerator from config rather than hard-coding it
**Codecov Report**

All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

```diff
@@                Coverage Diff                          @@
##           talmo/add-osx-arm64-env      #25      +/-   ##
===========================================================
+ Coverage                     75.00%   75.39%    +0.39%
===========================================================
  Files                            24       24
  Lines                          1532     1508       -24
===========================================================
- Hits                           1149     1137       -12
+ Misses                          383      371       -12
```

☔ View full report in Codecov by Sentry.
> **Important: Auto Review Skipped**
> Auto reviews are disabled on base/target branches other than the default branch. Please add the base/target branch pattern to the list of additional branches to be reviewed in the CodeRabbit settings.

**Walkthrough**

The changes across these files suggest a shift in how the codebase handles device assignment and initialization for PyTorch operations: a move away from explicit CUDA device checks and generator settings for data loading, toward a more streamlined approach that prepares for device-agnostic code and simplifies configuration.
**Review Status**

Actionable comments generated: 2

Configuration used: CodeRabbit UI

Files selected for processing (6):
- biogtr/config.py (1 hunks)
- biogtr/inference/post_processing.py (1 hunks)
- biogtr/models/embedding.py (2 hunks)
- biogtr/training/train.py (2 hunks)
- tests/fixtures/torch.py (1 hunks)
- tests/test_inference.py (1 hunks)
Additional comments: 9
**biogtr/config.py (2)**

- 174-184: The summary indicates that the `generator` variable and its parameter have been removed from the `DataLoader` initialization, but the provided hunk does not show this change. Please verify whether the summary is accurate or the hunk is missing the relevant changes.
- 174-184: The changes to the `get_dataloader` method, specifically setting the `pin_memory` flag based on the number of workers and calling `torch.multiprocessing.set_sharing_strategy("file_system")`, are good practices for optimizing data loading with multiple workers. This should help prevent "too many open files" errors and improve data transfer to the GPU when `pin_memory` is enabled.
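As a rough illustration of the pattern this comment describes, here is a minimal sketch; the function signature and defaults are assumptions for illustration, not the actual `biogtr` code:

```python
import torch
from torch.utils.data import DataLoader, Dataset


def get_dataloader(dataset: Dataset, num_workers: int, batch_size: int = 1) -> DataLoader:
    """Build a DataLoader, tying pin_memory to the worker count."""
    if num_workers > 0:
        # With worker processes, tensors are shared via file descriptors;
        # the "file_system" strategy avoids "too many open files" errors.
        torch.multiprocessing.set_sharing_strategy("file_system")

    return DataLoader(
        dataset,
        batch_size=batch_size,
        num_workers=num_workers,
        # pin_memory keeps batches in page-locked host memory,
        # which speeds up host-to-GPU copies.
        pin_memory=num_workers > 0,
    )
```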
**biogtr/inference/post_processing.py (1)**

- 148-154: The change to convert `id_inds` to the device of `valid` before performing matrix multiplication is a good practice to avoid device mismatch errors, and it aligns with the PR objective of addressing device-related issues.
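A minimal standalone sketch of that device-matching pattern (variable names follow the review; the shapes are invented for illustration):

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
valid = torch.rand(8, 8, device=device)   # may live on the GPU
id_inds = torch.eye(8)                    # created on CPU by default

# Moving one operand onto the other's device avoids the
# "Expected all tensors to be on the same device" RuntimeError.
result = valid @ id_inds.to(valid.device)
```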
**biogtr/models/embedding.py (2)**

- 147-154: The changes in the `_learned_pos_embedding` function correctly ensure that tensors are moved to the appropriate device, which aligns with the PR's goal of dynamic device configuration.
- 187-195: The changes in the `_learned_temp_embedding` function correctly ensure that tensors are moved to the appropriate device, which aligns with the PR's goal of dynamic device configuration.
**biogtr/training/train.py (3)**

- 18-18: The device selection logic has been commented out, which aligns with the PR's objective to use configuration files for device settings. Ensure that the `Config` class or other parts of the code now handle device configuration as intended.
- 27-27: The setting of the default device has been commented out. Verify that the new device management mechanism is correctly implemented and that the default device is set appropriately elsewhere in the code.
- 15-21: (Note: this review was outside the patches, so it was mapped to the patch with the greatest overlap; original lines 15-30.) The code now uses the `accelerator` and `devices` variables to configure the `trainer`. Ensure that these variables are set based on the new configuration approach and used consistently throughout the code.
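For context, the config-driven pattern the review refers to might look roughly like this, assuming PyTorch Lightning (`accelerator` and `devices` are real `Trainer` arguments; the config source here is hypothetical):

```python
import pytorch_lightning as pl

# Hypothetical values that would come from the Config object in the PR,
# instead of being hard-coded in train.py.
cfg = {"accelerator": "auto", "devices": 1}

trainer = pl.Trainer(
    accelerator=cfg["accelerator"],  # e.g. "cpu", "gpu", "mps", or "auto"
    devices=cfg["devices"],
)
```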
**tests/fixtures/torch.py (1)**

- 1-23: The changes in `tests/fixtures/torch.py` are consistent with the PR objective of addressing device errors and with the summary provided. The entire file has been commented out to temporarily suspend the test setup related to device configuration, due to a device error in `test_training`. Ensure that the root cause of the device error is identified and resolved before re-enabling this fixture.
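One plausible shape for the suspended fixture, assuming `torch.set_default_device` (PyTorch ≥ 2.0); this is a guess at the intent, not the file's actual contents:

```python
import pytest
import torch


@pytest.fixture
def set_default_device(request):
    """Set torch's default device for a test, restoring CPU afterwards.

    request.param carries the device string when the fixture is used
    with @pytest.mark.parametrize(..., indirect=True).
    """
    torch.set_default_device(request.param)
    yield request.param
    torch.set_default_device("cpu")
```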
**tests/test_inference.py** (outdated)

```python
#@pytest.mark.parametrize("set_default_device", ["cpu"], indirect=True)
def test_post_processing(): #set_default_device
```

The comment `#set_default_device` is potentially misleading, since the `set_default_device` parameter has been removed. Consider updating or removing this comment to avoid confusion:

```diff
- def test_post_processing(): #set_default_device
+ def test_post_processing():
```
**tests/test_inference.py** (outdated)

```python
assert len(instances_pred[test_frame]["pred_track_ids"] == num_detected)
```

The assertion is incorrectly formed; it should assert that the length of `pred_track_ids` equals `num_detected`:

```diff
- assert len(instances_pred[test_frame]["pred_track_ids"] == num_detected)
+ assert len(instances_pred[test_frame]["pred_track_ids"]) == num_detected
```
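The bug is that `==` on a tensor is elementwise, so the `len(...)` never depends on `num_detected`. A small demonstration (the values are invented for illustration):

```python
import torch

pred_track_ids = torch.tensor([0, 1, 2])

# Elementwise comparison returns a boolean tensor of the same length,
# so len(...) is 3 regardless of the value being compared against.
assert len(pred_track_ids == 99) == 3

# The corrected assertion compares the count itself.
assert len(pred_track_ids) == 3
```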
Co-authored-by: aaprasad <aaprasad.ucsd.edu>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
**Summary by CodeRabbit**

- Refactor
- Bug Fixes
- Tests
- Chores