Add osx-arm64 env #13

Merged 9 commits into main on Apr 22, 2024

Conversation

@talmo (Contributor) commented Jun 24, 2023

  • Move all pip dependencies to pyproject.toml
  • Add some missing fields on pyproject.toml
  • Update conda environment files to prefer conda packages and use pyproject.toml dependencies for pip ones
  • Add osx-arm64 conda environment for M1/M2 Macs (see the quick check after this list)
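
A quick, hypothetical sanity check for the new osx-arm64 environment; this snippet is not part of the PR and assumes PyTorch ends up installed in the environment:

# Illustrative check after creating the osx-arm64 environment on an M1/M2 Mac.
import platform

import torch

print(platform.machine())                 # "arm64" on Apple silicon
print(torch.backends.mps.is_available())  # True if the Metal (MPS) backend is usable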

Summary by CodeRabbit

  • New Features

    • Introduced new data structures for better data handling and representation in tracking applications.
    • Enhanced dataset functionalities with new parameters and improved instance retrieval.
    • Added new functionalities in the inference module for better tracking and evaluation.
    • Updated model functionalities to support new data structures and tracking configurations.
    • Added new environment configurations for better dependency management.
  • Bug Fixes

    • Fixed issues related to data handling and processing across various modules.
  • Documentation

    • Updated docstrings for clarity and consistency across multiple files.
  • Refactor

    • Major refactoring in data handling, model processing, and configuration management to align with new features.
  • Tests

    • Updated tests to align with changes in data structures and functionalities.
  • Chores

    • Updated CI workflows, .gitignore, and project configurations to enhance development workflow and project management.

codecov bot commented Jun 24, 2023

Codecov Report

Attention: Patch coverage is 54.54545%, with 15 lines in your changes missing coverage. Please review.

Project coverage is 75.00%. Comparing base (e30a6b5) to head (b69f4dd).

❗ Current head b69f4dd differs from the pull request's most recent head 707a1a3. Consider uploading reports for commit 707a1a3 to get more accurate results.

Files                          Patch %   Missing lines
biogtr/training/train.py       20.00%    8 ⚠️
biogtr/config.py               37.50%    5 ⚠️
biogtr/models/model_utils.py   71.42%    2 ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main      #13      +/-   ##
==========================================
- Coverage   75.28%   75.00%   -0.29%     
==========================================
  Files          24       24              
  Lines        1513     1532      +19     
==========================================
+ Hits         1139     1149      +10     
- Misses        374      383       +9     
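
As a quick sanity check (not part of the Codecov output), the reported percentages follow directly from the line counts above; a minimal sketch:

# Illustrative arithmetic only; numbers taken from the report above.
patch_missing = 15
patch_coverage = 0.5454545                                 # 54.54545%
patch_lines = round(patch_missing / (1 - patch_coverage))  # ~33 changed lines with coverage info
patch_hits = patch_lines - patch_missing                   # 18 covered lines

project_hits, project_lines = 1149, 1532
print(patch_hits / patch_lines)                            # ~0.5455 -> 54.55% patch coverage
print(project_hits / project_lines)                        # 0.75    -> 75.00% project coverage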

☔ View full report in Codecov by Sentry.

@sheridana

Got it working on MPS/CPU, but now I'm hitting device errors in the local Ubuntu tests with CUDA. I'll need to debug before merging.
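
A minimal, hypothetical device-selection sketch illustrating the MPS/CPU/CUDA mismatch being debugged here; this is not code from the PR and assumes PyTorch >= 1.12 for MPS support:

import torch

def select_device() -> torch.device:
    """Pick the best available accelerator, falling back to CPU."""
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

device = select_device()
# Moving the model and every batch tensor to this one device avoids the
# "expected all tensors to be on the same device" class of errors.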

Co-authored-by: aaprasad <aaprasad.ucsd.edu>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

coderabbitai bot commented Apr 22, 2024

Warning

Rate Limit Exceeded

@aaprasad has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 14 minutes and 34 seconds before requesting another review.

How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.
Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.
Please see our FAQ for further information.

Commits
Files that changed from the base of the PR and between a6220b5 and 707a1a3.

Walkthrough

This update involves major enhancements across the biogtr project, focusing on data handling, model functionality, and reproducibility. Key changes include the introduction of new data structures (Frame and Instance), modifications to dataset classes for advanced chunking and seeding, and updates to the inference system to handle these structures. Workflow, environment configurations, and visualization scripts have also been refined to support these foundational changes.

Changes

  • .github/workflows/ci.yml, .gitignore: CI workflow streamlined; notebooks directory ignored.
  • biogtr/config.py, biogtr/inference/...: Enhanced model loading, logging, and inference handling; added error checks and directory setup.
  • biogtr/data_structures.py, biogtr/datasets/...: Introduced Frame and Instance data structures (see the sketch after this list); datasets restructured for advanced handling.
  • biogtr/models/..., biogtr/training/...: Model and training scripts updated for new data structures and improved reproducibility.
  • environment.yml, environment_cpu.yml, ...: Updated environment configurations for better dependency management.
  • tests/...: Test suites expanded and refactored to align with new data structures and functionalities.
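
For orientation, a minimal, hypothetical sketch of what the new Frame and Instance containers might look like. The get_features method mirrors its use in the code quoted later in this review; all other field names are illustrative assumptions, not the actual biogtr/data_structures.py API:

from dataclasses import dataclass, field
from typing import List, Optional

import torch


@dataclass
class Instance:
    """A single detected animal within one frame."""
    gt_track_id: int = -1                     # assumed field name
    pred_track_id: int = -1                   # assumed field name
    bbox: Optional[torch.Tensor] = None       # [y1, x1, y2, x2]
    features: Optional[torch.Tensor] = None   # re-ID embedding


@dataclass
class Frame:
    """All instances observed at one video frame."""
    video_id: int
    frame_id: int
    instances: List[Instance] = field(default_factory=list)

    def get_features(self) -> torch.Tensor:
        """Stack per-instance re-ID features into one (n_instances, d) tensor."""
        return torch.stack([i.features for i in self.instances])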

🐰✨
In the land of code and byte,
A rabbit hopped through the night.
With a flick of its ear, a tweak here and there,
New paths it paved, with utmost care.
Frames and instances, now dance in line,
In the grand ballet of the algorithm's design.
🌟🐇



coderabbitai bot left a comment


Actionable comments posted: 6

Actionable comments outside the diff hunks (1)
tests/test_training.py (1)

Line range hint 45-45: The local variable feats is assigned but never used. Consider removing it if it's not needed.

biogtr/inference/metrics.py (outdated; resolved)
Comment on lines +112 to +142
def pose_bbox(points: np.ndarray, bbox_size: Union[tuple[int], int]) -> torch.Tensor:
    """Calculate bbox around instance pose.

    Args:
        instance: a labeled instance in a frame,
        padding: the amount to pad around the pose crop
        im_shape: the size of the original image in (w,h)
        bbox_size: size of bbox either an int indicating square bbox or in (x,y)

    Returns:
        Bounding box in [y1, x1, y2, x2] format.
    """
    w, h = im_shape
    if isinstance(bbox_size, int):
        bbox_size = (bbox_size, bbox_size)
    # print(points)
    minx = np.nanmin(points[:, 0], axis=-1)
    miny = np.nanmin(points[:, -1], axis=-1)
    minpoints = np.array([minx, miny]).T

    points = torch.Tensor([[p.x, p.y] for p in instance.points])
    maxx = np.nanmax(points[:, 0], axis=-1)
    maxy = np.nanmax(points[:, -1], axis=-1)
    maxpoints = np.array([maxx, maxy]).T

    min_x = max(torch.nanmin(points[:, 0]) - padding, 0)
    min_y = max(torch.nanmin(points[:, 1]) - padding, 0)
    max_x = min(torch.nanmax(points[:, 0]) + padding, w)
    max_y = min(torch.nanmax(points[:, 1]) + padding, h)
    c = (minpoints + maxpoints) / 2

    bbox = torch.Tensor([min_y, min_x, max_y, max_x])
    bbox = torch.Tensor(
        [
            c[-1] - bbox_size[-1] / 2,
            c[0] - bbox_size[0] / 2,
            c[-1] + bbox_size[-1] / 2,
            c[0] + bbox_size[0] / 2,
        ]
    )

Refactor the pose_bbox function to improve readability and maintainability.

-    if isinstance(bbox_size, int):
-        bbox_size = (bbox_size, bbox_size)
-    minx = np.nanmin(points[:, 0], axis=-1)
-    miny = np.nanmin(points[:, -1], axis=-1)
-    minpoints = np.array([minx, miny]).T
-
-    maxx = np.nanmax(points[:, 0], axis=-1)
-    maxy = np.nanmax(points[:, -1], axis=-1)
-    maxpoints = np.array([maxx, maxy]).T
-
-    c = (minpoints + maxpoints) / 2
-
-    bbox = torch.Tensor(
-        [
-            c[-1] - bbox_size[-1] / 2,
-            c[0] - bbox_size[0] / 2,
-            c[-1] + bbox_size[-1] / 2,
-            c[0] + bbox_size[0] / 2,
-        ]
-    )
+    half_height, half_width = (bbox_size if isinstance(bbox_size, tuple) else (bbox_size, bbox_size))
+    min_coords = np.nanmin(points, axis=0)
+    max_coords = np.nanmax(points, axis=0)
+    center = (min_coords + max_coords) / 2
+    y1, x1 = center[1] - half_height // 2, center[0] - half_width // 2
+    y2, x2 = center[1] + half_height // 2, center[0] + half_width // 2
+    bbox = torch.tensor([y1, x1, y2, x2])

Suggested change
def pose_bbox(points: np.ndarray, bbox_size: Union[tuple[int], int]) -> torch.Tensor:
    """Calculate bbox around instance pose.

    Args:
        instance: a labeled instance in a frame,
        bbox_size: size of bbox either an int indicating square bbox or in (x,y)

    Returns:
        Bounding box in [y1, x1, y2, x2] format.
    """
    half_height, half_width = (bbox_size if isinstance(bbox_size, tuple) else (bbox_size, bbox_size))
    min_coords = np.nanmin(points, axis=0)
    max_coords = np.nanmax(points, axis=0)
    center = (min_coords + max_coords) / 2
    y1, x1 = center[1] - half_height // 2, center[0] - half_width // 2
    y2, x2 = center[1] + half_height // 2, center[0] + half_width // 2
    bbox = torch.tensor([y1, x1, y2, x2])
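
For reference, a small worked example of the suggested geometry (hypothetical values; note that neither the original nor the suggestion returns bbox, so a "return bbox" statement would still need to be added for the call below to work):

import numpy as np

points = np.array([[10.0, 20.0], [30.0, 60.0], [np.nan, np.nan]])  # (x, y) keypoints; NaNs are ignored
# min = (10, 20), max = (30, 60)  ->  center = (20, 40)
# with bbox_size=64: y1, x1 = 40 - 32, 20 - 32 and y2, x2 = 40 + 32, 20 + 32
bbox = pose_bbox(points, bbox_size=64)  # -> tensor([  8., -12.,  72.,  52.])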

Comment on lines +178 to +184
try:
    reid_features = torch.cat(
        [frame.get_features() for frame in frames], dim=0
    ).unsqueeze(0)
except Exception as e:
    print([[f.device for f in frame.get_features()] for frame in frames])
    raise (e)

Refactor error handling to provide more informative error messages and avoid generic exception catching.

-        except Exception as e:
-            print([[f.device for f in frame.get_features()] for frame in frames])
-            raise (e)
+        except RuntimeError as e:
+            print("Error during tensor concatenation:", e)
+            raise

Suggested change
try:
    reid_features = torch.cat(
        [frame.get_features() for frame in frames], dim=0
    ).unsqueeze(0)
except RuntimeError as e:
    print("Error during tensor concatenation:", e)
    raise
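
One possible variant (a sketch, not part of the review suggestion) that narrows the exception type while keeping the device diagnostics the original print statement provided:

try:
    reid_features = torch.cat(
        [frame.get_features() for frame in frames], dim=0
    ).unsqueeze(0)
except RuntimeError as e:
    # Collect the devices of all per-instance feature tensors for the error message.
    devices = {f.device for frame in frames for f in frame.get_features()}
    raise RuntimeError(
        f"Failed to concatenate re-ID features across devices {devices}"
    ) from e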


if not self.persistent_tracking:
    if self.verbose:
        warnings.warn(f"Clearing Queue after tracking")

Remove unnecessary warning message or add placeholders to the f-string.

-                warnings.warn(f"Clearing Queue after tracking")
+                warnings.warn("Clearing Queue after tracking")

Suggested change
warnings.warn("Clearing Queue after tracking")

tests/test_training.py (outdated; resolved)
biogtr/datasets/tracking_dataset.py (outdated; resolved)
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
@aaprasad merged commit 16add88 into main on Apr 22, 2024
3 checks passed