Aadi/sample-efficiency #24
Conversation
Codecov Report

Attention: Patch coverage is

Additional details and impacted files

@@            Coverage Diff             @@
##    aadi-device-fix      #24   +/-   ##
===================================================
+ Coverage    75.39%   75.65%   +0.25%
===================================================
  Files           24       24
  Lines         1508     1532      +24
===================================================
+ Hits          1137     1159      +22
- Misses         371      373       +2
===================================================

☔ View full report in Codecov by Sentry.
Co-authored-by: aaprasad <aaprasad.ucsd.edu>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Important: Auto Review Skipped

Auto reviews are disabled on base/target branches other than the default branch. Please add the base/target branch pattern to the list of additional branches to be reviewed in the settings. Please check the settings in the CodeRabbit UI. You can disable this status message by setting the

Walkthrough

The recent updates focus on enhancing the BioGTR framework by refining data structures, improving dataset handling, and optimizing inference and tracking functionality. Major enhancements include the integration of new data classes for better data representation and management, adjustments in the dataset modules for more efficient data processing, and significant refinements in inference methods to support robust tracking and evaluation.

Changes
Thank you for using CodeRabbit. We offer it for free to the OSS community and would appreciate your support in helping us grow. If you find it useful, would you consider giving us a shout-out on your favorite social media?

Tips

Chat

There are 3 ways to chat with CodeRabbit.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (invoked as PR comments)

Additionally, you can add a CodeRabbit Configuration File (
@coderabbitai review
Actionable comments posted: 3
Actionable comments outside the diff hunks (1)
biogtr/models/attention_head.py (1)
Line range hint 9-9: Consider addressing the TODO about adding named tensors. Would you like me to help implement this feature, or should I open a GitHub issue to track this task?
@@ -80,7 +81,7 @@ def main(cfg: DictConfig):
     if cfg.view_batch.no_train:
         return
 
-    model = train_cfg.get_gtr_runner()
+    model = train_cfg.get_gtr_runner()  # TODO see if we can use torch.compile()
Consider addressing the TODO about using torch.compile(). Would you like me to help implement this feature, or should I open a GitHub issue to track this task?
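As a rough illustration of what acting on this TODO could look like, the sketch below wraps a small stand-in nn.Module with torch.compile (PyTorch >= 2.0). The model here is hypothetical and does not reproduce biogtr's actual GTR runner; the eager backend is chosen only so the example runs without an inductor toolchain.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the model returned by train_cfg.get_gtr_runner();
# biogtr's real model is a tracking transformer, not this toy MLP.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

# torch.compile returns an optimized callable wrapping the module.
# backend="eager" skips codegen so this sketch runs anywhere.
compiled = torch.compile(model, backend="eager")

x = torch.randn(4, 8)
out = compiled(x)
print(tuple(out.shape))  # (4, 2)
```

In practice the call would go right after `model = train_cfg.get_gtr_runner()` in train.py, and whether compilation helps would need benchmarking on the actual training loop.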
open an issue
@aaprasad, I've successfully opened a GitHub issue to track the task of investigating and potentially implementing torch.compile() in train.py. You can view and manage the issue here. If there's anything else you need, feel free to let me know!
Analysis chain
Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Create a GitHub issue regarding the implementation of `torch.compile()` in `train.py`
gh issue create --title "Investigate and potentially implement \`torch.compile()\` in \`train.py\`" --body "There is a TODO comment in the \`train.py\` script suggesting the potential use of \`torch.compile()\`. This issue is to investigate whether \`torch.compile()\` can be beneficial for the project and, if so, to plan its implementation.
Related PR: https://github.com/talmolab/biogtr/pull/24
Comment URL: https://github.com/talmolab/biogtr/pull/24#discussion_r1575428228" --repo talmolab/biogtr
Length of output: 538
Adds functionality to randomly sample chunks from the dataset to be used for training, enabling sample-efficiency experiments.
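To make the idea concrete, here is a minimal sketch of random chunk sampling: split a video's frame indices into contiguous chunks and keep only a random subset for training. The helper name `sample_chunks` and its parameters are hypothetical illustrations; biogtr's actual implementation may differ.

```python
import random

def sample_chunks(n_frames: int, chunk_size: int, n_chunks: int, seed: int = 0):
    """Randomly pick `n_chunks` contiguous chunks of frame indices.

    Hypothetical helper illustrating the sampling behind sample-efficiency
    experiments: training sees only a fraction of the available frames.
    """
    rng = random.Random(seed)
    # Candidate chunk start indices, non-overlapping.
    starts = list(range(0, n_frames - chunk_size + 1, chunk_size))
    chosen = rng.sample(starts, k=min(n_chunks, len(starts)))
    return [list(range(s, s + chunk_size)) for s in sorted(chosen)]

chunks = sample_chunks(n_frames=100, chunk_size=10, n_chunks=3)
print(len(chunks))     # 3
print(len(chunks[0]))  # 10
```

Seeding the sampler keeps the sampled subset reproducible across runs, which matters when comparing models trained on the same reduced data budget.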
Summary by CodeRabbit

New Features
- Instance and Frame for enhanced data handling in tracking applications.
- EvalDataset class for merging datasets during evaluation.
- TrackQueue class for managing tracking queues.

Enhancements
Bug Fixes
Documentation
Refactor
Tests
Chores
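One way a dataset-merging class like the EvalDataset mentioned above might work is to pair items from a ground-truth dataset and a prediction dataset index-wise. This is a hypothetical sketch for illustration only; the class name matches the summary, but biogtr's actual EvalDataset may be structured differently.

```python
class EvalDataset:
    """Pair two equal-length datasets item-by-item for evaluation.

    Hypothetical sketch: yields (ground_truth, prediction) tuples so an
    evaluation loop can compare the two side by side.
    """

    def __init__(self, gt_dataset, pred_dataset):
        if len(gt_dataset) != len(pred_dataset):
            raise ValueError("datasets must have the same length")
        self.gt = gt_dataset
        self.pred = pred_dataset

    def __len__(self):
        return len(self.gt)

    def __getitem__(self, i):
        # Return the matched pair for index i.
        return self.gt[i], self.pred[i]

ds = EvalDataset([1, 2, 3], ["a", "b", "c"])
print(len(ds))  # 3
print(ds[1])    # (2, 'b')
```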