Refactor inference script #51
Merged · 42 commits
- 8ddc557 add back-reference to frame as attribute for instane (aaprasad)
- 7887d56 separate get_boxes_times into two functions and use `Instances` as input (aaprasad)
- 9bab7bc use instances as input into model instead of frames (aaprasad)
- 82293d2 create io module, move config, visualize there. abstract `Frame` and … (aaprasad)
- b4049b8 refactor `Frame` and `Instance` initialization to use `attrs` instead… (aaprasad)
- 2b0bc55 add doc strings, fix small bugs (aaprasad)
- ef26012 Implement AssociationMatrix class for handling model output (aaprasad)
- 94a0e61 create io module, move config, visualize there. abstract `Frame` and … (aaprasad)
- c4bc0fb refactor `Frame` and `Instance` initialization to use `attrs` instead… (aaprasad)
- 42f8a8c add doc strings, fix small bugs (aaprasad)
- b5f39b4 Implement AssociationMatrix class for handling model output (aaprasad)
- ccd523a Merge remote-tracking branch 'origin/aadi/refactor-data-structures' i… (aaprasad)
- 0f535af fix overwrites from merge (aaprasad)
- 56e038a store model outputs in association matrix (aaprasad)
- 8a71d6d add track object for storing tracklets (aaprasad)
- 766820b add reduction function to association matrix (aaprasad)
- c0ceac1 add doc_strings (aaprasad)
- 92095e0 fix tests, docstrings (aaprasad)
- a6a6ace add spatial/temporal embeddings as attribute to `Instance` (aaprasad)
- 56e0555 fix typo (aaprasad)
- 80400fc add `from_slp` converters (aaprasad)
- 7ff22e7 fix docstrings (aaprasad)
- 89007a9 store embeddings in Instance object instead of returning (aaprasad)
- cbf915a only keep visualize in io (aaprasad)
- b3c5661 remove mutable types from default arguments. Don't use kwargs unless … (aaprasad)
- adb3715 handle edge case where ckpt_path is not in config (aaprasad)
- 557d4e9 expose appropriate modules in respective `__init__.py` (aaprasad)
- 1013847 separate `files` into `vid_files`, `label_files` for finer grained co… (aaprasad)
- e030da7 fix edge case for get trainer when trainer params don't exist (aaprasad)
- f86317c fix `to_slp` bugs stemming from type change (aaprasad)
- ed85a40 use tmp dir for tests (aaprasad)
- cf88413 refactor inference script (aaprasad)
- 9f20e48 add logic to handle directory paths instead of only file paths (aaprasad)
- 7e9eb65 add `from_yaml` classmethod for direct config loading (aaprasad)
- d00a9f4 add documentation for cli calls (aaprasad)
- 5b8ad52 fix small typo + docstrings (aaprasad)
- 9d4d24a fix docstring typo (aaprasad)
- e1f7c66 fix small edge case when initializing new tracks (aaprasad)
- bd724ec Update biogtr/datasets/base_dataset.py (talmo)
- f5b766b lint (aaprasad)
- 06a4500 Merge branch 'main' into aadi/refactor-inference (aaprasad)
- f725848 lint post-merge (aaprasad)
```diff
@@ -11,6 +11,7 @@
 import pandas as pd
 import pytorch_lightning as pl
 import torch
+import sleap_io as sio


 def export_trajectories(frames_pred: list["biogtr.io.Frame"], save_path: str = None):
@@ -50,60 +51,45 @@ def export_trajectories(frames_pred: list["biogtr.io.Frame"], save_path: str = N
     return save_df


-def inference(
-    model: GTRRunner, dataloader: torch.utils.data.DataLoader
+def track(
+    model: GTRRunner, trainer: pl.Trainer, dataloader: torch.utils.data.DataLoader
 ) -> list[pd.DataFrame]:
     """Run Inference.

     Args:
-        model: model loaded from checkpoint used for inference
+        model: GTRRunner model loaded from checkpoint used for inference
+        trainer: lighting Trainer object used for handling inference log.
         dataloader: dataloader containing inference data

     Return:
         List of DataFrames containing prediction results for each video
     """
-    num_videos = len(dataloader.dataset.slp_files)
-    trainer = pl.Trainer(devices=1, limit_predict_batches=3)
+    num_videos = len(dataloader.dataset.vid_files)
     preds = trainer.predict(model, dataloader)

-    vid_trajectories = [[] for i in range(num_videos)]
+    vid_trajectories = {i: [] for i in range(num_videos)}

+    tracks = {}
     for batch in preds:
         for frame in batch:
-            vid_trajectories[frame.video_id].append(frame)
+            lf, tracks = frame.to_slp(tracks)
+            if frame.frame_id.item() == 0:
+                print(f"Video: {lf.video}")
+            vid_trajectories[frame.video_id.item()].append(lf)

-    saved = []
-
-    for video in vid_trajectories:
+    for vid_id, video in vid_trajectories.items():
         if len(video) > 0:
-            save_dict = {}
-            video_ids = []
-            frame_ids = []
-            X, Y = [], []
-            pred_track_ids = []
-            for frame in video:
-                for i, instance in frame.instances:
-                    video_ids.append(frame.video_id.item())
-                    frame_ids.append(frame.frame_id.item())
-                    bbox = instance.bbox
-                    y = (bbox[2] + bbox[0]) / 2
-                    x = (bbox[3] + bbox[1]) / 2
-                    X.append(x.item())
-                    Y.append(y.item())
-                    pred_track_ids.append(instance.pred_track_id.item())
-            save_dict["Video"] = video_ids
-            save_dict["Frame"] = frame_ids
-            save_dict["X"] = X
-            save_dict["Y"] = Y
-            save_dict["Pred_track_id"] = pred_track_ids
-            save_df = pd.DataFrame(save_dict)
-            saved.append(save_df)
-
-    return saved
+            try:
+                vid_trajectories[vid_id] = sio.Labels(video)
+            except AttributeError as e:
+                print(video[0].video)
+                raise (e)
+
+    return vid_trajectories


 @hydra.main(config_path="configs", config_name=None, version_base=None)
-def main(cfg: DictConfig):
+def run(cfg: DictConfig) -> dict[int, sio.Labels]:
     """Run inference based on config file.

     Args:
@@ -116,37 +102,56 @@ def main(cfg: DictConfig):
             index = int(os.environ["POD_INDEX"])
         # For testing without deploying a job on runai
         except KeyError:
-            print("Pod Index Not found! Setting index to 0")
-            index = 0
+            index = input("Pod Index Not found! Please choose a pod index: ")

         print(f"Pod Index: {index}")

         checkpoints = pd.read_csv(cfg.checkpoints)
         checkpoint = checkpoints.iloc[index]
     else:
-        checkpoint = pred_cfg.get_ckpt_path()
+        checkpoint = pred_cfg.cfg.ckpt_path

     model = GTRRunner.load_from_checkpoint(checkpoint)
     tracker_cfg = pred_cfg.get_tracker_cfg()
     print("Updating tracker hparams")
     model.tracker_cfg = tracker_cfg
     print(f"Using the following params for tracker:")
     pprint(model.tracker_cfg)

     dataset = pred_cfg.get_dataset(mode="test")
     dataloader = pred_cfg.get_dataloader(dataset, mode="test")
-    preds = inference(model, dataloader)
-    for i, pred in enumerate(preds):
-        print(pred)
-    outdir = pred_cfg.cfg.outdir if "outdir" in pred_cfg.cfg else "./results"
-    os.makedirs(outdir, exist_ok=True)
+
+    trainer = pred_cfg.get_trainer()
+
+    preds = track(model, trainer, dataloader)
+
+    outdir = pred_cfg.cfg.outdir if "outdir" in pred_cfg.cfg else "./results"
+    os.makedirs(outdir, exist_ok=True)
+
+    run_num = 0
+    for i, pred in preds.items():
         outpath = os.path.join(
             outdir,
-            f"{Path(pred_cfg.cfg.dataset.test_dataset.slp_files[i]).stem}_tracking_results",
+            f"{Path(dataloader.dataset.label_files[i]).stem}.biogtr_inference.v{run_num}.slp",
         )
-        print(f"Saving to {outpath}")
-        # TODO: Figure out how to overwrite sleap labels instance labels w pred instance labels then save as a new slp file
-        pred.to_csv(outpath, index=False)
+        if os.path.exists(outpath):
+            run_num += 1
+            outpath = outpath.replace(f".v{run_num-1}", f".v{run_num}")
+        print(f"Saving {preds} to {outpath}")
+        pred.save(outpath)
+
+    return preds


 if __name__ == "__main__":
-    main()
+    # example calls:
+
+    # train with base config:
+    # python train.py --config-dir=./configs --config-name=inference
+
+    # override with params config:
+    # python train.py --config-dir=./configs --config-name=inference +params_config=configs/params.yaml
+
+    # override with params config, and specific params:
+    # python train.py --config-dir=./configs --config-name=inference +params_config=configs/params.yaml dataset.train_dataset.padding=10
+    run()
```

Review comment on the `print(f"Using the following params for tracker:")` line: remove the unnecessary f-string (flagged by Ruff).

Suggested change:

```diff
-    print(f"Using the following params for tracker:")
+    print("Using the following params for tracker:")
```
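The save loop in the diff above avoids clobbering earlier results by writing to a `.vN`-suffixed path and bumping the version number when the target already exists. A minimal, self-contained sketch of that naming scheme — the `next_free_path` helper name is mine, not from the PR, and unlike the diff (which bumps once per file) it loops until a free slot is found:

```python
import os


def next_free_path(outdir: str, stem: str) -> str:
    """Return the first `{stem}.biogtr_inference.v{n}.slp` path in `outdir`
    that does not already exist, starting from v0."""
    run_num = 0
    outpath = os.path.join(outdir, f"{stem}.biogtr_inference.v{run_num}.slp")
    # Bump the version suffix until we land on a path no previous run used.
    while os.path.exists(outpath):
        run_num += 1
        outpath = outpath.replace(f".v{run_num - 1}", f".v{run_num}")
    return outpath
```

Each call re-scans from v0, so results from successive runs line up as `clip.biogtr_inference.v0.slp`, `clip.biogtr_inference.v1.slp`, and so on without overwriting.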
Review comment: fix undefined name error (flagged by Ruff).
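The refactored `track` function groups predicted frames into a dict keyed by video id rather than indexing into a list, which tolerates sparse or out-of-order ids across batches. A minimal sketch of that grouping pattern, using plain `(video_id, frame_id)` tuples as stand-ins for the real `biogtr.io.Frame` objects coming out of `trainer.predict()`:

```python
from collections import defaultdict


def group_by_video(batches):
    """Bucket frame ids per video id, as track() does with vid_trajectories."""
    vid_trajectories = defaultdict(list)
    for batch in batches:
        for video_id, frame_id in batch:
            vid_trajectories[video_id].append(frame_id)
    return dict(vid_trajectories)


# Frames from two videos interleaved across prediction batches.
batches = [
    [(0, 0), (1, 0)],
    [(0, 1), (2, 0)],
]
```

Using a `defaultdict` here is a small liberty; the PR pre-sizes the dict with `{i: [] for i in range(num_videos)}` instead, which also keeps empty entries for videos that produced no frames.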