Set up logging #63

Merged
merged 3 commits into from
Jul 22, 2024
16 changes: 16 additions & 0 deletions dreem/__init__.py
@@ -1,5 +1,6 @@
"""Top-level package for dreem."""

import logging.config
Contributor
Remove unused import logging.config.

- import logging.config

Ruff: 3-3: logging.config imported but unused; consider removing, adding to __all__, or using a redundant alias (F401)

from dreem.version import __version__
Contributor
Remove unused import dreem.version.__version__.

- from dreem.version import __version__

Ruff: 4-4: dreem.version.__version__ imported but unused; consider removing, adding to __all__, or using a redundant alias (F401)


from dreem.models.global_tracking_transformer import GlobalTrackingTransformer
@@ -16,3 +17,18 @@
# from .training import run

from dreem.inference.tracker import Tracker
Contributor
Remove unused import dreem.inference.tracker.Tracker.

- from dreem.inference.tracker import Tracker

Ruff: 19-19: dreem.inference.tracker.Tracker imported but unused; consider removing, adding to __all__, or using a redundant alias (F401)



def setup_logging():
"""Setup logging based on `logging.yaml`."""
import logging
import yaml
import os

package_directory = os.path.dirname(os.path.abspath(__file__))

with open(os.path.join(package_directory, "..", "logging.yaml"), "r") as stream:
Contributor
Remove unnecessary open mode parameters from the file open function.

- with open(os.path.join(package_directory, "..", "logging.yaml"), "r") as stream:
+ with open(os.path.join(package_directory, "..", "logging.yaml")) as stream:

Ruff: 30-30: Unnecessary open mode parameters (UP015). Remove open mode parameters.

logging_cfg = yaml.load(stream, Loader=yaml.FullLoader)

logging.config.dictConfig(logging_cfg)
logger = logging.getLogger("dreem")
Contributor
Remove assignment to unused variable logger.

- logger = logging.getLogger("dreem")

Ruff: 34-34: Local variable logger is assigned to but never used (F841). Remove assignment to unused variable logger.
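
For context, setup_logging loads logging.yaml and passes the parsed result to logging.config.dictConfig, so the file must encode the standard dictConfig schema. The actual logging.yaml is not shown in this diff; the sketch below is only a hypothetical illustration of an equivalent configuration, written as the Python dict that yaml.load would produce (handler names, formats, and levels are assumptions):

import logging.config

# Hypothetical dictConfig-compatible structure that logging.yaml could encode;
# the repository's real file may define different handlers, formats, and levels.
logging_cfg = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "simple": {"format": "%(asctime)s %(name)s %(levelname)s: %(message)s"},
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "simple",
            "level": "INFO",
        },
    },
    "loggers": {
        "dreem": {"handlers": ["console"], "level": "INFO", "propagate": False},
    },
}

logging.config.dictConfig(logging_cfg)

Because the module-level loggers added in this PR ("dreem.datasets", "dreem.inference") are children of the "dreem" logger, they inherit whatever handlers and levels such a configuration defines.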

2 changes: 0 additions & 2 deletions dreem/datasets/data_utils.py
@@ -90,7 +90,6 @@ def centroid_bbox(points: ArrayLike, anchors: list, crop_size: int) -> torch.Ten
Returns:
Bounding box in [y1, x1, y2, x2] format.
"""
print(anchors)
for anchor in anchors:
cx, cy = points[anchor][0], points[anchor][1]
if not np.isnan(cx):
@@ -120,7 +119,6 @@ def pose_bbox(points: np.ndarray, bbox_size: tuple[int] | int) -> torch.Tensor:
"""
if isinstance(bbox_size, int):
bbox_size = (bbox_size, bbox_size)
# print(points)

c = np.nanmean(points, axis=0)
bbox = torch.Tensor(
1 change: 0 additions & 1 deletion dreem/datasets/microscopy_dataset.py
@@ -133,7 +133,6 @@ def get_instances(self, label_idx: list[int], frame_idx: list[int]) -> list[Fram

frames = []
for frame_id in frame_idx:
# print(i)
instances, gt_track_ids, centroids = [], [], []

img = (
8 changes: 6 additions & 2 deletions dreem/datasets/sleap_dataset.py
@@ -6,11 +6,13 @@
import numpy as np
import sleap_io as sio
import random
import warnings
import logging
from dreem.io import Instance, Frame
from dreem.datasets import data_utils, BaseDataset
from torchvision.transforms import functional as tvf

logger = logging.getLogger("dreem.datasets")


class SleapDataset(BaseDataset):
"""Dataset for loading animal behavior data from sleap."""
@@ -165,7 +167,9 @@ def get_instances(self, label_idx: list[int], frame_idx: list[int]) -> list[Fram
img = np.expand_dims(img, 0)
h, w, c = img.shape
except IndexError as e:
print(f"Could not read frame {frame_ind} from {video_name} due to {e}")
logger.warning(
f"Could not read frame {frame_ind} from {video_name} due to {e}"
)
continue

if len(img.shape) == 2:
16 changes: 7 additions & 9 deletions dreem/inference/metrics.py
@@ -3,8 +3,11 @@
import numpy as np
import motmetrics as mm
import torch
from typing import Iterable
import pandas as pd
import logging
from typing import Iterable

logger = logging.getLogger("dreem.inference")

# from dreem.inference.post_processing import _pairwise_iou
# from dreem.inference.boxes import Boxes
@@ -39,8 +42,8 @@ def get_matches(frames: list["dreem.io.Frame"]) -> tuple[dict, list, int]:
matches[match] = np.full(len(frames), 0)

matches[match][idx] = 1
# else:
# warnings.warn("No instances detected!")
else:
logger.debug("No instances detected!")
return matches, indices, video_id


@@ -191,12 +194,7 @@ def to_track_eval(frames: list["dreem.io.Frame"]) -> dict:
data["num_gt_ids"] = len(unique_gt_ids)
data["num_tracker_dets"] = num_tracker_dets
data["num_gt_dets"] = num_gt_dets
try:
data["gt_ids"] = gt_ids
# print(data['gt_ids'])
except Exception as e:
print(gt_ids)
raise (e)
data["gt_ids"] = gt_ids
Contributor
Reconsider removing error handling.

The removal of the try-except block for handling errors during the assignment of gt_ids to the data dictionary can lead to unhandled exceptions and potential crashes. Ensure that the inputs are always valid or re-add the error handling to manage possible exceptions gracefully.

-    data["gt_ids"] = gt_ids
+    try:
+        data["gt_ids"] = gt_ids
+    except Exception as e:
+        logger.error(f"Error assigning gt_ids: {e}")
+        raise

data["tracker_ids"] = track_ids
data["similarity_scores"] = similarity_scores
data["num_timesteps"] = len(frames)
23 changes: 11 additions & 12 deletions dreem/inference/track.py
@@ -4,14 +4,16 @@
from dreem.models import GTRRunner
from omegaconf import DictConfig
from pathlib import Path
from pprint import pprint

import hydra
import os
import pandas as pd
import pytorch_lightning as pl
import torch
import sleap_io as sio
import logging

logger = logging.getLogger("dreem.inference")


def export_trajectories(
@@ -76,16 +78,13 @@ def track(
for frame in batch:
lf, tracks = frame.to_slp(tracks)
if frame.frame_id.item() == 0:
print(f"Video: {lf.video}")
logger.info(f"Video: {lf.video}")
vid_trajectories[frame.video_id.item()].append(lf)

for vid_id, video in vid_trajectories.items():
if len(video) > 0:
try:
vid_trajectories[vid_id] = sio.Labels(video)
except AttributeError as e:
print(video[0].video)
raise (e)

vid_trajectories[vid_id] = sio.Labels(video)

return vid_trajectories

@@ -106,7 +105,7 @@ def run(cfg: DictConfig) -> dict[int, sio.Labels]:
except KeyError:
index = input("Pod Index Not found! Please choose a pod index: ")

print(f"Pod Index: {index}")
logger.info(f"Pod Index: {index}")

checkpoints = pd.read_csv(cfg.checkpoints)
checkpoint = checkpoints.iloc[index]
@@ -115,10 +114,10 @@

model = GTRRunner.load_from_checkpoint(checkpoint)
tracker_cfg = pred_cfg.get_tracker_cfg()
print("Updating tracker hparams")
logger.info("Updating tracker hparams")
model.tracker_cfg = tracker_cfg
print(f"Using the following params for tracker:")
pprint(model.tracker_cfg)
logger.info(f"Using the following params for tracker:")
logger.info(model.tracker_cfg)
Comment on lines +119 to +120
Contributor
Remove extraneous f prefix from logging statement.

- logger.info(f"Using the following params for tracker:")
+ logger.info("Using the following params for tracker:")

Ruff: 119-119: f-string without any placeholders (F541). Remove extraneous f prefix.


dataset = pred_cfg.get_dataset(mode="test")
dataloader = pred_cfg.get_dataloader(dataset, mode="test")
@@ -139,7 +138,7 @@ def run(cfg: DictConfig) -> dict[int, sio.Labels]:
if os.path.exists(outpath):
run_num += 1
outpath = outpath.replace(f".v{run_num-1}", f".v{run_num}")
print(f"Saving {preds} to {outpath}")
logger.info(f"Saving {preds} to {outpath}")
pred.save(outpath)

return preds
30 changes: 15 additions & 15 deletions dreem/inference/track_queue.py
@@ -1,11 +1,14 @@
"""Module handling sliding window tracking."""

import warnings
from dreem.io import Frame
from collections import deque
import numpy as np
from torch import device

import logging
import numpy as np

logger = logging.getLogger("dreem.inference")


class TrackQueue:
"""Class handling track local queue system for sliding window.
@@ -175,7 +178,7 @@ def end_tracks(self, track_id: int | None = None) -> bool:
self._queues.pop(track_id)
self._curr_gap.pop(track_id)
except KeyError:
print(f"Track ID {track_id} not found in queue!")
logger.exception(f"Track ID {track_id} not found in queue!")
return False
return True

@@ -211,10 +214,9 @@ def add_frame(self, frame: Frame) -> None:
) # dumb work around to retain `img_shape`
self.curr_track = pred_track_id

if self.verbose:
warnings.warn(
f"New track = {pred_track_id} on frame {frame_id}! Current number of tracks = {self.n_tracks}"
)
logger.debug(
f"New track = {pred_track_id} on frame {frame_id}! Current number of tracks = {self.n_tracks}"
)

else:
self._queues[pred_track_id].append((*frame_meta, instance))
@@ -288,10 +290,9 @@ def increment_gaps(self, pred_track_ids: list[int]) -> dict[int, bool]:
for track in self._curr_gap:
if track not in pred_track_ids:
self._curr_gap[track] += 1
if self.verbose:
warnings.warn(
f"Track {track} has not been seen for {self._curr_gap[track]} frames."
)
logger.debug(
f"Track {track} has not been seen for {self._curr_gap[track]} frames."
)
else:
self._curr_gap[track] = 0
if self._curr_gap[track] >= self.max_gap:
@@ -301,10 +302,9 @@

for track, gap_exceeded in exceeded_gap.items():
if gap_exceeded:
if self.verbose:
warnings.warn(
f"Track {track} has not been seen for {self._curr_gap[track]} frames! Terminating Track...Current number of tracks = {self.n_tracks}."
)
logger.debug(
f"Track {track} has not been seen for {self._curr_gap[track]} frames! Terminating Track...Current number of tracks = {self.n_tracks}."
)
self._queues.pop(track)
self._curr_gap.pop(track)
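
The verbose-flag warnings removed above are now emitted at DEBUG level on the "dreem.inference" logger, so they stay silent under a typical INFO-level configuration. A minimal sketch, assuming setup_logging has been called and the configured handlers also pass DEBUG records, of how a user might surface these messages:

import logging
import dreem

# Configure handlers and formatters from logging.yaml via the new helper.
dreem.setup_logging()

# Raise the inference logger to DEBUG so new-track and gap messages appear.
# The attached handlers must also allow DEBUG records for them to be printed.
logging.getLogger("dreem.inference").setLevel(logging.DEBUG)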

Expand Down
Loading
Loading