While performing detection inference on a video, a memory leak is observed even after resource cleanup: memory usage grows steadily during inference, eventually slowing the process down, when it should stay stable.
## Installations

```shell
!python -m pip install pyyaml
!pip install 'git+https://github.com/facebookresearch/detectron2.git'
!pip install opencv-python
!pip install torch
```
```python
import cv2, os, numpy as np, tqdm, time, math, psutil, torch
from collections import defaultdict
from google.colab.patches import cv2_imshow
from tqdm import tqdm
from detectron2.utils.logger import setup_logger
setup_logger()

from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
from detectron2.utils.video_visualizer import VideoVisualizer
from detectron2.utils.visualizer import ColorMode, Visualizer
from detectron2.data import MetadataCatalog

# Set up configuration and initialize the Detectron2 object-detection predictor
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
predictor = DefaultPredictor(cfg)

# Open the input video and initialize the video writer
cap = cv2.VideoCapture("/content/Input_Video.mp4")
fps = int(cap.get(cv2.CAP_PROP_FPS))
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
video_writer = cv2.VideoWriter("/content/Output_Video.mp4", cv2.VideoWriter_fourcc(*'mp4v'), fps, (width, height))

start_time = time.time()
frame_idx = 0

# Process the video frame by frame
while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break
    outputs = predictor(frame)
    instances = outputs["instances"].to("cpu")
    boxes = instances.pred_boxes if instances.has("pred_boxes") else None
    if boxes is not None:
        for box in boxes:
            x1, y1, x2, y2 = map(int, box.tolist())
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    video_writer.write(frame)
    frame_idx += 1

# Resource cleanup
cap.release()
video_writer.release()
cv2.destroyAllWindows()
```
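One mitigation worth trying (not confirmed as the root cause here) is to drop the reference to each frame's `outputs` as soon as the boxes have been drawn, and to collect garbage periodically; in a CUDA setup `torch.cuda.empty_cache()` can be added as well. A minimal stdlib-only sketch of the pattern, with a hypothetical `infer` standing in for `predictor`:

```python
import gc

def process_frames(frames, infer):
    """Run `infer` on each frame, keeping only lightweight results.

    Dropping the reference to the full per-frame output promptly lets
    the garbage collector reclaim it instead of letting objects pile up;
    gc.collect() additionally breaks any lingering reference cycles.
    """
    detection_counts = []
    for frame in frames:
        outputs = infer(frame)              # stands in for predictor(frame)
        detection_counts.append(len(outputs))
        del outputs                         # release per-frame results promptly
    gc.collect()
    # On GPU, torch.cuda.empty_cache() here would also return cached CUDA
    # blocks to the driver (it does not fix Python-level leaks).
    return detection_counts
```

This only changes object lifetimes, not the model itself, so if RSS still climbs with this pattern the growth is coming from elsewhere.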
## Observed Behaviour

Tracing the memory usage before & after the video inference:

```python
memory_before = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)
```
You've chosen to report an unexpected problem or bug. Unless you already know the root cause of it, please include details about it by filling the issue template.
The following information is missing: "Instructions To Reproduce the Issue and Full Logs";
## Issue

While performing detection inference on a video, a memory leak is observed even after resource cleanup: memory usage grows steadily during inference, eventually slowing the process down, when it should stay stable.
## Observed Behaviour

Tracing the memory usage before & after the video inference:

```python
memory_before = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)
# "Video Inference"
```
Output:

```
Memory difference: 621.10 MB
```
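The RSS bracketing above uses psutil; the same before/after pattern can be reproduced with the stdlib `tracemalloc` module, which tracks Python-level allocations (it will not see CUDA or C++ buffers, so its numbers differ from RSS, but a growing "current" figure confirms a Python-side leak). A sketch, with a hypothetical `workload` standing in for the inference loop:

```python
import tracemalloc

def bracket_memory(workload):
    """Run `workload` and return (leaked_mb, peak_mb) of Python allocations."""
    tracemalloc.start()
    workload()
    current, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return current / 2**20, peak / 2**20

def transient_alloc():
    buf = bytearray(8 * 2**20)  # ~8 MB, freed when the function returns
    return len(buf)

# A well-behaved workload shows a high peak but near-zero leaked memory.
leaked, peak = bracket_memory(transient_alloc)
```

`tracemalloc.take_snapshot()` with `snapshot.statistics("lineno")` can then point at the allocation sites responsible for any growth.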
## Expected behavior

For purely detection-based inference, memory usage should remain roughly constant across frames of similar resolution.
## Environment

- Detectron2 version: 0.6
- Python version: 3.10.12
- OS: Ubuntu 22.04
- GPU: NVIDIA driver 535.104.05, CUDA 12.2