
Memory Leak During Detection Inference on Videos #5408

Open
Pratyasa123 opened this issue Dec 18, 2024 · 1 comment

Pratyasa123 commented Dec 18, 2024

## Issue

While performing detection inference on a video, a memory leak is observed even after resource cleanup. Memory usage keeps increasing over the course of inference and eventually slows the process down, whereas it should stay roughly stable.

## Installations
!python -m pip install pyyaml
!pip install 'git+https://github.com/facebookresearch/detectron2.git'
!pip install opencv-python
!pip install torch
import cv2, os, time, psutil, torch
from detectron2.utils.logger import setup_logger
setup_logger()
from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg

# Set up configuration and initialize the Detectron2 object detection predictor
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
predictor = DefaultPredictor(cfg)

cap = cv2.VideoCapture("/content/Input_Video.mp4") # Open video and initialize video writer
fps = int(cap.get(cv2.CAP_PROP_FPS))
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
video_writer = cv2.VideoWriter("/content/Output_Video.mp4", cv2.VideoWriter_fourcc(*'mp4v'), fps, (width, height))

start_time = time.time()
frame_idx = 0

# Process Video
while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break

    # Run detection on the current frame and draw the predicted boxes
    outputs = predictor(frame)
    instances = outputs["instances"].to("cpu")
    boxes = instances.pred_boxes if instances.has("pred_boxes") else None
    if boxes is not None:
        for box in boxes:
            x1, y1, x2, y2 = map(int, box.tolist())
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    video_writer.write(frame)
    frame_idx += 1

# Resource Cleanup 
cap.release() 
video_writer.release()
cv2.destroyAllWindows()
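
For reference, a variant of the processing loop with explicit per-frame cleanup is sketched below. This is only a possible mitigation to try, not a verified fix: the gc import, the 100-frame interval, and the explicit no_grad wrapper (DefaultPredictor already runs inference under torch.no_grad) are additions for illustration.

import gc

while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break

    with torch.no_grad():  # redundant with DefaultPredictor, kept for clarity
        outputs = predictor(frame)
    instances = outputs["instances"].to("cpu")
    boxes = instances.pred_boxes if instances.has("pred_boxes") else None
    if boxes is not None:
        for box in boxes:
            x1, y1, x2, y2 = map(int, box.tolist())
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    video_writer.write(frame)
    frame_idx += 1

    # Drop per-frame references and periodically force cleanup
    del outputs, instances, boxes
    if frame_idx % 100 == 0:  # interval is arbitrary
        gc.collect()
        if torch.cuda.is_available():
            torch.cuda.empty_cache()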

## Observed Behaviour

Tracing memory usage before and after the video inference:

memory_before = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)

# ... run the video inference loop shown above ...

memory_after = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)
print(f"Memory difference: {memory_after - memory_before:.2f} MB")

Output:
Memory difference: 621.10 MB
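
A finer-grained check than the single before/after measurement would be to log RSS every few frames inside the loop, which shows whether the growth is steady per frame or happens in bursts. A small sketch (the 50-frame interval and print format are arbitrary):

process = psutil.Process(os.getpid())

def log_memory(frame_idx, interval=50):
    # Print the resident set size every `interval` frames
    if frame_idx % interval == 0:
        rss_mb = process.memory_info().rss / (1024 * 1024)
        print(f"frame {frame_idx}: RSS = {rss_mb:.2f} MB")

# Inside the processing loop, after video_writer.write(frame):
#     log_memory(frame_idx)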

## Expected Behavior

For purely detection-based inference on frames of similar resolution, memory usage should stay roughly constant over the length of the video rather than growing with the number of frames processed.

## Environment

Detectron2 version: 0.6
Python version: 3.10.12
OS: Ubuntu 22.04
GPU: NVIDIA driver 535.104.05, CUDA 12.2

github-actions bot added the needs-more-info label on Dec 18, 2024

github-actions bot commented Dec 18, 2024:

You've chosen to report an unexpected problem or bug. Unless you already know the root cause of it, please include details about it by filling the issue template.
The following information is missing: "Instructions To Reproduce the Issue and Full Logs";

github-actions bot removed the needs-more-info label on Dec 18, 2024