
how can I count the objects using yolov5? #2696

Closed
nimashee opened this issue Apr 3, 2021 · 18 comments
Labels
question Further information is requested

Comments

@nimashee

nimashee commented Apr 3, 2021

Can anyone help me with how to count objects using YOLOv5?

@nimashee nimashee added the question Further information is requested label Apr 3, 2021
@github-actions
Contributor

github-actions bot commented Apr 3, 2021

👋 Hello @nimashee, thank you for your interest in 🚀 YOLOv5! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://www.ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.

Requirements

Python 3.8 or later with all requirements.txt dependencies installed, including torch>=1.7. To install run:

$ pip install -r requirements.txt

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), testing (test.py), inference (detect.py) and export (export.py) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.

@adrianholovaty
Contributor

When you run inference (e.g., via detect.py), you'll get a list of all the objects detected. From there, you can write some code that examines all of the detected objects and counts how many there are.

@nimashee
Author

nimashee commented Apr 5, 2021

When you run inference (e.g., via detect.py), you'll get a list of all the objects detected. From there, you can write some code that examines all of the detected objects and counts how many there are.

I have written this code:

class_name_count = 'palm'
            l = s[1:s.find(class_name_count)].split()[-1]
            if class_name_count in s:
                print(l,class_name_count)
                cv2.rectangle(im0, (0,0), (1100, 250), -1)
                cv2.putText(im0,1+class_name_count,(0,200), cv2.FONT_HERSHEY_SIMPLEX, 8,(255,255,255),24,cv2.LINE_AA)

but the output is:

TypeError: unsupported operand type(s) for +: 'int' and 'str'

@SpongeBab
Contributor

SpongeBab commented Apr 5, 2021

(quoting the code and TypeError from the previous comment)

1 is a number and class_name_count is a string.

They can't be added together directly; the label needs to be built as a string first.
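
A minimal sketch of a possible fix, assuming s and im0 come from the detect.py loop as in the snippet above: build the overlay label entirely from strings, and give cv2.rectangle an explicit color so that -1 is interpreted as the thickness (filled box):

class_name_count = 'palm'
if class_name_count in s:
    count = s[1:s.find(class_name_count)].split()[-1]       # e.g. '3' parsed from '... 3 palms, ...'
    label = count + ' ' + class_name_count                   # string + string, no TypeError
    cv2.rectangle(im0, (0, 0), (1100, 250), (0, 0, 0), -1)   # color tuple, thickness -1 = filled
    cv2.putText(im0, label, (0, 200), cv2.FONT_HERSHEY_SIMPLEX,
                8, (255, 255, 255), 24, cv2.LINE_AA)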

@nimashee
Author

nimashee commented Apr 7, 2021

@xiaoxiaopeng1998 I think so. But I followed this tutorial on YouTube and got that error.

@glenn-jocher
Member

@nimashee nice video! You can count all objects very simply: just look at the shape of the detections for each image. There is one object per row, i.e.:

import torch

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Images
dir = 'https://github.com/ultralytics/yolov5/raw/master/data/images/'
imgs = [dir + f for f in ('zidane.jpg', 'bus.jpg')]  # batch of images

# Inference
results = model(imgs)
results.print()  # or .show(), .save()
print(results.pandas().xyxy[0])

image 1/2: 720x1280 2 persons, 1 tie
image 2/2: 1080x810 4 persons, 1 bus
Speed: 110.3ms pre-process, 7.0ms inference, 1.1ms NMS per image at shape (2, 3, 640, 640)
    xcenter   ycenter     width    height  confidence  class    name
0  0.746094  0.522222  0.318750  0.923611    0.817871      0  person
1  0.445898  0.637326  0.745703  0.697569    0.577637      0  person
2  0.367578  0.795833  0.071875  0.400000    0.569336     27     tie
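
As a small illustrative addition (not part of the original reply): since there is one detection per row, the per-image count is simply the number of rows, e.g.:

n = len(results.xyxy[0])  # number of objects detected in the first image
print(f'{n} objects detected')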

@nimashee
Author

nimashee commented Apr 8, 2021

@glenn-jocher thank you so much for your answer. I trained on my custom dataset before, then I tried your code. It didn't give any errors, but it seems not to detect anything, so no objects are counted. Should I train my data again, or is there another solution?
Screenshot from 2021-04-08 16-35-50

@glenn-jocher
Member

glenn-jocher commented Apr 8, 2021

@nimashee yes everything works correctly in your screenshot, the model simply does not detect anything. You can lower your confidence threshold to increase recall if you'd like, or you may want to retrain your model for better results. See Tips for Best Training Results for improving training performance.

model.conf = 0.01  # default 0.25
results = model(imgs)

YOLOv5 Tutorials

@nimashee
Author

nimashee commented Apr 8, 2021

(quoting the answer above)

Thank you so much. It works for me. Have a great day! ^^

@nimashee nimashee closed this as completed Apr 8, 2021
@MilesJoseph

(quoting the counting example above)

hey @glenn-jocher, a few questions I have on this: Does the results file include timestamps? If not, do you have any sample code to do this? Do you have any samples for exporting elsewhere? Do you folks have an official GitHub repo/article for deploying on the Xavier AGX?

@glenn-jocher
Member

glenn-jocher commented Sep 25, 2022

@MilesJoseph with the PyTorch Hub inference example you've shown above, you have unlimited customization options to export, save, or log results any way you want.

We don't have an official Xavier AGX tutorial, but once you've exported to TensorRT (use --half for FP16) you can pass the TensorRT model directly and use it the same way as in your example above.
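
On the timestamp question, a minimal sketch of one way to log timestamped detections to a CSV, assuming results comes from a PyTorch Hub inference call like the examples in this thread (the file name detections.csv is only illustrative):

from datetime import datetime

df = results.pandas().xyxy[0]                    # detections for the first image as a DataFrame
df['timestamp'] = datetime.now().isoformat()     # stamp every detection row
df.to_csv('detections.csv', mode='a', header=False, index=False)  # append to a running log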

YOLOv5 🚀 PyTorch Hub models allow for simple model loading and inference in a pure python environment without using detect.py.

Simple Inference Example

This example loads a pretrained YOLOv5s model from PyTorch Hub as model and passes an image for inference. 'yolov5s' is the YOLOv5 'small' model. For details on all available models please see the README. Custom models can also be loaded, including custom trained PyTorch models and their exported variants, i.e. ONNX, TensorRT, TensorFlow, OpenVINO YOLOv5 models.

import torch

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')  # yolov5n - yolov5x6 official model
#                                            'custom', 'path/to/best.pt')  # custom model

# Images
im = 'https://ultralytics.com/images/zidane.jpg'  # or file, Path, URL, PIL, OpenCV, numpy, list

# Inference
results = model(im)

# Results
results.print()  # or .show(), .save(), .crop(), .pandas(), etc.
results.xyxy[0]  # im predictions (tensor)

results.pandas().xyxy[0]  # im predictions (pandas)
#      xmin    ymin    xmax   ymax  confidence  class    name
# 0  749.50   43.50  1148.0  704.5    0.874023      0  person
# 2  114.75  195.75  1095.0  708.0    0.624512      0  person
# 3  986.00  304.00  1028.0  420.0    0.286865     27     tie

results.pandas().xyxy[0].value_counts('name')  # class counts (pandas)
# person    2
# tie       1

See YOLOv5 PyTorch Hub Tutorial for details.

Good luck 🍀 and let us know if you have any other questions!

@alxgrod

alxgrod commented Dec 1, 2022

(quoting the counting example above)

Is it possible to count the total objects detected in a video? For example, counting the number of cars in a street currently counts the ones present in one frame and then changes for the number of cars in the next frame. Instead, I am trying to add up all objects detected in the video without duplicating the cars that are present in more than one frame. Your help would be greatly appreciated.

@Sarouch

Sarouch commented Jan 3, 2023

(quoting the counting example and @alxgrod's question above)

Hi @alxgrod, did you find a solution for your question? I want to do the same thing: sum up all objects of the same class at the end of the video.

@phoenixstar7

Hi @alxgrod, I am able to get the conf_score per frame per object and then I am using it for my analysis. What I need is to count the number of objects detected per frame as well so I can dump all of them in a log csv file. Any pointers/help is greatly appreciated. TIA!

@Blazensei

Blazensei commented Jan 13, 2023

Hi sir @glenn-jocher, how can this code be used in detect.py for YOLOv5 instance segmentation to detect and count the objects present in a video? I have only 1 class in my trained data and I want to detect and count every object in it. Thank you, sir. I am referring to https://github.com/ultralytics/yolov5/blob/master/detect.py

@Sarouch

Sarouch commented Jan 13, 2023

(quoting @Blazensei's question above)

Hi @Blazensei, you can see my question/answer here; it may help you.

@Sarouch

Sarouch commented Jan 13, 2023

(quoting @phoenixstar7's question above)

Hi @phoenixstar7, you can see my question and answer here on Stack Overflow; it may help you. You can ask another question if it is not clear.

@Bhavik-Ardeshna

Bhavik-Ardeshna commented Apr 12, 2023

@Sarouch @alxgrod (quoting the counting example and question above)

Hi @alxgrod, did you find a solution for your question? I want to do the same thing: sum up all objects of the same class at the end of the video (unique objects only).
