
Official YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors #8595

AlexeyAB opened this issue Jul 7, 2022 · 47 comments


@AlexeyAB
Owner

AlexeyAB commented Jul 7, 2022

Official YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors


YOLOv7:


YOLOv7x:


YOLOv7-tiny-leaky-relu:

Darknet cfg/weights file - currently tested for inference only:

Test FPS:

  • without NMS: darknet.exe detector demo cfg/coco.data cfg/yolov7-tiny.cfg yolov7-tiny.weights test.mp4 -benchmark

  • with NMS: darknet.exe detector demo cfg/coco.data cfg/yolov7-tiny.cfg yolov7-tiny.weights test.mp4 -dont_show


YOLOv7 is more accurate and faster than YOLOv5 by 120% FPS, than YOLOX by 180% FPS, than Dual-Swin-T by 1200% FPS, than ConvNext by 550% FPS, than SWIN-L by 500% FPS, than PPYOLOE-X by 150% FPS.

YOLOv7 surpasses all known object detectors in both speed and accuracy in the range from 5 FPS to 160 FPS and has the highest accuracy 56.8% AP among all known real-time object detectors with 30 FPS or higher on GPU V100, batch=1.

  • YOLOv7-e6 (55.9% AP, 56 FPS V100 b=1) by +500% FPS faster than SWIN-L C-M-RCNN (53.9% AP, 9.2 FPS A100 b=1)
  • YOLOv7-e6 (55.9% AP, 56 FPS V100 b=1) by +550% FPS faster than ConvNeXt-XL C-M-RCNN (55.2% AP, 8.6 FPS A100 b=1)
  • YOLOv7-w6 (54.6% AP, 84 FPS V100 b=1) by +120% FPS faster than YOLOv5-X6-r6.1 (55.0% AP, 38 FPS V100 b=1)
  • YOLOv7-w6 (54.6% AP, 84 FPS V100 b=1) by +1200% FPS faster than Dual-Swin-T C-M-RCNN (53.6% AP, 6.5 FPS V100 b=1)
  • YOLOv7x (52.9% AP, 114 FPS V100 b=1) by +150% FPS faster than PPYOLOE-X (51.9% AP, 45 FPS V100 b=1)
  • YOLOv7 (51.2% AP, 161 FPS V100 b=1) by +180% FPS faster than YOLOX-X (51.1% AP, 58 FPS V100 b=1)


@AlexeyAB AlexeyAB pinned this issue Jul 7, 2022
@AlexeyAB AlexeyAB changed the title YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors Official YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors Jul 7, 2022
@stephanecharette
Collaborator

Several questions:

  1. One of the huge advantages of Darknet/YOLOv[34] over other frameworks is how easy it is to incorporate into C++ applications. Will YOLOv7 be the same, with a v7.cfg file and libdarknet.so as usual, or is the new Python repo where we should expect to find YOLOv7?
  2. Will we have the 3 usual configurations available -- YOLOv7, YOLOv7-tiny-3L, and YOLOv7-tiny?
  3. When can we expect the next release of Darknet?

@akashAD98

akashAD98 commented Jul 7, 2022

@AlexeyAB @WongKinYiu Thank you so much for the amazing work! Very excited for YOLOv7.

@AlexeyAB
Owner Author

AlexeyAB commented Jul 7, 2022

@stephanecharette Hi,

  1. YOLOv7 is mostly for Pytorch: https://github.com/WongKinYiu/yolov7

  2. There is at least YOLOv7-tiny for Darknet, but it was trained using Pytorch and converted to Darknet. I didn't test training on Darknet yet, but it should work:

  3. I don't know.

darknet.exe detector test cfg/coco.data cfg/yolov7-tiny.cfg yolov7-tiny.weights -ext_output dog.jpg



Model inference time (without NMS)

darknet.exe detector demo cfg/coco.data cfg/yolov7-tiny.cfg yolov7-tiny.weights test.mp4 -benchmark



Model inference time (with NMS)

darknet.exe detector demo cfg/coco.data cfg/yolov7-tiny.cfg yolov7-tiny.weights test.mp4 -dont_show


@toplinuxsir

@AlexeyAB Great work!
Waiting for yolov7 and yolov7x for Darknet.

@stephanecharette
Collaborator

stephanecharette commented Jul 8, 2022

@AlexeyAB Does YOLOv7 have new "things" in the .cfg file that require changes to the C/C++ darknet code to run correctly? Or does the Darknet code from today have everything needed to use the new YOLOv7 configuration files?

@akashAD98

akashAD98 commented Jul 8, 2022

@stephanecharette I tested yolov7-tiny.cfg on my custom data and it's working fine.

@cenit
Collaborator

cenit commented Jul 8, 2022

@AlexeyAB Does YOLOv7 have new "things" in the .cfg file that require changes to the C/C++ darknet code to run correctly? Or does the Darknet code from today have everything needed to use the new YOLOv7 configuration files?

It should already work; please open an issue if you find any problems.

@agjunyent

Is there a possibility of getting yolov7-tiny.cfg in yaml format? I want to test the new repo, but I can only use the cfg with Darknet, and there's no support for it yet in the yolov7 repo.

@AlexeyAB
Owner Author

@stephanecharette

Does YOLOv7 have new "things" in the .cfg file that require changes to the C/C++ darknet code to run correctly? Or does the Darknet code from today have everything needed to use the new YOLOv7 configuration files?

Training should work using these yolov7-tiny.conv.87 and yolov7-tiny.cfg files: #8595 (comment)
./darknet detector train cfg/coco.data cfg/yolov7-tiny.cfg yolov7-tiny.conv.87

But accuracy could be lower than when training with the Pytorch code, because not all data augmentation and label-assignment approaches are implemented in Darknet.
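For a custom dataset, the usual Darknet fine-tuning pattern should also apply here (a rough sketch; obj.data, obj.names and yolov7-tiny-custom.cfg are placeholder names, and the copied cfg needs classes= and the preceding filters= adjusted, just as for YOLOv4):

# data/obj.data (hypothetical)
classes = 2
train = data/train.txt
valid = data/valid.txt
names = data/obj.names
backup = backup/

./darknet detector train data/obj.data cfg/yolov7-tiny-custom.cfg yolov7-tiny.conv.87 -map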

@AlexeyAB
Owner Author

AlexeyAB commented Jul 10, 2022

@agjunyent

Is there a possibility of getting yolov7-tiny.cfg in yaml format? I want to test the new repo, but I can only use the cfg with Darknet, and there's no support for it yet in the yolov7 repo.

There is yolov7-tiny (leaky_relu) in yaml format: https://github.com/WongKinYiu/yolov7/blob/main/cfg/training/yolov7-tiny.yaml

You can try to train it using these hyperparameters: https://github.com/WongKinYiu/yolov7/blob/main/data/hyp.scratch.tiny.yaml

Just delete the old train2017.cache and val2017.cache files (if you used old Pytorch-YOLO versions), and re-download the labels for training: https://github.com/WongKinYiu/yolov7/releases/download/v0.1/coco2017labels-segments.zip
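A training command along these lines should then work in the yolov7 repo (a sketch following the repo's README pattern; batch size, image size and device are assumptions to adjust for your setup):

python train.py --workers 8 --device 0 --batch-size 32 --data data/coco.yaml --img 640 640 --cfg cfg/training/yolov7-tiny.yaml --weights '' --name yolov7-tiny --hyp data/hyp.scratch.tiny.yaml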

@invo-mwiseman

invo-mwiseman commented Jul 12, 2022


The yolov7-tiny weights above, when compared to a personal YOLOv4-tiny model I've trained (using OpenCV 4.5.3 and CUDA 11.5), produce half the FPS.


Lost more than half the FPS. I've yet to compare their accuracy, but trading that much FPS for a smaller increase in accuracy feels like too much.

@liujin1975060601

It is necessary to develop a tool to convert yaml to cfg, both for training and for inference.
We are hoping for yaml2cfg,
in C++ or in Python.
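As a rough illustration of what such a tool would start from: the yolov7 training yaml is a list of [from, number, module, args] entries under backbone and head, which a converter would have to map onto Darknet [convolutional]/[route]/[maxpool]/[upsample]/[yolo] sections. A hypothetical Python sketch of the parsing half (assumes PyYAML and a yolov7-style yaml; it only prints the model graph, it is not a converter):

import yaml  # pip install pyyaml

def walk_yolov7_yaml(path):
    with open(path) as f:
        model = yaml.safe_load(f)
    print("classes:", model["nc"])
    # each entry is [from, number, module, args]
    for i, (frm, number, module, args) in enumerate(model["backbone"] + model["head"]):
        print(f"{i:3d}: from={frm} repeat={number} module={module} args={args}")

walk_yolov7_yaml("cfg/training/yolov7-tiny.yaml")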

@liujin1975060601

liujin1975060601 commented Jul 16, 2022

At the same batch size, the performance of Darknet is better than that of Pytorch, but the larger the batch size, the better the performance.
Darknet does not support half precision well, so the batch size cannot be increased: we found that 64/8 = 8 is the largest mini-batch that can be supported, while yolov5 can run with batch=16.

Can Darknet save GPU memory with half precision and allow a larger batch size, like yolov5?

@prateekgml

@AlexeyAB @WongKinYiu Can you please update the repo with 'How to train with a custom dataset' steps, like we have in Darknet's repo?
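For reference, custom-dataset training in the Pytorch repo follows the same pattern as other YOLOv5-style repos (a hedged sketch; the dataset yaml below and its paths/class names are placeholders, not something documented in this thread):

# data/custom.yaml (hypothetical)
train: data/custom/train.txt   # one image path per line
val: data/custom/val.txt
nc: 2
names: ['cat', 'dog']

python train.py --device 0 --batch-size 16 --img 640 640 --data data/custom.yaml --cfg cfg/training/yolov7-tiny.yaml --hyp data/hyp.scratch.tiny.yaml --weights yolov7-tiny.pt --name custom-tiny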

@clemtaylor

clemtaylor commented Jul 19, 2022

I tried out the yolov7-tiny.weights from above and got some rather broken output. I've run various versions of yolov[34] and have done GPU-months of training, so I don't think this is a newbie fail. Very strange...


./darknet detector test coco.51.data yolov7-tiny.cfg yolov7-tiny.weights -ext_output -i 0 data/dog.jpg
 CUDA-version: 11070 (11070), cuDNN: 8.4.0, CUDNN_HALF=1, GPU count: 2  
 CUDNN_HALF=1 
 OpenCV version: 4.5.3
 0 : compute_capability = 860, cudnn_half = 1, GPU: NVIDIA GeForce RTX 3090 
net.optimized_memory = 0 
mini_batch = 1, batch = 1, time_steps = 1, train = 0 
   layer   filters  size/strd(dil)      input                output
   0 Create CUDA-stream - 0 
 Create cudnn-handle 0
conv     32       3 x 3/ 2    416 x 416 x   3 ->  208 x 208 x  32 0.075 BF
... more network config output ...
data/dog.jpg: Predicted in 3.992000 milli-seconds.
boat: 100%	(left_x: -1407   top_y:  377   width: 2754   height:    0)
person: 99%	(left_x: -1407   top_y:  377   width: 2754   height:    0)
airplane: 100%	(left_x: -1407   top_y:  377   width: 2754   height:    0)
truck: 54%	(left_x: -1407   top_y:  377   width: 2754   height:    0)
traffic light: 100%	(left_x: -1407   top_y:  377   width: 2754   height:    0)
fire hydrant: 100%	(left_x: -1407   top_y:  377   width: 2754   height:    0)
stop sign: 100%	(left_x: -1407   top_y:  377   width: 2754   height:    0)
parking meter: 100%	(left_x: -1407   top_y:  377   width: 2754   height:    0)
bird: 99%	(left_x: -1407   top_y:  377   width: 2754   height:    0)
cat: 100%	(left_x: -1407   top_y:  377   width: 2754   height:    0)
horse: 100%	(left_x: -1407   top_y:  377   width: 2754   height:    0)
elephant: 100%	(left_x: -1407   top_y:  377   width: 2754   height:    0)
bear: 100%	(left_x: -1407   top_y:  377   width: 2754   height:    0)
handbag: 100%	(left_x: -1407   top_y:  377   width: 2754   height:    0)
suitcase: 100%	(left_x: -1407   top_y:  377   width: 2754   height:    0)
skis: 100%	(left_x: -1407   top_y:  377   width: 2754   height:    0)
kite: 100%	(left_x: -1407   top_y:  377   width: 2754   height:    0)
.... many many detections ....
sports ball: 100%	(left_x:  798   top_y:  554   width:    0   height:    0)
baseball glove: 100%	(left_x:  798   top_y:  554   width:    0   height:    0)
skateboard: 100%	(left_x:  798   top_y:  554   width:    0   height:    0)
tennis racket: 100%	(left_x:  798   top_y:  554   width:    0   height:    0)
wine glass: 100%	(left_x:  798   top_y:  554   width:    0   height:    0)
knife: 100%	(left_x:  798   top_y:  554   width:    0   height:    0)
banana: 100%	(left_x:  798   top_y:  554   width:    0   height:    0)
apple: 100%	(left_x:  798   top_y:  554   width:    0   height:    0)
orange: 100%	(left_x:  798   top_y:  554   width:    0   height:    0)
broccoli: 100%	(left_x:  798   top_y:  554   width:    0   height:    0)
toilet: 100%	(left_x:  798   top_y:  554   width:    0   height:    0)
laptop: 100%	(left_x:  798   top_y:  554   width:    0   height:    0)
remote: 100%	(left_x:  798   top_y:  554   width:    0   height:    0)
cell phone: 100%	(left_x:  798   top_y:  554   width:    0   height:    0)
oven: 100%	(left_x:  798   top_y:  554   width:    0   height:    0)
toaster: 100%	(left_x:  798   top_y:  554   width:    0   height:    0)
sink: 100%	(left_x:  798   top_y:  554   width:    0   height:    0)
refrigerator: 100%	(left_x:  798   top_y:  554   width:    0   height:    0)
clock: 100%	(left_x:  798   top_y:  554   width:    0   height:    0)

The scores are all very high (mostly 100%), and the x/y coordinates and dimensions are 0,0 or something like 2754,0 or 0,498
(dog.jpg is only 768x576).

Oddly, with an rtsp source, it looked like some of the outputs were roughly correct (cars with roughly correct bounding boxes, but low scores).

sha1sums:
5818a72c88dfa0f5fd5b9c9c0c15a82fdb8dc62d yolov7-tiny.cfg
f3212b63af4764b67da6155f4950567f2183556b yolov7-tiny.weights
5120e1125cd7ba2684284fc0e547a7c38dcfc473 data/dog.jpg

@AlexeyAB
Owner Author

@clemtaylor

darknet.exe detector test cfg/coco.data cfg/yolov7-tiny.cfg yolov7-tiny.weights -ext_output dog.jpg


@clemtaylor

@AlexeyAB

Ah, thanks. The problem was that there are two different versions of yolov7-tiny.cfg; the one I was using (which I thought I had downloaded with the weights) had an extra P5 section, and some of the activation functions were linear instead of logistic. The version from the git tree worked just fine.

@kemics

kemics commented Jul 21, 2022

Hi @AlexeyAB, really great work on yolov7!

Do you have plans to release a Darknet cfg file for yolov7? Currently only yolov7-tiny is released.

@AdamCuellar

@AlexeyAB

Are the weights you provided for yolov7-tiny here the same as the yolov7-tiny.pt weights from the pytorch repo? I've loaded both in pytorch (the .weights file using the previous pytorch YOLO repos and the .pt file in the yolov7 repo), and they seem to have different values.

@Ar-Ray-code

@AlexeyAB

darknet-yolov7 is compatible, and I was able to port it to work with darknet_ros smoothly.
Thanks ❗

darknet_ros : https://github.com/Ar-Ray-code/darknet_ros_fp16

@chjej202

chjej202 commented Jul 27, 2022

Hi, @AlexeyAB

I tested your uploaded yolov7-tiny weight file on pytorch and darknet with COCO val 2017.
I got quite different mAP results (pytorch AP50: 0.528, darknet AP50: 0.494).
Do you know why this difference happened?

I used the ScaledYOLOv4 repo to run the same yolov7-tiny darknet weights file directly on pytorch.

The following are the details of each result.
mAP result from pytorch (command: python test.py --img 416 --conf 0.001 --iou 0.65 --batch 32 --device 0 --cfg yolov7-tiny.cfg --weights yolov7-tiny.weights):
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.352
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.528
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.373
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.157
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.380
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.534
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.298
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.489
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.536
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.310
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.596
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.735

mAP result from darknet (command: ./darknet detector map cfg/coco_val.data cfg/yolov7-tiny.cfg yolov7-tiny.weights):
for conf_thresh = 0.25, precision = 0.63, recall = 0.47, F1-score = 0.54
for conf_thresh = 0.25, TP = 17221, FP = 9965, FN = 19114, average IoU = 52.36 %

IoU threshold = 50 %, used Area-Under-Curve for each unique Recall
mean average precision (mAP@0.50) = 0.494409, or 49.44 %

@AlexeyAB
Owner Author

@chjej202 There are different ways to calculate AP (different ways to compute the AUC: continuous, interpolated, ...), and so on.

Try to use the same method for both repos: either pycocotools or the COCO-Codalab-server: https://github.com/AlexeyAB/darknet/wiki/How-to-evaluate-accuracy-and-speed-of-YOLOv4

In any case Pytorch could provide slightly better AP, because

  • the Pytorch version resizes the network input to the image aspect ratio (rectangular inference)
  • Darknet doesn't resize the network; it always runs at the fixed size from the .cfg (see the sketch below)
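A rough sketch of that resizing difference (hypothetical code, not the exact logic of either repo; the network size and stride are just example values):

def darknet_input_shape(img_w, img_h, net_w=640, net_h=640):
    # Darknet always runs the network at the fixed width/height from the .cfg
    return net_w, net_h

def rectangular_input_shape(img_w, img_h, net_size=640, stride=32):
    # Pytorch-style rectangular inference: keep the aspect ratio,
    # then pad each side up to a multiple of the network stride
    scale = net_size / max(img_w, img_h)
    w = -(-round(img_w * scale) // stride) * stride
    h = -(-round(img_h * scale) // stride) * stride
    return w, h

print(darknet_input_shape(768, 576))      # (640, 640)
print(rectangular_input_shape(768, 576))  # (640, 480)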

@chjej202

chjej202 commented Jul 27, 2022


Hi @AlexeyAB

Thank you for your comment.
I uploaded the results to COCO-Codalab-server, and I got the following results.

pytorch version:
AP: 0.350
AP_50: 0.524
AP_75: 0.372
AP_small: 0.149
AP_medium: 0.379
AP_large: 0.534
AR_max_1: 0.296
AR_max_10: 0.476
AR_max_100: 0.509
AR_small: 0.265
AR_medium: 0.564
AR_large: 0.722

darknet version:
AP: 0.330
AP_50: 0.496
AP_75: 0.351
AP_small: 0.151
AP_medium: 0.353
AP_large: 0.498
AR_max_1: 0.290
AR_max_10: 0.472
AR_max_100: 0.505
AR_small: 0.281
AR_medium: 0.553
AR_large: 0.703

For the AP_50 value, the gap between the two results is only slightly reduced, and the pytorch version is still clearly better than the darknet version (pytorch: 0.524, darknet: 0.496, gap: 0.034 => 0.028).

Is there a major difference between darknet and pytorch related to pre-/post-processing?

@AlexeyAB
Owner Author

@chjej202

Is there major difference between darknet and pytorch related to pre-/post-processing?

For Inference and Validation - Yes.

@AlexeyAB
Owner Author

I added yolov7.cfg/weights and yolov7x.cfg/weights: #8595 (comment)

@VisionEp1

VisionEp1 commented Aug 12, 2022

  • it was already clarified by the posts above that there are some expected differences

@toplinuxsir

@AlexeyAB For yolov7x, where is the .conv file for training? Thanks!

@AlexeyAB
Owner Author

@toplinuxsir I added it: #8595 (comment)

YOLOv7x:

cfg: https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov7x.cfg
weights: https://github.com/AlexeyAB/darknet/releases/download/yolov4/yolov7x.weights
weights for fine-tuning: https://github.com/AlexeyAB/darknet/releases/download/yolov4/yolov7x.conv.147

@toplinuxsir

@AlexeyAB Thanks!

@xinsuinizhuan

Does Darknet now support training yolov7-tiny, yolov7, and yolov7x on a custom dataset? How does the accuracy compare with the Pytorch version?

@bensonreed

@AlexeyAB Thanks for the great work. I tried to train on my own dataset using yolov7x, but something seems to be going wrong: the loss curve first decreased and then increased, and the resulting mAP fluctuates a lot. What could be the reason?
I modified the following parameters in the yolov7x.cfg file (see the excerpt below):
batch=64, subdivisions=32, max_batches=6000, steps=4800,5400
filters=18, classes=1 in the 3 yolo layers
The training command line:
darknet.exe detector train data/Oil_stain.data cfg/yolov7x.cfg yolov7x.conv.147 -map
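For reference, a sketch of what those edits look like inside the cfg (the values are the ones listed above; the [convolutional] + [yolo] pair repeats for each of the 3 detection heads, and filters = (classes + 5) * 3 = 18 goes in the [convolutional] layer immediately before each [yolo] layer):

[net]
batch=64
subdivisions=32
max_batches=6000
steps=4800,5400
...

# repeated for each of the 3 heads:
[convolutional]
# filters = (classes + 5) * 3 = (1 + 5) * 3
filters=18
...
[yolo]
classes=1
...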

@cuekoo

cuekoo commented Aug 29, 2022

I tested yolov7-tiny with darknet and the results were fine. However, when tested with OpenCV (version 4.5.3), more than 7k objects are detected. Did I miss some setting?

import cv2 as cv

def main():

    className = "./model/coco.names"

    img = cv.imread("data/dog.jpg")
    rows, cols = img.shape[:2]
    cfg = "yolov7-tiny.cfg"
    weights = "yolov7-tiny.weights"
    net = cv.dnn_DetectionModel(cfg, weights)
    net.setInputSize(cols, rows)
    net.setInputScale(1.0 / 255)
    net.setInputSwapRB(True)

    # class names are loaded but not used in this minimal repro
    with open(className, 'rt') as f:
        names = f.read().rstrip('\n').split('\n')

    classes, confidences, boxes = net.detect(img, confThreshold=0.25, nmsThreshold=0.4)
    print("object count: {}".format(len(boxes)))

if __name__ == "__main__":
    main()

Output:

object count: 7678

@LdDl LdDl mentioned this issue Aug 29, 2022
@ou525

ou525 commented Aug 30, 2022

Thank you very much for sharing. Can you provide a tool to convert the pytorch weights of WongKinYiu/yolov7 into Darknet cfg/weights files?

@opentld

opentld commented Sep 6, 2022


I love this project, I'm a newbie!!!

@microboym

Model inference time (with NMS)

darknet.exe detector demo cfg/coco.data cfg/yolov7-tiny.cfg yolov7-tiny.weights test.mp4 -dont_show


@AlexeyAB What device was the result generated on?

@harikiran17

Hey @AlexeyAB, I wanted to compare mAP scores between yolov4 and yolov7 on the COCO 2017 validation set. I calculated mAP for yolov4 using ./darknet detector map cfg/coco.data yolov4.cfg yolov4.weights, where I downloaded the weights and the cfg file from the model zoo, and these are the results I got (I changed the width and height to 640 in the cfg file):
[results screenshot]
For yolov7 I ran the pytorch repo using python test.py --data coco.yaml --img-size 640 --batch 32 --conf 0.25 --iou 0.5 --weights yolov7.pt --verbose and got the following results:
[results screenshot]

I see a huge difference in mAP scores between yolov4 and yolov7. Is this expected, or am I doing something wrong?

@xiaoujun

@cenit
Collaborator

cenit commented Feb 23, 2023

They work for me.
Is anyone else seeing this problem?

@1027663760

QQ:1027663760

@srasilla

It is necessary to develop a tool to convert yaml to cfg, both for training and for inference. We are hoping for yaml2cfg, in C++ or in Python.

C++ please! ;)

@jacob-m-nash

Hi @AlexeyAB,

I have a model that I trained on a custom dataset in Pytorch, but I now need it in Darknet format.

You mentioned

2. There is at least YOLOv7-tiny for Darknet, but it was trained using Pytorch and converted to the Darknet. I didn't test 

How easy is it to convert my weights into Darknet format, and how would I do it? Or would it be best to retrain using Darknet?

Thanks,
Jacob

@vsaw

vsaw commented Aug 1, 2023

I ran quick experiments to compare YOLOv4 and v7 performance. From what I can tell, v7 is slower and less accurate than v4. Anybody else see similar results?

@invo-mwiseman

invo-mwiseman commented Aug 1, 2023

I ran quick experiments to compare YOLOv4 and v7 performance. From what I can tell, v7 is slower and less accurate than v4. Anybody else see similar results?

For me the performance is the same (or similar enough it's not noticeable), but the accuracy is notably worse.

I made early comparisons when it was released, and I was mocked for pointing out it was worse when tested in a real-world scenario. 🤣

Either way, don't bother with v5, v7 or v8. They all have the same problem.

@stephanecharette
Collaborator

I ran quick experiments to compare YOLOv4 and v7 performance. From what I can tell, v7 is slower and less accurate than v4. Anybody else see similar results?

This is what I reported last year as well: https://www.youtube.com/watch?v=JSgDs0XXz8M

@EyGy

EyGy commented Aug 2, 2023

I do agree with @vsaw, @invo-mwiseman and @stephanecharette. Especially in industrial real-life use cases, I lost quite some time trying to upgrade to "newer" architectures. The performance of Scaled-YOLOv4 remains unbeaten. My best guess is that newer versions like v7 are somewhat fine-tuned to benchmark datasets like COCO at the architectural level and not so well suited for general use cases.

Would love to get some more scientific, in-depth insight into this, since these findings are counterintuitive...
