diff --git a/README.md b/README.md
index fa0645d4fd2c..59abd084572c 100644
--- a/README.md
+++ b/README.md
@@ -62,15 +62,14 @@ See the [YOLOv5 Docs](https://docs.ultralytics.com) for full documentation on tr
Install
-[**Python>=3.6.0**](https://www.python.org/) is required with all
-[requirements.txt](https://github.com/ultralytics/yolov5/blob/master/requirements.txt) installed including
-[**PyTorch>=1.7**](https://pytorch.org/get-started/locally/):
-
+Clone repo and install [requirements.txt](https://github.com/ultralytics/yolov5/blob/master/requirements.txt) in a
+[**Python>=3.6.0**](https://www.python.org/) environment, including
+[**PyTorch>=1.7**](https://pytorch.org/get-started/locally/).
```bash
-$ git clone https://github.com/ultralytics/yolov5
-$ cd yolov5
-$ pip install -r requirements.txt
+git clone https://github.com/ultralytics/yolov5 # clone
+cd yolov5
+pip install -r requirements.txt # install
```
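As a quick sanity check (a minimal sketch, not part of the install steps above), the following prints the installed PyTorch version and whether a CUDA device is visible:

```bash
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"  # e.g. 1.10.0 True
```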
@@ -78,8 +77,9 @@ $ pip install -r requirements.txt
Inference
-Inference with YOLOv5 and [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36). Models automatically download
-from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases).
+Inference with YOLOv5 and [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36).
+[Models](https://github.com/ultralytics/yolov5/tree/master/models) download automatically from the latest
+YOLOv5 [release](https://github.com/ultralytics/yolov5/releases).
```python
import torch
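
# A minimal PyTorch Hub inference sketch; the 'yolov5s' model name and the sample
# image URL are illustrative, and any supported source (file, URL, PIL, OpenCV,
# numpy array, or list) may be passed instead:
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')  # download and load pretrained YOLOv5s

img = 'https://ultralytics.com/images/zidane.jpg'  # image source

results = model(img)  # inference
results.print()  # or .show(), .save(), .crop(), .pandas(), etc.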
@@ -104,17 +104,17 @@ results.print() # or .show(), .save(), .crop(), .pandas(), etc.
Inference with detect.py
-`detect.py` runs inference on a variety of sources, downloading models automatically from
-the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases) and saving results to `runs/detect`.
+`detect.py` runs inference on a variety of sources, downloading [models](https://github.com/ultralytics/yolov5/tree/master/models) automatically from
+the latest YOLOv5 [release](https://github.com/ultralytics/yolov5/releases) and saving results to `runs/detect`.
```bash
-$ python detect.py --source 0 # webcam
- img.jpg # image
- vid.mp4 # video
- path/ # directory
- path/*.jpg # glob
- 'https://youtu.be/Zgi9g1ksQHc' # YouTube
- 'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream
+python detect.py --source 0 # webcam
+ img.jpg # image
+ vid.mp4 # video
+ path/ # directory
+ path/*.jpg # glob
+ 'https://youtu.be/Zgi9g1ksQHc' # YouTube
+ 'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream
```
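For example, a single concrete invocation might look like this (the sample image ships with the repo; the `yolov5s.pt` weights download automatically if missing):

```bash
python detect.py --weights yolov5s.pt --source data/images/bus.jpg  # results saved to runs/detect/exp
```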
@@ -122,16 +122,20 @@ $ python detect.py --source 0 # webcam
Training
-Run commands below to reproduce results
-on [COCO](https://github.com/ultralytics/yolov5/blob/master/data/scripts/get_coco.sh) dataset (dataset auto-downloads on
-first use). Training times for YOLOv5s/m/l/x are 2/4/6/8 days on a single V100 (multi-GPU times faster). Use the
-largest `--batch-size` your GPU allows (batch sizes shown for 16 GB devices).
+The commands below reproduce YOLOv5 [COCO](https://github.com/ultralytics/yolov5/blob/master/data/scripts/get_coco.sh)
+results. [Models](https://github.com/ultralytics/yolov5/tree/master/models)
+and [datasets](https://github.com/ultralytics/yolov5/tree/master/data) download automatically from the latest
+YOLOv5 [release](https://github.com/ultralytics/yolov5/releases). Training times for YOLOv5n/s/m/l/x are
+1/2/4/6/8 days on a V100 GPU ([Multi-GPU](https://github.com/ultralytics/yolov5/issues/475) training is proportionally faster). Use the
+largest `--batch-size` possible, or pass `--batch-size -1` for
+YOLOv5 [AutoBatch](https://github.com/ultralytics/yolov5/pull/5092). Batch sizes shown for V100-16GB.
```bash
-$ python train.py --data coco.yaml --cfg yolov5s.yaml --weights '' --batch-size 64
- yolov5m 40
- yolov5l 24
- yolov5x 16
+python train.py --data coco.yaml --cfg yolov5n.yaml --weights '' --batch-size 128
+ yolov5s 64
+ yolov5m 40
+ yolov5l 24
+ yolov5x 16
```
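As a supplement to the table above, a hedged sketch of an AutoBatch run and one common Multi-GPU DDP launch (device indices and worker count are placeholders; `torch.distributed.run` requires PyTorch>=1.9, see the Multi-GPU tutorial linked above):

```bash
python train.py --data coco.yaml --cfg yolov5s.yaml --weights '' --batch-size -1  # AutoBatch selects the batch size
python -m torch.distributed.run --nproc_per_node 2 train.py --data coco.yaml --cfg yolov5s.yaml --weights '' --device 0,1  # 2-GPU DDP
```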
@@ -225,6 +229,7 @@ We are super excited about our first-ever Ultralytics YOLOv5 🚀 EXPORT Competi
### Pretrained Checkpoints
[assets]: https://github.com/ultralytics/yolov5/releases
+
[TTA]: https://github.com/ultralytics/yolov5/issues/303
|Model |size<br>(pixels) |mAP<sup>val</sup><br>0.5:0.95 |mAP<sup>val</sup><br>0.5 |Speed<br>CPU b1<br>(ms) |Speed<br>V100 b1<br>(ms) |Speed<br>V100 b32<br>(ms) |params<br>(M) |FLOPs<br>@640 (B)
@@ -257,7 +262,6 @@ We love your input! We want to make contributing to YOLOv5 as easy and transpare
-
## Contact
For YOLOv5 bugs and feature requests please visit [GitHub Issues](https://github.com/ultralytics/yolov5/issues). For business inquiries or
diff --git a/detect.py b/detect.py
index e6e74ea7dfeb..1393f79746f6 100644
--- a/detect.py
+++ b/detect.py
@@ -2,14 +2,26 @@
"""
Run inference on images, videos, directories, streams, etc.
-Usage:
- $ python path/to/detect.py --weights yolov5s.pt --source 0 # webcam
- img.jpg # image
- vid.mp4 # video
- path/ # directory
- path/*.jpg # glob
+Usage - sources:
+ $ python path/to/detect.py --weights yolov5s.pt --source 0 # webcam
+ img.jpg # image
+ vid.mp4 # video
+ path/ # directory
+ path/*.jpg # glob
'https://youtu.be/Zgi9g1ksQHc' # YouTube
'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream
+
+Usage - formats:
+ $ python path/to/detect.py --weights yolov5s.pt # PyTorch
+ yolov5s.torchscript # TorchScript
+ yolov5s.onnx # ONNX Runtime or OpenCV DNN with --dnn
+ yolov5s.mlmodel # CoreML (under development)
+ yolov5s_openvino_model # OpenVINO (under development)
+ yolov5s_saved_model # TensorFlow SavedModel
+ yolov5s.pb # TensorFlow protobuf
+ yolov5s.tflite # TensorFlow Lite
+ yolov5s_edgetpu.tflite # TensorFlow Edge TPU
+ yolov5s.engine # TensorRT
"""
import argparse
diff --git a/export.py b/export.py
index a0758010e816..67e32305ded1 100644
--- a/export.py
+++ b/export.py
@@ -2,18 +2,19 @@
"""
Export a YOLOv5 PyTorch model to other formats. TensorFlow exports authored by https://github.com/zldrobit
-Format | Example | `--include ...` argument
---- | --- | ---
-PyTorch | yolov5s.pt | -
-TorchScript | yolov5s.torchscript | `torchscript`
-ONNX | yolov5s.onnx | `onnx`
-CoreML | yolov5s.mlmodel | `coreml`
-OpenVINO | yolov5s_openvino_model/ | `openvino`
-TensorFlow SavedModel | yolov5s_saved_model/ | `saved_model`
-TensorFlow GraphDef | yolov5s.pb | `pb`
-TensorFlow Lite | yolov5s.tflite | `tflite`
-TensorFlow.js | yolov5s_web_model/ | `tfjs`
-TensorRT | yolov5s.engine | `engine`
+Format | Example | `--include ...` argument
+--- | --- | ---
+PyTorch | yolov5s.pt | -
+TorchScript | yolov5s.torchscript | `torchscript`
+ONNX | yolov5s.onnx | `onnx`
+CoreML | yolov5s.mlmodel | `coreml`
+OpenVINO | yolov5s_openvino_model/ | `openvino`
+TensorFlow SavedModel | yolov5s_saved_model/ | `saved_model`
+TensorFlow GraphDef | yolov5s.pb | `pb`
+TensorFlow Lite | yolov5s.tflite | `tflite`
+TensorFlow Edge TPU | yolov5s_edgetpu.tflite | `edgetpu`
+TensorFlow.js | yolov5s_web_model/ | `tfjs`
+TensorRT | yolov5s.engine | `engine`
Usage:
$ python path/to/export.py --weights yolov5s.pt --include torchscript onnx coreml openvino saved_model tflite tfjs
@@ -27,6 +28,7 @@
yolov5s_saved_model
yolov5s.pb
yolov5s.tflite
+ yolov5s_edgetpu.tflite
yolov5s.engine
TensorFlow.js:
diff --git a/train.py b/train.py
index 304c001b6547..bd2fb5898cb9 100644
--- a/train.py
+++ b/train.py
@@ -1,10 +1,17 @@
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
"""
-Train a YOLOv5 model on a custom dataset
+Train a YOLOv5 model on a custom dataset.
+
+Models and datasets download automatically from the latest YOLOv5 release.
+Models: https://github.com/ultralytics/yolov5/tree/master/models
+Datasets: https://github.com/ultralytics/yolov5/tree/master/data
+Tutorial: https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data
Usage:
- $ python path/to/train.py --data coco128.yaml --weights yolov5s.pt --img 640
+ $ python path/to/train.py --data coco128.yaml --weights yolov5s.pt --img 640 # from pretrained (RECOMMENDED)
+ $ python path/to/train.py --data coco128.yaml --weights '' --cfg yolov5s.yaml --img 640 # from scratch
"""
+
import argparse
import math
import os
diff --git a/val.py b/val.py
index c1fcf61b468c..f7c9ef5e60d2 100644
--- a/val.py
+++ b/val.py
@@ -3,7 +3,19 @@
Validate a trained YOLOv5 model's accuracy on a custom dataset
Usage:
- $ python path/to/val.py --data coco128.yaml --weights yolov5s.pt --img 640
+ $ python path/to/val.py --weights yolov5s.pt --data coco128.yaml --img 640
+
+Usage - formats:
+ $ python path/to/val.py --weights yolov5s.pt # PyTorch
+ yolov5s.torchscript # TorchScript
+ yolov5s.onnx # ONNX Runtime or OpenCV DNN with --dnn
+ yolov5s.mlmodel # CoreML (under development)
+ yolov5s_openvino_model # OpenVINO (under development)
+ yolov5s_saved_model # TensorFlow SavedModel
+ yolov5s.pb # TensorFlow protobuf
+ yolov5s.tflite # TensorFlow Lite
+ yolov5s_edgetpu.tflite # TensorFlow Edge TPU
+ yolov5s.engine # TensorRT
"""
import argparse