
Does YOLOv5 have horizontal flip data augmentation? #1185

Closed
wwdok opened this issue Oct 21, 2020 · 19 comments
Labels
question Further information is requested

Comments

@wwdok
Contributor

wwdok commented Oct 21, 2020

❔Question

I see some basic data augmentations in train.py, but I don't see horizontal flip, mosaic, and so on. Does YOLOv5 support more data augmentations?

@wwdok wwdok added the question Further information is requested label Oct 21, 2020
@github-actions
Contributor

github-actions bot commented Oct 21, 2020

Hello @wwdok, thank you for your interest in our work! Please visit our Custom Training Tutorial to get started, and see our Jupyter Notebook, Docker Image, and Google Cloud Quickstart Guide for example environments.

If this is a bug report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom model or data training question, please note Ultralytics does not provide free personal support. As a leader in vision ML and AI, we do offer professional consulting, from simple expert advice up to delivery of fully customized, end-to-end production solutions for our clients, such as:

  • Cloud-based AI systems operating on hundreds of HD video streams in realtime.
  • Edge AI integrated into custom iOS and Android apps for realtime 30 FPS video inference.
  • Custom data training, hyperparameter evolution, and model exportation to any destination.

For more information please visit https://www.ultralytics.com.

@glenn-jocher
Member

Augmentation hyperparameters are located here:

# Hyperparameters for COCO training from scratch
# python train.py --batch 40 --cfg yolov5m.yaml --weights '' --data coco.yaml --img 640 --epochs 300
# See tutorials for hyperparameter evolution https://github.com/ultralytics/yolov5#tutorials
lr0: 0.01 # initial learning rate (SGD=1E-2, Adam=1E-3)
lrf: 0.2 # final OneCycleLR learning rate (lr0 * lrf)
momentum: 0.937 # SGD momentum/Adam beta1
weight_decay: 0.0005 # optimizer weight decay 5e-4
warmup_epochs: 3.0 # warmup epochs (fractions ok)
warmup_momentum: 0.8 # warmup initial momentum
warmup_bias_lr: 0.1 # warmup initial bias lr
box: 0.05 # box loss gain
cls: 0.5 # cls loss gain
cls_pw: 1.0 # cls BCELoss positive_weight
obj: 1.0 # obj loss gain (scale with pixels)
obj_pw: 1.0 # obj BCELoss positive_weight
iou_t: 0.20 # IoU training threshold
anchor_t: 4.0 # anchor-multiple threshold
# anchors: 0 # anchors per output grid (0 to ignore)
fl_gamma: 0.0 # focal loss gamma (efficientDet default gamma=1.5)
hsv_h: 0.015 # image HSV-Hue augmentation (fraction)
hsv_s: 0.7 # image HSV-Saturation augmentation (fraction)
hsv_v: 0.4 # image HSV-Value augmentation (fraction)
degrees: 0.0 # image rotation (+/- deg)
translate: 0.1 # image translation (+/- fraction)
scale: 0.5 # image scale (+/- gain)
shear: 0.0 # image shear (+/- deg)
perspective: 0.0 # image perspective (+/- fraction), range 0-0.001
flipud: 0.0 # image flip up-down (probability)
fliplr: 0.5 # image flip left-right (probability)
mosaic: 1.0 # image mosaic (probability)
mixup: 0.0 # image mixup (probability)
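To make a probability hyperparameter like fliplr: 0.5 concrete, here is a minimal numpy sketch of a probability-gated horizontal flip; the function name and the normalized-xywh box format are illustrative assumptions, not YOLOv5's actual dataloader code:

```python
import numpy as np

def random_fliplr(image, boxes, p=0.5):
    """Horizontally flip an image and its boxes with probability p.

    boxes: array of shape (n, 4) in normalized [x_center, y_center, w, h]
    format. Hypothetical helper for illustration only; YOLOv5 performs
    its flips inside the training dataloader.
    """
    if np.random.rand() < p:
        image = np.fliplr(image)          # mirror the pixel columns
        boxes = boxes.copy()
        boxes[:, 0] = 1.0 - boxes[:, 0]   # mirror x_center; y, w, h unchanged
    return image, boxes
```

With normalized xywh labels, mirroring the image only requires remapping x_center to 1 - x_center, which is why flip augmentation is essentially free.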

@wwdok
Contributor Author

wwdok commented Oct 21, 2020

@glenn-jocher Sorry, I found my repo was out of date; the newest repo already has the fliplr hyperparameter! Thanks!

@wwdok wwdok closed this as completed Oct 21, 2020
@glenn-jocher
Member

@wwdok ah, you should git pull often, as the repo changes nightly.

@wwdok
Contributor Author

wwdok commented Oct 22, 2020

@wwdok ah, you should git pull often, as repo changes nightly.

Hi, I see that train.py from 3 months ago had a mixed_precision setting, but the latest train.py no longer includes it. Why was it removed?
[screenshot: older train.py showing the mixed_precision setting]

@glenn-jocher
Member

@wwdok mixed precision is integrated by default now.
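For readers wondering what "integrated by default" means in practice, here is a minimal sketch of a mixed-precision training step using PyTorch AMP; the function, model, and loss are illustrative assumptions, not YOLOv5's actual training loop:

```python
import torch

def train_step_amp(model, images, targets, optimizer, scaler, device="cuda"):
    """One training step with PyTorch automatic mixed precision (AMP).

    autocast runs the forward pass in reduced precision where safe, and
    GradScaler rescales the loss so small float16 gradients do not
    underflow. Illustrative sketch only.
    """
    optimizer.zero_grad()
    with torch.autocast(device_type=device, enabled=(device == "cuda")):
        loss = torch.nn.functional.mse_loss(model(images), targets)
    scaler.scale(loss).backward()   # backprop on the scaled loss
    scaler.step(optimizer)          # unscales grads, then optimizer.step()
    scaler.update()                 # adjusts the scale factor for next step
    return loss.item()
```

On CPU (or with the scaler constructed as GradScaler(enabled=False)) this degrades gracefully to an ordinary full-precision step, which is why the integration can be always-on.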

@wwdok
Contributor Author

wwdok commented Oct 22, 2020

@glenn-jocher Great ! 😀

@LogicNg

LogicNg commented Jul 7, 2021

So in every epoch the data will be augmented according to these probabilities?

@glenn-jocher
Member

@LogicNg yes, every mosaic.

@julian-douglas

julian-douglas commented Aug 9, 2021

I have a question about the augmentations. I don't understand how the HSV augmentations are generated. For example, we have:
hsv_s: 0.7 # image HSV-Saturation augmentation (fraction)

What does this do? Is this changing the saturation to 70% of the original? Or is it ± 70%, so 30% and 170%? Or is it a 0.7 probability that the saturation will change? If so, how much will it change by? Will it both increase and decrease? Will it affect every image?

Also, for the image rotations, if I put 90, will that be clockwise or anticlockwise?

Thank you

@glenn-jocher
Member

@julian-douglas augmentations are defined in your hyperparameter file and implemented in datasets.py. Albumentations transforms may be added on top; see the Albumentations PR.

YOLOv5 augmentation
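To make the hsv_s: 0.7 semantics concrete: it is not a probability and not a fixed 70% scaling. The saturation channel is multiplied by a random gain drawn from [1 - 0.7, 1 + 0.7], redrawn for every training image, so saturation can both increase and decrease. A sketch of just the gain calculation, modeled loosely on the approach in datasets.py (function name is an assumption for illustration):

```python
import numpy as np

def augment_hsv_gains(hgain=0.015, sgain=0.7, vgain=0.4):
    """Draw random per-image HSV gains, YOLOv5-hyperparameter style.

    Each gain is a multiplicative factor sampled uniformly from
    [1 - g, 1 + g]; e.g. sgain=0.7 yields saturation factors in
    [0.3, 1.7]. Simplified sketch of the gain step only; the real code
    then applies these via lookup tables on the H, S, V channels.
    """
    r = np.random.uniform(-1, 1, 3) * [hgain, sgain, vgain] + 1
    return r  # gains for the H, S, V channels respectively
```

So with the defaults above, every image gets its hue nudged by at most ±1.5%, saturation scaled anywhere between 30% and 170%, and value between 60% and 140%.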

@RainbowSun11Q2H

Thanks for your code.

Does "degrees" in the hyperparameter files mean rotation augmentation? And what is its range?

Thanks.

@glenn-jocher
Member

glenn-jocher commented May 8, 2022

@RainbowSun11Q2H 👋 Hello! Thanks for asking about image augmentation. degree limits are +/- 180

YOLOv5 🚀 applies online imagespace and colorspace augmentations in the trainloader (but not the val_loader) to present a new and unique augmented Mosaic (original image + 3 random images) each time an image is loaded for training. Images are never presented twice in the same way.

YOLOv5 augmentation

Augmentation Hyperparameters

The hyperparameters used to define these augmentations are in your hyperparameter file (default data/hyp.scratch.yaml) defined when training:

python train.py --hyp hyp.scratch-low.yaml

lr0: 0.01 # initial learning rate (SGD=1E-2, Adam=1E-3)
lrf: 0.01 # final OneCycleLR learning rate (lr0 * lrf)
momentum: 0.937 # SGD momentum/Adam beta1
weight_decay: 0.0005 # optimizer weight decay 5e-4
warmup_epochs: 3.0 # warmup epochs (fractions ok)
warmup_momentum: 0.8 # warmup initial momentum
warmup_bias_lr: 0.1 # warmup initial bias lr
box: 0.05 # box loss gain
cls: 0.5 # cls loss gain
cls_pw: 1.0 # cls BCELoss positive_weight
obj: 1.0 # obj loss gain (scale with pixels)
obj_pw: 1.0 # obj BCELoss positive_weight
iou_t: 0.20 # IoU training threshold
anchor_t: 4.0 # anchor-multiple threshold
# anchors: 3 # anchors per output layer (0 to ignore)
fl_gamma: 0.0 # focal loss gamma (efficientDet default gamma=1.5)
hsv_h: 0.015 # image HSV-Hue augmentation (fraction)
hsv_s: 0.7 # image HSV-Saturation augmentation (fraction)
hsv_v: 0.4 # image HSV-Value augmentation (fraction)
degrees: 0.0 # image rotation (+/- deg)
translate: 0.1 # image translation (+/- fraction)
scale: 0.5 # image scale (+/- gain)
shear: 0.0 # image shear (+/- deg)
perspective: 0.0 # image perspective (+/- fraction), range 0-0.001
flipud: 0.0 # image flip up-down (probability)
fliplr: 0.5 # image flip left-right (probability)
mosaic: 1.0 # image mosaic (probability)
mixup: 0.0 # image mixup (probability)
copy_paste: 0.0 # segment copy-paste (probability)

Augmentation Previews

You can view the effect of your augmentation policy in your train_batch*.jpg images once training starts. These images will be in your train logging directory, typically yolov5/runs/train/exp:

train_batch0.jpg shows train batch 0 mosaics and labels:

YOLOv5 Albumentations Integration

YOLOv5 🚀 is now fully integrated with Albumentations, a popular open-source image augmentation package. Now you can train the world's best Vision AI models even better with custom Albumentations 😃!

PR #3882 implements this integration, which will automatically apply Albumentations transforms during YOLOv5 training if albumentations>=1.0.3 is installed in your environment. See #3882 for full details.

Example train_batch0.jpg on the COCO128 dataset with Blur, MedianBlur and ToGray. See the YOLOv5 Notebooks to reproduce.

Good luck 🍀 and let us know if you have any other questions!
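The Mosaic described above (original image + 3 random images on one canvas) can be sketched in a few lines of numpy. This simplified version just tiles four images into fixed quadrants; the real implementation also picks a random center point and remaps the box labels:

```python
import numpy as np

def mosaic4(images, size=640):
    """Combine 4 images into a 2x2 mosaic canvas, YOLOv5-style in spirit.

    Simplified illustration: each image is cropped into one quadrant of a
    gray canvas. 'images' is a list of 4 HxWx3 uint8 arrays.
    """
    s = size // 2
    canvas = np.full((size, size, 3), 114, dtype=np.uint8)  # gray letterbox fill
    corners = [(0, 0), (0, s), (s, 0), (s, s)]              # top-left of each quadrant
    for img, (y, x) in zip(images, corners):
        h, w = min(img.shape[0], s), min(img.shape[1], s)
        canvas[y:y + h, x:x + w] = img[:h, :w]
    return canvas
```

Because three of the four tiles are drawn at random each time an image is loaded, the same source image lands in a different mosaic every epoch, which is what "images are never presented twice in the same way" refers to.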

@Bara-Elba

Can I indicate exactly how many images should be generated?
I have a small dataset and I want to augment it ×10; I searched the code and didn't find where to do it.
Is there a way to do so?

@glenn-jocher
Member

glenn-jocher commented May 11, 2022

@Bara-Elba 👋 Hello! Thanks for asking about image augmentation. YOLOv5 🚀 applies online imagespace and colorspace augmentations in the trainloader (but not the val_loader) to present a new and unique augmented Mosaic (original image + 3 random images) each time an image is loaded for training. Images are never presented twice in the same way.

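Since the augmentation is applied online in the trainloader, there is no hyperparameter for "how many images to generate". If a fixed ×10 dataset on disk is genuinely needed, an offline expansion along these lines is one workaround; load_image, augment, and save_image here are hypothetical callables you would supply yourself (e.g. wrapping cv2 or PIL), not YOLOv5 APIs:

```python
from pathlib import Path

def expand_dataset(load_image, augment, save_image, paths, copies=10):
    """Write 'copies' augmented variants of each image to disk.

    Illustrative sketch of offline dataset expansion; the three callables
    are user-supplied assumptions. Remember to transform the label files
    alongside the images if your augmentations are geometric.
    """
    for p in paths:
        img = load_image(p)
        for k in range(copies):
            aug = augment(img)                                  # new random variant
            save_image(aug, Path(p).stem + f"_aug{k}" + Path(p).suffix)
```

Note that training directly with the online augmentations usually achieves the same effect with less disk churn, since every epoch already sees freshly augmented variants.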

@Titya79

Titya79 commented Feb 4, 2025

[quoting glenn-jocher above] Augmentation hyperparameters are located here: yolov5/data/hyp.scratch.yaml, Lines 1 to 33 in 83deec1 (see the full listing earlier in this thread).
I have tried this many times, but the resulting args.yaml does not reflect the modifications.

  1. Modify the data.yaml file and call train:
    model = YOLO("yolov5nu.pt")  # load a pretrained model (recommended for training)
    yaml = "data_aug.yaml"
    results = model.train(data=yaml, epochs=1, imgsz=640)

data_aug.yaml file:

# Ultralytics 🚀 AGPL-3.0 License - https://ultralytics.com/license

# Train/val/test directories
path: C:/srt/raw_data/datasets_resized2 # dataset root dir
train: images/train # train images
val: images/val # val images
test: # test images (optional)

# Classes
names:
  0: Traffic Sign

# these parameters are all zero since we want to use the Albumentations framework

fl_gamma: 0.0 # focal loss gamma (efficientDet default gamma=1.5)
hsv_h: 0 # image HSV-Hue augmentation (fraction)
hsv_s: 0 # image HSV-Saturation augmentation (fraction)
hsv_v: 0 # image HSV-Value augmentation (fraction)
degrees: 0.0 # image rotation (+/- deg)
translate: 0 # image translation (+/- fraction)
scale: 0 # image scale (+/- gain)
shear: 0 # image shear (+/- deg)
perspective: 0.0 # image perspective (+/- fraction), range 0-0.001
flipud: 0.0 # image flip up-down (probability)
fliplr: 0.0 # image flip left-right (probability)
mosaic: 0.0 # image mosaic (probability)
mixup: 0.0 # image mixup (probability)
copy_paste: 0.0 # segment copy-paste (probability)

  2. Modify the base yolov5/data/hyps/hyp.scratch-low.yaml and call train:
    model = YOLO("yolov5nu.pt")
    yaml = "data.yaml"  # no augmentations set
    results = model.train(data=yaml, epochs=1, imgsz=640)

Could you tell me the correct way to modify the parameters in the yaml file, instead of setting them in the training code? If I set them in train it works (results = model.train(data=yaml, epochs=1, imgsz=640, fliplr=1)).
Is the modification the same for YOLOv5 and YOLOv8?

@pderrenger
Member

For YOLOv5 augmentation parameter modifications, create a custom hyp.yaml file (copy from hyp.scratch.yaml) with your desired values, then train with --hyp your_hyp.yaml. YOLOv5 and YOLOv8 handle hyperparameters differently: for YOLOv5, use the hyp.yaml file, while YOLOv8 supports direct argument passing in train(). Ensure you're using the latest YOLOv5 code with git pull.
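One way to script the "copy and override" step, assuming the flat key: value # comment layout shown earlier in this thread; this is a stdlib-only sketch (the function name is an assumption, and it does not handle nested YAML), after which you would pass the result to train.py with --hyp:

```python
from pathlib import Path

def override_hyp(src, dst, overrides):
    """Copy a flat 'key: value # comment' hyp file, overriding given keys.

    Illustrative sketch without PyYAML; preserves trailing comments and
    leaves untouched keys (and comment lines) exactly as they were.
    """
    lines = []
    for line in Path(src).read_text().splitlines():
        key = line.split(":")[0].strip()
        if key in overrides and ":" in line and not line.lstrip().startswith("#"):
            comment = "  # " + line.split("#", 1)[1].strip() if "#" in line else ""
            line = f"{key}: {overrides[key]}" + comment
        lines.append(line)
    Path(dst).write_text("\n".join(lines) + "\n")
```

For example, override_hyp("data/hyps/hyp.scratch-low.yaml", "hyp_custom.yaml", {"fliplr": 1.0}) would produce a copy with only the flip probability changed.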

@Titya79

Titya79 commented Feb 7, 2025

Thank you so much...

@pderrenger
Member

You're welcome! We're glad you find YOLOv5 useful. All credit goes to our amazing open-source community and contributors. Let us know if you have any specific questions as you explore the project further! 🚀
