Add Augmentation on Test Set #2625
Augmentation does not generate new data; it simply views the existing data in different, random ways each time an image is used, so an image is never seen the same way twice during training. Training augmentation settings are defined in a hyperparameter file (see lines 22 to 33 in 9b92d3e). You can use hyperparameter evolution to optimize these values for your custom training requirements.
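As a sketch, the augmentation-related entries of such a hyperparameter file look roughly like the dict below. The keys mirror typical YOLOv5 hyp names, but the values are illustrative; check the actual `data/hyp.*.yaml` in your checkout.

```python
# Hypothetical augmentation section of a YOLOv5 hyperparameter file,
# expressed as a Python dict. Values are illustrative defaults, not
# taken from any specific commit.
hyp_augment = {
    "hsv_h": 0.015,      # image HSV-Hue augmentation (fraction)
    "hsv_s": 0.7,        # image HSV-Saturation augmentation (fraction)
    "hsv_v": 0.4,        # image HSV-Value augmentation (fraction)
    "degrees": 0.0,      # image rotation (+/- deg)
    "translate": 0.1,    # image translation (+/- fraction)
    "scale": 0.5,        # image scale (+/- gain)
    "shear": 0.0,        # image shear (+/- deg)
    "perspective": 0.0,  # image perspective (+/- fraction)
    "flipud": 0.0,       # probability of vertical flip
    "fliplr": 0.5,       # probability of horizontal flip
    "mosaic": 1.0,       # probability of mosaic augmentation
    "mixup": 0.0,        # probability of mixup augmentation
}

# A value of 0.0 disables that augmentation entirely.
disabled = sorted(k for k, v in hyp_augment.items() if v == 0.0)
```

Setting any entry to 0.0 turns that augmentation off, which is also how evolution can effectively prune augmentations that do not help.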
Test-time augmentation is not related to training augmentation. TTA simply runs inference at multiple image sizes and with left-right flips. See the TTA and Evolution tutorials for additional details: YOLOv5 Tutorials
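For example, TTA is enabled at inference with the `--augment` flag (weights, dataset, and image size below are placeholders):

```shell
# Hypothetical example: evaluate with test-time augmentation enabled.
# --augment runs inference at multiple scales and left-right flips.
python test.py --weights yolov5x.pt --data coco.yaml --img 832 --augment
```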
Is there any way to optimize only these values during evolution? And also after evolution?
@DimaMirana the meta dictionary in train.py controls hyperparameter constraints, which you can use to freeze parameters (see lines 543 to 572 in 9b92d3e).
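A minimal sketch of the idea, with names and mechanics assumed rather than copied from the repo: each meta entry holds (mutation gain, lower limit, upper limit), and a gain of 0.0 means the value never mutates, i.e. the parameter is frozen during evolution.

```python
import random

# Hypothetical subset of an evolution meta dictionary:
# key -> (mutation gain, lower limit, upper limit).
meta = {
    "lr0": (1.0, 1e-5, 1e-1),   # free to evolve
    "fliplr": (0.0, 0.0, 1.0),  # frozen: gain 0 -> never mutated
}

def mutate(hyp, meta, sigma=0.2, rng=random.Random(0)):
    """Sketch of one mutation step: perturb each value by its gain, then clip."""
    out = {}
    for k, v in hyp.items():
        gain, lo, hi = meta[k]
        new = v * (1.0 + gain * sigma * rng.gauss(0, 1))
        out[k] = min(max(new, lo), hi)  # clip to [lo, hi]
    return out

hyp = {"lr0": 0.01, "fliplr": 0.5}
mutated = mutate(hyp, meta)
```

After one step, `fliplr` is untouched while `lr0` has drifted inside its limits, which is the freezing behaviour the meta dictionary provides.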
Augmentation is applied automatically during training. The hyp file used is defined at line 458 in 9b92d3e.
I'm really sorry to disturb you, but can you give me an example where you run training with specific augmentations? Also, if I just run…
@DimaMirana training applies all augmentation hyperparameters by default. The hyp file used is defined here (point to another or modify this file if you'd like): Line 458 in 9b92d3e
For a full list of training arguments and their defaults see train.py argparser: Lines 453 to 489 in ee16983
or simply run $ python train.py --help
usage: train.py [-h] [--weights WEIGHTS] [--cfg CFG] [--data DATA] [--hyp HYP] [--epochs EPOCHS]
                [--batch-size BATCH_SIZE] [--img-size IMG_SIZE [IMG_SIZE ...]] [--rect]
                [--resume [RESUME]] [--nosave] [--notest] [--noautoanchor] [--evolve]
                [--bucket BUCKET] [--cache-images] [--image-weights] [--device DEVICE]
                [--multi-scale] [--single-cls] [--adam] [--sync-bn] [--local_rank LOCAL_RANK]
                [--workers WORKERS] [--project PROJECT] [--entity ENTITY] [--name NAME]
                [--exist-ok] [--quad] [--linear-lr] [--upload_dataset]
                [--bbox_interval BBOX_INTERVAL] [--save_period SAVE_PERIOD]
                [--artifact_alias ARTIFACT_ALIAS]

optional arguments:
  -h, --help            show this help message and exit
  --weights WEIGHTS     initial weights path
  --cfg CFG             model.yaml path
  --data DATA           data.yaml path
  --hyp HYP             hyperparameters path
  --epochs EPOCHS
  --batch-size BATCH_SIZE
                        total batch size for all GPUs
  --img-size IMG_SIZE [IMG_SIZE ...]
                        [train, test] image sizes
  --rect                rectangular training
  --resume [RESUME]     resume most recent training
  --nosave              only save final checkpoint
  --notest              only test final epoch
  --noautoanchor        disable autoanchor check
  --evolve              evolve hyperparameters
  --bucket BUCKET       gsutil bucket
  --cache-images        cache images for faster training
  --image-weights       use weighted image selection for training
  --device DEVICE       cuda device, i.e. 0 or 0,1,2,3 or cpu
  --multi-scale         vary img-size +/- 50%
  --single-cls          train multi-class data as single-class
  --adam                use torch.optim.Adam() optimizer
  --sync-bn             use SyncBatchNorm, only available in DDP mode
  --local_rank LOCAL_RANK
                        DDP parameter, do not modify
  --workers WORKERS     maximum number of dataloader workers
  --project PROJECT     save to project/name
  --entity ENTITY       W&B entity
  --name NAME           save to project/name
  --exist-ok            existing project/name ok, do not increment
  --quad                quad dataloader
  --linear-lr           linear LR
  --upload_dataset      Upload dataset as W&B artifact table
  --bbox_interval BBOX_INTERVAL
                        Set bounding-box image logging interval for W&B
  --save_period SAVE_PERIOD
                        Log model after every "save_period" epoch
  --artifact_alias ARTIFACT_ALIAS
                        version of dataset artifact to be used
So if I want to run training on my custom data.yaml for 5 epochs with the hyperparameters found by evolution, will this be the command?
@DimaMirana see the Hyperparameter Evolution Tutorial for complete details on evolution. Evolution and training are separate: evolution returns a set of hyps that you can then point to when training, i.e. YOLOv5 Tutorials
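The two stages might look like this (paths are placeholders, and the location of the evolved hyp file can differ between versions):

```shell
# Stage 1 (hypothetical paths): evolve hyperparameters on your dataset.
python train.py --weights yolov5x.pt --data ../data.yaml --img 416 --epochs 5 --evolve

# Stage 2: train normally, pointing --hyp at the evolved file.
python train.py --weights yolov5x.pt --data ../data.yaml --img 416 --epochs 5 \
    --hyp runs/evolve/hyp_evolved.yaml
```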
Got it, thanks. Also, how do I cite your project in papers? Where is the paper?
Also, where can I see the overall accuracy of the model across all classes, not just the precision and recall?
Also, is there any image of the YOLOv5 network architecture?
@DimaMirana please see https://github.com/ultralytics/yolov5#citation for citations. You can use Netron or TensorBoard to visualize the architecture, and also search the issues for custom visualizations.
How do I visualize the architecture using TensorBoard?
@DimaMirana in theory you should be able to uncomment L335 in train.py to visualize the model with TensorBoard, though we've had varying levels of success in practice (see lines 329 to 335 in 9ccfa85).
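The same idea in a self-contained sketch with a toy model (this assumes torch and tensorboard are installed; YOLOv5's own commented-out line passes its detection model instead of the stand-in below):

```python
import torch
import torch.nn as nn
from torch.utils.tensorboard import SummaryWriter

# Toy stand-in for the detection model; YOLOv5 would pass its own model here.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),  # 3-channel input, 8 filters
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 32 * 32, 10),
)

writer = SummaryWriter("runs/graph_demo")
dummy = torch.zeros(1, 3, 32, 32)  # dummy input with the expected shape
writer.add_graph(model, dummy)     # traces the model and logs its graph
writer.close()
```

Afterwards, `tensorboard --logdir runs` shows the model under the "Graphs" tab.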
Thanks for the info. Also, where can I find the F1 score of each class after training finishes? The current metrics give the following.
@DimaMirana F1 curves are computed and displayed in F1.jpg in your results directory, though they are not printed to screen. VOC example with YOLOv5x6:
Is there any way to get the F1 score of each class?
Also, when using Colab, is there any way to save the weights and other necessary files so that I can resume from the previous training the next day?
It seems the best way to get F1 for each class would be to add it to the printed metrics in test.py here. Note, though, that each class reaches its highest F1 at a different confidence threshold (see lines 227 to 230 in 9ccfa85).
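The computation itself is just the harmonic mean of the per-class precision and recall; a minimal sketch (array names are assumptions, not YOLOv5's exact variables):

```python
import numpy as np

def per_class_f1(p, r, eps=1e-16):
    """F1 per class from per-class precision and recall arrays.

    eps guards against division by zero when p + r == 0.
    """
    p = np.asarray(p, dtype=float)
    r = np.asarray(r, dtype=float)
    return 2.0 * p * r / (p + r + eps)

# Example: two classes with different precision/recall trade-offs.
f1 = per_class_f1([0.8, 0.5], [0.6, 0.5])
```

Printing `f1` alongside the existing per-class precision and recall columns in test.py would surface it without changing any other behaviour.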
Thanks. Also, when using Colab, is there any way to save the weights and other necessary files so that I can resume from the previous training the next day?
@DimaMirana yes, there are two tricks. You can set your --project directory to a mounted Google Drive, or you can try the new W&B integration, which should now also save your checkpoints to W&B. See https://wandb.ai/cayush/yolov5-dsviz-demo/reports/Object-Detection-with-YOLO-and-Weights-Biases--Vmlldzo0NTgzMjk
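The Drive trick might look like this (the mount path is Colab's default; the project directory name is hypothetical):

```shell
# In a Colab notebook cell first:
#   from google.colab import drive; drive.mount('/content/drive')
# Then point --project at the mounted Drive so checkpoints persist.
python train.py --weights yolov5x.pt --data ../data.yaml \
    --project /content/drive/MyDrive/yolo_runs

# Next day, resume from the last checkpoint saved there:
python train.py --resume /content/drive/MyDrive/yolo_runs/exp/weights/last.pt
```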
Many thanks. |
I want to train my model using
!python train.py --weights yolov5x.pt --data '../data.yaml' --epochs 5 --cache --img 416 --evolve
to generate optimal hyperparameter values for augmentation, learning rate, etc. Later I want to use those hyperparameters on the same model for my custom dataset. How do I add the augmentation parameters in train.py? In your tutorial on TTA (test-time augmentation), just adding --augment is enough. Will it be the same for the training set? Also, can I see the total number of images generated after augmentation?
!python train.py --batch 16 --weights yolov5x.pt --data '../data.yaml' --epochs 5 --cache --img 416 --hyp '/runs/train/evolve/hyp_evolved.yaml'
Will this be the command for adding augmentation on the training set?