This is a fork of the main OIDv4_ToolKit repository, edited and updated to support the YOLO label format:
1. Added an optional argument `--yoloLabelStyle` to enable saving the downloaded labels in YOLO format.
2. Reorganized the download directory structure to be tidier.
3. Saved the configuration / arguments of the download as a JSON file inside the dataset directory, so they can be reused later by the visualizer (a minimal reading sketch follows this list).
4. Updated the visualizer to work with the new directory structure and to support the YOLO label format.
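The exact keys stored in the configuration file depend on the arguments you pass to the downloader, so the snippet below is only a minimal sketch: it assumes the file is plain JSON saved at the root of the dataset folder (as shown in the directory trees later in this README) and simply reads it back for inspection.

```python
import json
from pathlib import Path

# Path assumed from the directory tree shown later in this README;
# adjust it if you used a custom --Dataset folder.
config_path = Path("OID/Dataset/config.json")

with config_path.open() as f:
    config = json.load(f)  # the stored downloader arguments

# Inspect what was saved before re-running the visualizer.
print(json.dumps(config, indent=2))
```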
Do you want to build your personal object detector but you don't have enough images to train your model? Do you want to train your personal image classifier, but you are tired of the deadly slowness of ImageNet? Have you already discovered Open Images Dataset v6 that has 600 classes and more than 1,700,000 images with related bounding boxes ready to use? Do you want to exploit it for your projects but you don't want to download gigabytes and gigabytes of data!?
With this repository we can help you get the best out of this dataset with as little effort as possible. In particular, with this practical ToolKit written in Python3 we give you, for both object detection and image classification tasks, the following options:
(2.0) Object Detection
- download any of the 600 classes of the dataset individually, taking care of creating the related bounding boxes for each downloaded image
- download multiple classes at the same time creating separated folder and bounding boxes for each of them
- download multiple classes and creating a common folder for all of them with a unique annotation file of each image
- download a single class or multiple classes with the desired attributes
- use the practical visualizer to inspect the downloaded classes
(3.0) Image Classification
- download any of the 19,794 classes in a common labeled folder
- exploit tens of possible commands to select only the desired images (e.g. only test images)
The code is fairly well documented and designed to be easy to extend and improve. Angelo and I are pleased if our little bit of code can help you with your project and research. Enjoy ;)
All the information related to this huge dataset can be found here. The next few lines simply summarize some statistics and important tips.
Object Detection
|        | Train      | Validation | Test    | #Classes |
|--------|------------|------------|---------|----------|
| Images | 1,743,042  | 41,620     | 125,436 | -        |
| Boxes  | 14,610,229 | 303,980    | 937,327 | 600      |
Image Classification
|                          | Train      | Validation | Test      | #Classes |
|--------------------------|------------|------------|-----------|----------|
| Images                   | 9,011,219  | 41,620     | 125,436   | -        |
| Machine-Generated Labels | 78,977,695 | 512,093    | 1,545,835 | 7,870    |
| Human-Verified Labels    | 27,894,289 | 551,390    | 1,667,399 | 19,794   |
As you can see from the previous table, we have access to images from three different groups: train, validation and test. The ToolKit provides a way to restrict the search to a specific group. Regarding object detection, it's important to underline that some annotations have been made at the group level: a single bounding box can enclose more than one instance. As mentioned by the creators of the dataset:
- IsGroupOf: Indicates that the box spans a group of objects (e.g., a bed of flowers or a crowd of people). We asked annotators to use this tag for cases with more than 5 instances which are heavily occluding each other and are physically touching.
This, too, is an option of the ToolKit that can be used to keep only the desired images.
Finally, it's interesting to note that not all annotations have been produced by humans: the creators also exploited an enhanced version of the method reported in [1].
Python3 is required.
- Clone this repository
git clone https://github.com/EscVM/OIDv4_ToolKit.git
- Install the required packages
pip3 install -r requirements.txt
Peek inside the requirements file to check whether you already have everything installed; most of the dependencies are common libraries.
First of all, if you simply want a quick reminder of all the options offered by the script, you can launch main.py from your console of choice. Always remember to run it from the main directory of the project:
python3 main.py
or in the following way to get more information
python3 main.py -h
The ToolKit lets you download your dataset into the folder of your choice (`Dataset` by default). The folder can be set with the `--Dataset` argument, so you can build different datasets with different options.
As previously mentioned, several options are available. Let's see some of them.
Firstly, the ToolKit can be used to download classes into separate folders. The `--classes` argument accepts either a list of classes or the path to a .txt file (`--classes path/to/file.txt`) that contains the list of classes, one per line (a classes.txt is included in the repository as an example, and a hypothetical one is shown below).
Note: for classes made up of multiple words, use the `_` character instead of the space (only for the inline use of the `--classes` argument). Example: `Polar_bear`.
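For reference, a hypothetical classes.txt could look like this, one class name per line (inside the file the space character is fine; the underscore is only needed for the inline form of `--classes`):

```
Apple
Orange
Polar bear
```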
Let's, for example, download Apples and Oranges from the validation set. In this case we have to use the following command:
python3 main.py downloader --classes Apple Orange --type_csv validation
The script will take care of downloading all the necessary files and will build a directory structure like this:
main_folder
│ main.py
│
└───OID
│ file011.txt
│ file012.txt
│
└───csv_folder
| │ class-descriptions-boxable.csv
| │ validation-annotations-bbox.csv
|
└───Dataset
|__config.json
|
└─── test
|
└─── train
|
└─── validation
|
|___Apple
| |
| └───images
| | |
| | |0fdea8a716155a8e.jpg
| | |2fe4f21e409f0a56.jpg
| | |...
| |
| └───labels
| |
| |0fdea8a716155a8e.txt
| |2fe4f21e409f0a56.txt
| |...
|
└───Orange
|
|___images
| |
| |0b6f22bf3b586889.jpg
| |0baea327f06f8afb.jpg
| |...
|
└───labels
|
|0b6f22bf3b586889.txt
|0baea327f06f8afb.txt
|...
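A quick way to sanity-check a download is to walk this structure. The sketch below is independent of the ToolKit: it simply pairs each image with its label file under the `<split>/<class>/images` and `<split>/<class>/labels` layout shown above, with the root folder name taken from this example.

```python
from pathlib import Path

dataset_root = Path("OID/Dataset")  # or whatever you passed to --Dataset

for split in ("train", "validation", "test"):
    for class_dir in sorted((dataset_root / split).glob("*")):
        images = sorted((class_dir / "images").glob("*.jpg"))
        labels = {p.stem for p in (class_dir / "labels").glob("*.txt")}
        # Report images that are missing their corresponding label file.
        missing = [img.name for img in images if img.stem not in labels]
        print(f"{split}/{class_dir.name}: {len(images)} images, "
              f"{len(missing)} without a label file")
```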
When `--multiclasses 1` is specified in the arguments, the directory structure looks like this instead:
main_folder
│ main.py
│
└───OID
│ file011.txt
│ file012.txt
│
└───csv_folder
| │ class-descriptions-boxable.csv
| │ validation-annotations-bbox.csv
|
└───Dataset
|__config.json
|
└─── test
|
└─── train
|
└─── validation
|
|___Apple_Orange
|
|0fdea8a716155a8e.jpg
|0fdea8a716155a8e.txt
|2fe4f21e409f0a56.jpg
|2fe4f21e409f0a56.txt
|0b6f22bf3b586889.jpg
|0b6f22bf3b586889.txt
|0baea327f06f8afb.jpg
|0baea327f06f8afb.txt
|...
If you have already downloaded the different csv files you can simply put them in the `csv_folder`. The script automatically takes care of downloading these files, but if you want to download them manually for whatever reason, you can find them [here](https://storage.googleapis.com/openimages/web/download.html).
If you interrupt the downloading script (`ctrl+d`) you can always restart it from the last downloaded image.
## 2.2 Download multiple classes in a common folder
This option allows you to download multiple classes into a single common folder. The related annotations are also mixed together, using the format already explained (the first element of each line is always the name of the class). In this way, with a simple dictionary it's easy to parse the generated labels into the desired format (see the parsing sketch after the command below).
Again, if we want to download Apples and Oranges, but into a common folder:
```bash
python3 main.py downloader --classes Apple Orange --type_csv validation --multiclasses 1
```
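As a minimal sketch of the "simple dictionary" idea mentioned above: the snippet assumes a label file whose lines start with the class name followed by the four box coordinates (the default denormalized format described just below), and maps each class name to an index. The class list and file name are only examples.

```python
# Hypothetical example: map class names to indices while parsing a
# multiclass label file produced by the downloader.
classes = ["Apple", "Orange"]            # same order as passed to --classes
class_to_index = {name: i for i, name in enumerate(classes)}

with open("0fdea8a716155a8e.txt") as f:  # example label file name
    for line in f:
        parts = line.split()
        # Class names may contain spaces, so the last 4 tokens are the box.
        name = " ".join(parts[:-4])
        left, top, right, bottom = map(float, parts[-4:])
        print(class_to_index[name], left, top, right, bottom)
```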
In the original dataset the coordinates of the bounding boxes are expressed as follows:
XMin, XMax, YMin, YMax: coordinates of the box, in normalized image coordinates. XMin is in [0,1], where 0 is the leftmost pixel, and 1 is the rightmost pixel in the image. Y coordinates go from the top pixel (0) to the bottom pixel (1).
However, in order to accommodate a more intuitive representation and give maximum flexibility, every `.txt` annotation is written as:
name_of_the_class left top right bottom
where each coordinate is denormalized. So, the four different values correspond to the actual number of pixels of the related image.
Alternatively, you can specify the `--yoloLabelStyle` argument, which will save the labels in YOLO format (a short conversion sketch follows the list below):
class_index x_mid y_mid width height
where
- class_index: the index of the class label in the given classes list
- x_mid, y_mid: the center of the bounding box, normalized by the image width and height to lie in the range (0,1)
- width, height: the width and height of the bounding box, normalized by the image width and height to lie in the range (0,1)
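For clarity, here is the arithmetic behind that format under the definitions above: a denormalized `left top right bottom` box is converted into a YOLO line given the image size. The function name and values are only illustrative; the toolkit does this conversion for you when `--yoloLabelStyle` is passed.

```python
def to_yolo(class_index, left, top, right, bottom, img_w, img_h):
    """Convert a denormalized box to a YOLO-format label line."""
    x_mid = (left + right) / 2.0 / img_w    # box center x, in (0, 1)
    y_mid = (top + bottom) / 2.0 / img_h    # box center y, in (0, 1)
    width = (right - left) / img_w          # box width, in (0, 1)
    height = (bottom - top) / img_h         # box height, in (0, 1)
    return f"{class_index} {x_mid:.6f} {y_mid:.6f} {width:.6f} {height:.6f}"

# Example: a 200x100 px box at the top-left corner of a 1024x768 image.
print(to_yolo(0, 0, 0, 200, 100, 1024, 768))
```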
If you don't need label creation, use `--noLabels`.
The annotations of the dataset are marked with a handful of boolean attributes, reported below:
- IsOccluded: Indicates that the object is occluded by another object in the image.
- IsTruncated: Indicates that the object extends beyond the boundary of the image.
- IsGroupOf: Indicates that the box spans a group of objects (e.g., a bed of flowers or a crowd of people). We asked annotators to use this tag for cases with more than 5 instances which are heavily occluding each other and are physically touching.
- IsDepiction: Indicates that the object is a depiction (e.g., a cartoon or drawing of the object, not a real physical instance).
- IsInside: Indicates a picture taken from the inside of the object (e.g., a car interior or inside of a building).
- n_threads: Select how many threads you want to use. The ToolKit will download multiple images in parallel for you, considerably speeding up the downloading process.
- limit: Limit the number of images being downloaded. Useful if you want to restrict the size of your dataset.
- y: Automatically answer yes when missing csv files have to be downloaded.
- yoloLabelStyle: Download labels and save them in YOLO format, as shown above.
Naturally, the ToolKit provides the same options as parameters in order to filter the downloaded images. For example, with:
python3 main.py downloader -y --classes Apple Orange --type_csv validation --image_IsGroupOf 0
only images without group annotations are downloaded.
The Toolkit can now also access the huge dataset without bounding boxes. This dataset consists of 19,995 classes and is already split into train, validation and test. The command used to download from it is `downloader_ill` (Downloader of Image-Level Labels) and it requires the `--sub` argument, which selects the sub-dataset: human-verified labels `h` (5,655,108 images) or machine-generated labels `m` (8,853,429 images). An example command is:
python3 main.py downloader_ill --sub m --classes Orange --type_csv train --limit 30
The previously explained arguments `Dataset`, `multiclasses`, `n_threads` and `limit` are also available.
The Toolkit will automatically put the dataset and the csv folder in dedicated folders whose names end with `_nl`.
| Argument          | downloader | visualizer | downloader_ill | Description                                      |
|-------------------|:----------:|:----------:|:--------------:|--------------------------------------------------|
| Dataset           | O          | O          | O              | Dataset folder name                              |
| classes           | R          |            | R              | Considered classes                               |
| type_csv          | R          |            | R              | Train, test or validation dataset                |
| y                 | O          |            | O              | Answer yes when downloading missing csv files    |
| multiclasses      | O          |            | O              | Download classes together                        |
| noLabels          | O          |            |                | Don't create labels                              |
| Image_IsOccluded  | O          |            |                | Consider or not this filter                      |
| Image_IsTruncated | O          |            |                | Consider or not this filter                      |
| Image_IsGroupOf   | O          |            |                | Consider or not this filter                      |
| Image_IsDepiction | O          |            |                | Consider or not this filter                      |
| Image_IsInside    | O          |            |                | Consider or not this filter                      |
| n_threads         | O          |            | O              | Indicates the maximum number of threads          |
| limit             | O          |            | O              | Max number of images to download                 |
| sub               |            |            | R              | Human-verified or machine-generated images (h/m) |
| yoloLabelStyle    | O          |            |                | Download labels and save them in YOLO format     |
R = required, O = optional
The ToolKit is also useful for visualizing the downloaded images together with their labels.
python3 main.py visualizer
In this way the default `Dataset` folder is searched automatically for images and labels. To point to another folder, use the optional `--Dataset` argument.
python3 main.py visualizer --Dataset desired_folder
The system will then ask which folder to visualize (train, validation or test) and which class. With `d` (next), `a` (previous) and `q` (exit) you can browse all the images. Follow the menu for all the other options.
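If you prefer to check a label outside the built-in visualizer, the following standalone sketch (not part of the ToolKit) draws the boxes from one label file onto its image with OpenCV. The file paths are only examples taken from the directory tree above, and it assumes the default denormalized `name left top right bottom` label format rather than the YOLO one.

```python
import cv2  # pip3 install opencv-python

# Hypothetical paths inside the downloaded dataset.
image_path = "OID/Dataset/validation/Apple/images/0fdea8a716155a8e.jpg"
label_path = "OID/Dataset/validation/Apple/labels/0fdea8a716155a8e.txt"

image = cv2.imread(image_path)

with open(label_path) as f:
    for line in f:
        parts = line.split()
        name = " ".join(parts[:-4])                        # class name may contain spaces
        left, top, right, bottom = map(float, parts[-4:])  # denormalized pixel coordinates
        cv2.rectangle(image, (int(left), int(top)), (int(right), int(bottom)), (0, 255, 0), 2)
        cv2.putText(image, name, (int(left), int(top) - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)

# Requires a display; use cv2.imwrite(...) instead on a headless machine.
cv2.imshow("labels", image)
cv2.waitKey(0)
```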
- Denis Zuenko has added multithreading to the ToolKit and is currently working on generalizing and speeding up label creation
- Skylion007 has improved label creation, reducing the runtime from O(nm) to O(n), which massively speeds up label generation
- Alex March has added the limit option to the ToolKit in order to download only a maximum number of images of a certain class
- Michael Baroody has fixed the toolkit's visualizer for multiword classes
Use this bibtex if you want to cite this repository:
```
@misc{OIDv4_ToolKit,
  title={Toolkit to download and visualize single or multiple classes from the huge Open Images v4 dataset},
  author={Vittorio, Angelo},
  year={2018},
  publisher={Github},
  journal={GitHub repository},
  howpublished={\url{https://github.com/EscVM/OIDv4_ToolKit}},
}
```
"We don't need no bounding-boxes: Training object class detectors using only human verification"Papadopolous et al., CVPR 2016.