How to run this code on custom data #8
Hi, the code has optional settings that you may not want or need. Each setting is easy to turn on or off using the input arguments to the scripts. To run on your own dataset, create your own script, for example by making a copy of one of the existing training scripts. In summary, you would only need to change that one function call.
@xu-ji For segmentation, I found that for Potsdam, a line in data.py requires "unlabelled_train", "labelled_train", and "labelled_test". But the paper says it's an unsupervised method, which is confusing to me; could you explain it? Also, I don't have labelled training images. How can I generate the image pairs for the training and test datasets on my custom segmentation dataset? Thanks.
The images are taken from the Potsdam dataset. If you don't have labels, you can still train the network, but you won't be able to quantitatively evaluate it; this means you would skip the call to eval. You don't need labels to generate pairs: you take your whole image, copy it, and transform the copy to create the second image of the pair. The transforms used for Potsdam were jitter and flipping, done here.
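The copy-and-transform step described above can be sketched as follows. This is a minimal NumPy-only illustration, not the repo's actual dataloader code; the jitter range and flip probability are assumptions, and `make_pair` is a hypothetical helper name.

```python
import numpy as np

def make_pair(img, rng=None):
    """Create the second image of a training pair by copying the input
    and applying random transforms (brightness jitter and a horizontal
    flip, loosely following the Potsdam setup described above)."""
    rng = rng if rng is not None else np.random.default_rng()
    pair = img.astype(np.float32).copy()
    # brightness jitter: scale all channels by a random factor (assumed range)
    pair *= rng.uniform(0.8, 1.2)
    # random horizontal flip with probability 0.5
    if rng.random() < 0.5:
        pair = pair[:, ::-1].copy()
    return np.clip(pair, 0.0, 255.0)

# usage: the original and its transformed copy form one training pair
img = np.random.default_rng(0).uniform(0, 255, size=(200, 200, 3))
pair = make_pair(img)
```

The network is then trained to produce consistent predictions for `img` and `pair`; no labels are involved at any point.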
Thanks. It's clear to me now. As for the training and test images, what is the ratio between "unlabelled_train", "labelled_train", and "labelled_test"? For example, I have 4500 "unlabelled_train" images, and "labelled_train" is the same as "labelled_test" but only has 500 images; is that ok? Also, did you try depth images only for segmentation?
In that line, for depth images only, do I need to use
If you look at the supplementary material (in /paper), table 6 gives the dataset sizes. Unlabelled train + labelled train = 8550 images and labelled test = 5400 images (so labelled train = 5400 and unlabelled train = 8550 - 5400 = 3150). You should be fine with 500 labelled images; the number of labels required to find the mapping is very low. We did not try anything with depth. Your input channels would be 1, and you would not use sobel filtering at all, since that is a transform for colour images. Jitter is also an operation intended for colour images. Because you are working with depth, you may want to consider different transforms from what we used: for example salt and pepper noise, and flipping (depending on what your images are about).
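A depth-specific transform along these lines could be sketched as below. This is an assumption-laden illustration, not code from the repo: the noise fraction `sp_prob` and the flip probability are arbitrary choices, and `transform_depth` is a hypothetical helper name.

```python
import numpy as np

def transform_depth(depth, sp_prob=0.02, rng=None):
    """Sketch of pair transforms for single-channel depth images:
    salt-and-pepper noise plus random horizontal flipping, in place of
    the colour-specific jitter/sobel used for RGB inputs."""
    rng = rng if rng is not None else np.random.default_rng()
    out = depth.astype(np.float32).copy()
    # salt-and-pepper: push a small fraction of pixels to the extremes
    mask = rng.random(out.shape) < sp_prob
    salt = rng.random(out.shape) < 0.5
    out[mask & salt] = out.max()
    out[mask & ~salt] = out.min()
    # random horizontal flip with probability 0.5
    if rng.random() < 0.5:
        out = out[:, ::-1].copy()
    return out

# usage: a single-channel depth map in, a perturbed copy out
depth = np.random.default_rng(1).uniform(0.5, 5.0, size=(64, 64)).astype(np.float32)
noisy = transform_depth(depth)
```

Whether these particular perturbations are appropriate depends on what the depth images depict, as noted above.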
Thanks. It's really helpful. |
@xu-ji Can your IIC method do instance segmentation? |
No. That would require some material addition to the method. |
What about model_ind for our custom dataset?
You could run the scripts on images of 256x256. If you are using your own dataset you will almost certainly need to change the code anyway, for example to write your own dataloader. To use our existing architectures, the easiest way is to resize your images to one of the compatible sizes; for segmentation, our Potsdam images were 200x200. You can find these details by looking at the code or in the supplementary material.
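The resize-to-a-compatible-size suggestion can be sketched without any external dependencies as below. In practice you would likely use PIL or torchvision for this; the nearest-neighbour implementation and the `resize_nearest` name here are illustrative only.

```python
import numpy as np

def resize_nearest(img, size=200):
    """Nearest-neighbour resize to a square architecture-compatible
    size (e.g. 200x200 as used for the Potsdam segmentation setup)."""
    h, w = img.shape[:2]
    # map each output row/column back to its nearest source index
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

# usage: shrink a 256x256 image to the 200x200 Potsdam-compatible size
big = np.random.default_rng(2).integers(0, 255, size=(256, 256, 3))
small = resize_nearest(big, size=200)
```

Nearest-neighbour is the safest choice when the array holds label or cluster indices, since it never invents intermediate values; for the input images themselves a smoother interpolation is usually preferable.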
Hello, your work is very useful for me. Thanks a lot!
If it's your own dataset, it's probably best to write your own script. It's quite simple: load your saved network, run your data through the network, get a prediction per pixel, and map each prediction to a colour. There are some examples that I used at one point.
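The final step of that recipe, mapping each per-pixel prediction to a colour, can be sketched as below. The palette is an arbitrary placeholder and `render_segmentation` is a hypothetical helper, not one of the repo's rendering scripts; the network forward pass that produces `pred` is assumed to have happened already.

```python
import numpy as np

def render_segmentation(pred, palette=None):
    """Map per-pixel cluster predictions (an H x W integer array) to an
    RGB image by indexing into a colour palette."""
    if palette is None:
        # one arbitrary colour per cluster index (placeholder choices)
        palette = np.array([[255, 0, 0], [0, 255, 0],
                            [0, 0, 255], [255, 255, 0]], dtype=np.uint8)
    return palette[pred]

# usage: a toy 4x4 prediction map with two clusters
pred = np.zeros((4, 4), dtype=np.int64)
pred[2:, :] = 1
rgb = render_segmentation(pred)
```

Indexing the palette with the prediction array does the whole mapping in one vectorised step, with no per-pixel loop.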
Thanks a lot for the awesome and useful work! I was wondering about a couple of things:
… similarly to what is shown here for SqueezeNet?
Thanks for your great work! I checked the code and found it is hard-coded for the benchmarks given in the paper. Could I run it on my custom data? Thanks a lot!