
Difference between the settings for demo and those in your paper #13

Closed
whiteking64 opened this issue Mar 8, 2022 · 6 comments

Comments

@whiteking64

Hi,
I have a question about the difference between the settings for the demo in your README and those used in the experiments in your paper.

In the README, you published a pre-trained weight for the demo.
It says that during training the backbones for both image and text are ViT-L/16.
Section 5.1 of your paper says:

We used LSeg with DPT and a smaller ViT-B/32 backbone together with the CLIP ViT-B/32 text encoder ...

When reproducing your results in 5.1, does that require training from scratch with a ViT-B/32 backbone for the images?
Also, are there any other differences, such as batch size? More specifically, how should I change the arguments in train.sh?

Finally, is it possible to share with us (or me) the weights used for your results?

Thank you in advance.

@Boyiliee
Collaborator

Hi @whiteking64 ,

Thanks for your interest in LSeg!

  1. Currently, we provide the demo model for users to play around with, using any label set of arbitrary length and order.
  2. For all the ablation studies, such as the results in 5.1, we train LSeg with DPT and a smaller ViT-B/32 backbone together with the CLIP ViT-B/32 text encoder on the ADE20K dataset. Therefore, you can follow the training and testing instructions in the README. The primary change is to set --backbone clip_vitb32_384 (see the sketch after this list); you can check the details via this link.
  3. For these experiments, as for the demo model, we use batch size = 8, the same as in train.sh: we use 8 GPUs and load one batch on each GPU.
  4. Yes, we will share these weights. Please allow some time for me to sort them out (as well as the code); it should be within the next few months.
  5. For more details about the arguments, please check here.
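
A minimal sketch of the change in item 2 (not an official command; the flags other than --backbone are simply reused from train.sh and may need adjusting for your setup):

python train_lseg.py \
        --dataset ade20k \
        --data_path ../datasets \
        --backbone clip_vitb32_384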

Hope this helps. Best wishes for your research!

@whiteking64
Author

@Boyiliee

Thank you for your reply.
So, the command to reproduce your results in Section 5.1 without any blocks (block depth = 0) is:

python train_lseg.py \
        --dataset ade20k \
        --data_path ../datasets \
        --batch_size 8 \
        --exp_name lseg_ade20k_l32 \
        --base_lr 0.004 \
        --weight_decay 1e-4 \
        --no-scaleinv \
        --max_epochs 240 \
        --widehead \
        --accumulate_grad_batches 2 \
        --backbone clip_vitb32_384 \
        --num_features=512

I changed the backbone and num_features (which is 256 by default; in Table 5 you used 512 in the first row, which is the same result as Table 4 with depth 0).

Am I correct?

Also, I just ran an experiment with batch size = 1 on 1 GPU. It took about 17 hours for one epoch. With 8 GPUs, I assume that drops to about 2.1 hours, but 240 epochs would still take days to complete. Was that the case in your experiments?
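
For reference, the back-of-envelope numbers behind that estimate (assuming near-linear scaling across 8 GPUs, which is optimistic):

echo "17 / 8" | bc -l              # ~2.1 hours per epoch on 8 GPUs
echo "17 / 8 * 240 / 24" | bc -l   # ~21 days for 240 epochs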

@Boyiliee
Collaborator

Hi @whiteking64 ,

In this case, since you are using the smaller backbone, you could increase --batch_size to 4 or larger depending on your GPU memory. I set it to 4 in my experiments; you can change it based on your GPU memory. As mentioned in the paper, we primarily use Quadro RTX 6000 or V100 GPUs for our experiments, which is likely why they run faster than yours. In addition, we primarily follow the training principles of DPT; you could also refer to it for more details.
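
One quick way to check how much GPU memory you have to spare before raising --batch_size (this assumes NVIDIA GPUs with nvidia-smi installed; it is not specific to this repo):

nvidia-smi --query-gpu=index,memory.total,memory.used,memory.free --format=csv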

Hope this helps!

@soskek

soskek commented Apr 12, 2022

Is the "Model for demo" of this repo the same as the model in Section 5.2?

(screenshot attached)

@Boyiliee
Collaborator

Yes!

@soskek

soskek commented Apr 12, 2022

Thank you again!!
