
RuntimeError: Sizes of tensors must match except in dimension 0. Got 0 and 300 (The offending index is 0) #65

Open
Hezey opened this issue Oct 8, 2023 · 4 comments

Comments

@Hezey

Hezey commented Oct 8, 2023

While training DN-DETR on my own dataset, I ran into the following error.
How do I solve it?
Can anyone help, please?
@HaoZhang534 @SangbumChoi @FengLi-ust @LYMDLUT

```
Traceback (most recent call last):
  File "D:\JetBrains\Pycharm_Project\Learn_Deep_Learning\DN-DETR\main.py", line 443, in
    main(args)
  File "D:\JetBrains\Pycharm_Project\Learn_Deep_Learning\DN-DETR\main.py", line 369, in main
    train_stats = train_one_epoch(
  File "D:\JetBrains\Pycharm_Project\Learn_Deep_Learning\DN-DETR\engine.py", line 53, in train_one_epoch
    outputs, mask_dict = model(samples, dn_args=dn_args)
  File "D:\Anaconda3\envs\DN_DETR\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "D:\JetBrains\Pycharm_Project\Learn_Deep_Learning\DN-DETR\models\dn_dab_deformable_detr\dab_deformable_detr.py", line 206, in forward
    prepare_for_dn(dn_args, tgt_all_embed, refanchor, src.size(0), self.training, self.num_queries, self.num_classes,
  File "D:\JetBrains\Pycharm_Project\Learn_Deep_Learning\DN-DETR\models\dn_dab_deformable_detr\dn_components.py", line 70, in prepare_for_dn
    tgt = torch.cat([tgt_weight, indicator0], dim=1) + label_enc.weight[0][0]*torch.tensor(0).cuda()
RuntimeError: Sizes of tensors must match except in dimension 0. Got 0 and 300 (The offending index is 0)
```
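
For reference, the message comes from torch.cat's shape check: when concatenating along dim=1, every other dimension (here dim 0) has to match across the inputs, so a tgt_weight with 0 rows cannot be concatenated with a 300-row indicator0. A minimal sketch that reproduces the same class of error outside the repository, using hypothetical shapes (255 and 300 are placeholders, not necessarily your config):

```python
import torch

# Hypothetical shapes for illustration only: in the failing call, tgt_weight
# apparently ends up with 0 rows while indicator0 has num_queries = 300 rows.
tgt_weight = torch.zeros(0, 255)   # dim 0 is 0   -> the "Got 0" in the error
indicator0 = torch.zeros(300, 1)   # dim 0 is 300 -> the "and 300" in the error

# torch.cat along dim=1 requires all other dimensions (here dim 0) to match,
# so this raises a RuntimeError like the one above (exact wording varies by
# PyTorch version).
tgt = torch.cat([tgt_weight, indicator0], dim=1)
```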

@SangbumChoi
Contributor

I think the problem is that either your dataset wasn't loaded properly or num_classes isn't defined.

@Hezey
Author

Hezey commented Oct 8, 2023

I think the problem is that either your dataset wasn't loaded properly or num_classes isn't defined.

I've already set num_classes.
The error is raised by torch.cat. How can I fix the code to solve the problem?

@SangbumChoi
Contributor

SangbumChoi commented Oct 8, 2023

Try printing the individual shape of each variable and debug from there.
As I just said, it appears that tgt_weight has an abnormal shape, which is related to dataset preparation.
Or did you modify the self.two_stage and self.use_dab constraints?
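
As a concrete illustration of that debugging step, here is a minimal, self-contained sketch (the tensors below are placeholders mimicking the reported shapes, not the project's actual tgt_weight and indicator0; in dn_components.py you would print or check the real variables just above the failing torch.cat):

```python
import torch

def report_cat_mismatch(name_a, a, name_b, b, dim=1):
    """Print any dimension other than `dim` where the two tensors disagree."""
    for d in range(a.dim()):
        if d != dim and a.size(d) != b.size(d):
            print(f"dim {d} mismatch: {name_a} has shape {tuple(a.shape)}, "
                  f"{name_b} has shape {tuple(b.shape)}")

# Placeholder tensors reproducing the reported mismatch.
report_cat_mismatch("tgt_weight", torch.zeros(0, 255),
                    "indicator0", torch.zeros(300, 1), dim=1)
# -> dim 0 mismatch: tgt_weight has shape (0, 255), indicator0 has shape (300, 1)
```

If tgt_weight really comes out with 0 rows, the torch.cat line itself isn't the bug; whatever builds it upstream is, which is consistent with the dataset / num_classes suggestion above.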

@Hezey
Author

Hezey commented Oct 8, 2023

I didn't modify self.two_stage or self.use_dab.
I'm trying to work out how to modify or add a few lines of code to solve the problem.
