Expected 2D (unbatched) or 3D (batched) input to conv1d, but got input of size: [1, 4, 202, 202] #12
Comments
This code was reused in other projects available on GitHub (e.g. in RobustMFSRforEO), and I still faced the same error there.
Also: in DeepNetworks/ShiftNet.py I removed the
Hi, have you tested your solution since then? I ran into the same error. I'm currently running the training process with your changes, but sadly there's nothing to compare the results against, since there's no pretrained model.
Indeed, this repository does not contain pre-trained models. After a quick review, I see that this competing architecture also uses ShiftNet, with the same code for lanczos_shift, but this time with pre-trained models that can be used for comparison: https://github.com/rarefin/MISR-GRU

I had tested my proposal, but the results were worse with the ShiftNet than without it, and below the values reported in the publication. I had given up, hoping for some interaction on this forum to help me move forward.

On rereading my proposal, it seems obvious that the shift cannot be applied twice, for x and y, to the same flattened matrix. Unfortunately, the results are still not improved; some confusion must be introduced somewhere!
I think I may have solved it. In my opinion, the author wants to apply the convolution along a single dimension, which is why a padding list like [0, k_x.shape[3] // 2] is used on the image. The answer is to use a 2D conv, not a 1D conv. You also need to fix an in-place operation in the network, which causes backpropagation to fail.
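A minimal sketch of that idea, assuming the shapes reported in this issue (the tensor contents are random placeholders, and note that the padding for the horizontal pass has to move to the width axis):

```python
import torch

# Hypothetical shapes taken from this issue: input [1, 4, 202, 202],
# separable per-channel kernels [4, 1, 7, 1] (vertical) and [4, 1, 1, 7]
# (horizontal); the values are random stand-ins, not the real Lanczos taps.
I_padded = torch.randn(1, 4, 202, 202)
k_y = torch.randn(4, 1, 7, 1)
k_x = torch.randn(4, 1, 1, 7)

# conv1d rejects 4D input, but conv2d with groups=4 convolves each channel
# with its own kernel, which is what the separable shift intends.
I_s = torch.conv2d(I_padded, k_y, groups=4, padding=[3, 0])  # shift along y
I_s = torch.conv2d(I_s, k_x, groups=4, padding=[0, 3])       # shift along x
print(I_s.shape)  # torch.Size([1, 4, 202, 202])
```

The spatial size is preserved because each 7-tap kernel is padded by 3 on its own axis only.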
Can you show us how you solved it? Because I did change it to a 2D conv, but I got this issue:
This is because of an in-place operation in PyTorch. I remember it appeared in shiftnet.py; you can try to fix it. But I got a bad result, so I gave up.
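For reference, here is a tiny standalone reproduction of how an in-place operation breaks backprop. This is generic PyTorch autograd behaviour, not the actual shiftnet.py code:

```python
import torch

x = torch.randn(3, requires_grad=True)
y = torch.exp(x)   # exp() saves its output for the backward pass
y += 1             # in-place update invalidates that saved tensor

try:
    y.sum().backward()
except RuntimeError as e:
    # "...one of the variables needed for gradient computation has been
    # modified by an inplace operation..."
    print("backward failed:", e)

# The out-of-place version works fine:
x = torch.randn(3, requires_grad=True)
y = torch.exp(x) + 1
y.sum().backward()
print(x.grad.shape)  # torch.Size([3])
```

The usual fix is to replace `t += ...` / `t[...] = ...` patterns with out-of-place equivalents (`t = t + ...`, `torch.cat`, `clone()`).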
I want to use your code with the Proba-V dataset, but I'm facing the following error.
```
$ python src/train.py --config config/config.json
  0%| | 0/261 [00:00<?, ?it/s]
  0%| | 0/400 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "[...]/HighRes-net/src/train.py", line 308, in <module>
    main(config)
  File "[...]/HighRes-net/src/train.py", line 294, in main
    trainAndGetBestModel(fusion_model, regis_model, optimizer, dataloaders, baseline_cpsnrs, config)
  File "[...]/HighRes-net/src/train.py", line 180, in trainAndGetBestModel
    srs_shifted = apply_shifts(regis_model, srs, shifts, device)[:, 0]
  File "[...]/HighRes-net/src/train.py", line 61, in apply_shifts
    new_images = shiftNet.transform(thetas, images, device=device)
  File "[...]/HighRes-net/src/DeepNetworks/ShiftNet.py", line 96, in transform
    new_I = lanczos.lanczos_shift(img=I.transpose(0, 1),
  File "[...]/HighRes-net/src/lanczos.py", line 96, in lanczos_shift
    I_s = torch.conv1d(I_padded,
RuntimeError: Expected 2D (unbatched) or 3D (batched) input to conv1d, but got input of size: [1, 4, 202, 202]
```
Here are the different values and shapes passed to the conv1d call:

- input (`I_padded`) shape: `torch.Size([1, 4, 202, 202])`
- groups number (`k_y.shape[0]` and `k_x.shape[0]`): 4
- weights (`k_y` and `k_x`) shapes: `torch.Size([4, 1, 7, 1])` and `torch.Size([4, 1, 1, 7])`
- padding values (`[k_y.shape[2] // 2, 0]` and `[0, k_x.shape[3] // 2]`): `[3, 0]` and `[3, 0]`
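Given those shapes, conv1d can only accept the data if the 4D image is first reshaped to the `[N, C, L]` layout it expects. A hedged sketch (random placeholder data and a hypothetical 7-tap kernel, not the repository's actual code) of folding the height into the batch for the horizontal pass:

```python
import torch
import torch.nn.functional as F

B, C, H, W = 1, 4, 202, 202
I_padded = torch.randn(B, C, H, W)  # placeholder for the real padded input
k_x = torch.randn(C, 1, 7)          # hypothetical 7-tap 1-D kernel per channel

# conv1d expects [N, C, L]: fold H into the batch so every image row becomes
# a 1-D signal, while the channel axis stays where groups=C needs it.
rows = I_padded.permute(0, 2, 1, 3).reshape(B * H, C, W)
shifted = F.conv1d(rows, k_x, groups=C, padding=3)
out = shifted.reshape(B, H, C, W).permute(0, 2, 1, 3)
print(out.shape)  # torch.Size([1, 4, 202, 202])
```

The vertical pass would do the same with the roles of H and W swapped.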
I used the default config.json, except for the following parameters.
But I receive similar errors keeping the default values.
I tried squeezing the 1st dim of img and the 2nd of the weights, and specifying a plain int value for padding to work around the various error messages, but all I finally got is this new RuntimeError:
'Given groups=4, weight of size [4, 7, 1], expected input[4, 202, 202] to have 28 channels, but got 202 channels instead'
Any clue to help me?