
[BiLSTM] - View size is not compatible with input tensor's size and stride #19

Open
carlos-lima opened this issue Dec 15, 2022 · 1 comment


@carlos-lima

Hi,

I hope you are doing well.

I was trying to run BiLSTM on the UoC dataset, using the command line below:

```
python train.py --model_name BiLSTM1d --data_name UoC --data_dir ./Data/Mechanical-datasets --normlizetype mean-std --processing_type O_A --checkpoint_dir ./Benchmark/Benchmark_Results/Order_split/UoC/RNN_mean-std_augmentation
```

and I bumped into this runtime error:

```
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
```

I replaced line 37, `bilstm_out = bilstm_out.view(bilstm_out.size(0), -1)`, in the files BiLSTM1d.py and BiLSTM2d.py with `bilstm_out = bilstm_out.contiguous().view(bilstm_out.size(0), -1)` (adding `contiguous()`), and it worked fine for me.

I don't know if this is the right approach, but I hope it helps.

Thanks in advance,

Carlos Lima

@river2022ok

contiguous

The error message `RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.` indicates that you tried to change the tensor's shape with `view`, but the operation failed because the input tensor's memory layout (strides) is incompatible with the requested shape. `view` requires the input tensor to be contiguous in memory; if it is not, you need to use `reshape`.

In PyTorch, `view` changes a tensor's shape, but it requires the original tensor to be contiguous in memory; if the tensor is not contiguous, `view` fails. `reshape`, by contrast, can handle non-contiguous tensors and returns a tensor with the new shape even when the original is not contiguous in memory.
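A minimal sketch of the difference, using a transposed tensor as a stand-in for the non-contiguous BiLSTM output (transposing is just a convenient way to produce a non-contiguous tensor):

```python
import torch

# Transposing changes the strides without moving any data, so the
# result is non-contiguous in memory.
x = torch.arange(12).reshape(3, 4)
y = x.t()                       # shape (4, 3), non-contiguous
print(y.is_contiguous())        # False

# view() on a non-contiguous tensor raises the RuntimeError from this issue.
try:
    y.view(-1)
except RuntimeError as e:
    print("view failed:", e)

# Both workarounds produce the same flattened tensor:
a = y.contiguous().view(-1)     # copy into contiguous memory, then view
b = y.reshape(-1)               # reshape copies only when it must
print(torch.equal(a, b))        # True
```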

Workarounds

1. Use `reshape` instead of `view`:

Change the code to use `reshape`, which can handle non-contiguous tensors.

```python
bilstm_out = bilstm_out.reshape(bilstm_out.size(0), -1)
```

2. Make the tensor contiguous in memory:

If you need to use `view`, first call `contiguous()` to ensure the tensor is contiguous in memory.

```python
bilstm_out = bilstm_out.contiguous().view(bilstm_out.size(0), -1)
```
Example

Suppose you have the output of a bidirectional LSTM (BiLSTM) and want to flatten it for further processing. (Note: the original snippet called `nn.functional.bilstm`, which does not exist in PyTorch; a bidirectional LSTM is built with `nn.LSTM(..., bidirectional=True)`.)

```python
import torch
import torch.nn as nn

# A bidirectional LSTM layer (sizes are illustrative)
bilstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True,
                 bidirectional=True)

input = torch.randn(8, 100, 16)   # (batch, seq_len, features)
bilstm_out, _ = bilstm(input)     # (batch, seq_len, 2 * hidden_size)

# Use reshape to change the tensor's shape
bilstm_out = bilstm_out.reshape(bilstm_out.size(0), -1)

# Or call contiguous() first and then use view
# bilstm_out = bilstm_out.contiguous().view(bilstm_out.size(0), -1)
```

In this example, `bilstm_out` is the output of the BiLSTM layer, which we want to flatten for the next processing step. Using `reshape` ensures the shape change succeeds even if the original tensor is not contiguous in memory.

In short, when you hit this error, check whether your tensor is contiguous in memory and choose `reshape` or `view` accordingly. `reshape` is usually the safer choice because it does not require the tensor to be contiguous in memory.

So `reshape` was OK too; I tested it.
