
"view size is not compatible with input tensor's size and stride" #1

Closed
cw-tan opened this issue Jan 14, 2022 · 3 comments

cw-tan commented Jan 14, 2022

Hi, thanks for the great LBFGS optimizer. This issue is to bring to your attention an error that appeared in my use case.
I was using LBFGSNew as a general-purpose optimizer, initialized, for example, as
optimizer = LBFGSNew([tensor_1, tensor_2], lr=lr, history_size=6, max_iter=4, line_search_fn=True)
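A minimal runnable sketch of this kind of setup (the tensor shapes and the quadratic loss are illustrative placeholders, and the import path assumes the repo's lbfgsnew.py module):

import torch
from lbfgsnew import LBFGSNew  # assumed import path for the repo's lbfgsnew.py

# two independent parameter tensors (placeholder shapes)
tensor_1 = torch.randn(4, 5, requires_grad=True)
tensor_2 = torch.randn(3, requires_grad=True)

optimizer = LBFGSNew([tensor_1, tensor_2], lr=1.0, history_size=6,
                     max_iter=4, line_search_fn=True)

def closure():
    optimizer.zero_grad()
    # placeholder objective; any scalar loss involving both tensors will do
    loss = (tensor_1 ** 2).sum() + (tensor_2 ** 2).sum()
    loss.backward()
    return loss

optimizer.step(closure)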

I got the following error during an optimization step:
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.

This error did not appear when optimizing with respect to a single tensor, for example:
optimizer = LBFGSNew([tensor_1], lr=lr, history_size=6, max_iter=4, line_search_fn=True)

Based on this PyTorch issue, I'm guessing that the way the separate tensors are combined and stored in memory is not contiguous.
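To illustrate the underlying PyTorch behavior with a standalone sketch (independent of the optimizer): .view() can only reinterpret a tensor whose memory layout is compatible with the requested shape, while .reshape() falls back to a copy when it has to:

import torch

x = torch.arange(6).reshape(2, 3)
y = x.t()                        # transposed view: same storage, non-contiguous strides
print(y.is_contiguous())         # False

try:
    y.view(-1)                   # fails: a flattened view cannot reuse this layout
except RuntimeError as e:
    print(e)                     # "view size is not compatible with input tensor's size and stride ..."

print(y.reshape(-1))             # works: silently copies because a pure view is impossible
print(y.contiguous().view(-1))   # works: explicit contiguous copy, then a cheap view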

I managed to avoid the error by changing all the lines that used .view(...) to .reshape(...) or .contiguous().view(...); the affected lines are 87, 89, and 108:

view = p.grad.data.to_dense().view(-1)

view = p.grad.data.view(-1)

new_param1=p.data.clone().view(-1)

My optimization seemed to work well after this change (I opted for the .contiguous().view(...) variant), but I just wanted to check that these changes won't create other problems, since I'm not sure of the motivation for using .view(...) over .reshape(...) in the first place. Thank you!
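For reference, the change described above amounts to something like the following sketch of the three affected lines (assuming the surrounding per-parameter loop is unchanged; the eventually merged commit may differ in detail):

view = p.grad.data.to_dense().contiguous().view(-1)   # line 87

view = p.grad.data.contiguous().view(-1)              # line 89

new_param1 = p.data.clone().contiguous().view(-1)     # line 108

For flattening, .reshape(-1) and .contiguous().view(-1) behave the same way: both reuse the existing storage when the tensor is already contiguous and make a copy otherwise, so neither variant should change the optimizer's results.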

SarodYatawatta (Collaborator) commented

Thanks, I will check if the fix is OK and merge it.

SarodYatawatta added a commit that referenced this issue Jan 16, 2022
SarodYatawatta (Collaborator) commented

Fixed, thanks.

cw-tan (Author) commented Jan 16, 2022

Thanks!
