
[WIP][Pytorch] add operator copy_ support #6049
Closed
wants to merge 1 commit into from

Conversation

@huajsj (Contributor) commented Jul 14, 2020

Issue:
Compiling a PyTorch network with TVM fails because the copy_ operator is not supported.

Solution:
Add support for the PyTorch copy_ operator to TVM.
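The diff itself is not shown in this thread. As a rough, hypothetical sketch of the kind of converter being discussed (following the usual pattern in python/tvm/relay/frontend/pytorch.py; the helper name, the registration, and the choice of _op.copy below are illustrative, not the actual patch):

from tvm.relay import op as _op

def _copy():
    # Hypothetical, naive conversion: return a Relay copy of the source
    # tensor. This deliberately ignores the in-place/view semantics that
    # are discussed further down in this thread.
    def _impl(inputs, input_types):
        # For aten::copy_(dst, src, ...), inputs[0] is the destination
        # and inputs[1] is the source.
        return _op.copy(inputs[1])
    return _impl

# Registered next to the other converters in the frontend's convert map,
# e.g. "aten::copy_": _copy()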

@liangfu (Member) commented Jul 14, 2020

@masahi would you please take a look?

@masahi (Member) commented Jul 14, 2020

@huajsj please add a test. Also, the _ suffix in copy_ likely means it is an in-place op, so it would be better to also add support for the non in-place version aten::copy (with the same conversion as copy_).

@t-vi (Contributor) left a comment

In addition to the missing test:

  • Could you explain your use case a bit? In particular, what PyTorch constructs map to copy_ in the graph and for which version.
  • One internal use of copy_ in PyTorch is as the implementation of .to and related shortcuts, but at least recent versions of PyTorch seem to export the proper commands for .to, .double and .cuda (with .double mapping to some less-than-official _cast_Double op).
  • The other main use of copy_ I'm aware of is for assigning tensors to slices and other views (i.e. t[:1] = s but also t.diag().copy_(foo) to set the diagonal), but clone would not help here. I'm not sure that TVM will handle view semantics.
  • I don't think mapping to clone captures the semantics of copy_(a, b) even in the absence of views: after copying, a has the values of b but keeps the device and dtype of a, and copy_ also returns a (see the short snippet after this list).
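A small, self-contained illustration of that last point (the tensor names here are just for the example):

import torch

a = torch.zeros(3, dtype=torch.float64)   # destination
b = torch.arange(3, dtype=torch.float32)  # source
out = a.copy_(b)
print(out.dtype)        # torch.float64: values come from b, dtype stays a's
print(out is a)         # True: copy_ returns the destination tensor
print(b.clone().dtype)  # torch.float32: clone keeps the source's dtype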

I don't think we have the analysis to support in-place ops properly (and note that PyTorch has an un-inplacing pass for the easy cases, but I don't know whether we can use it / are using it).
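For reference, the un-inplacing pass alluded to is exposed on TorchScript graphs roughly as below; whether it helps with copy_, and whether TVM's frontend runs it, is exactly the open question, so treat this as a sketch of how one could experiment rather than as something this PR does:

import torch

def f(x):
    y = torch.zeros_like(x)
    y.add_(x)                       # an "easy" in-place case
    return y

traced = torch.jit.trace(f, torch.randn(2, 3))
graph = traced.graph
torch._C._jit_pass_remove_inplace_ops(graph)  # rewrites e.g. aten::add_ -> aten::add where safe
print(graph)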

@masahi I don't think there is an aten::copy (the non in-place version is clone()), and it would not help in the cases above.

@huajsj (Contributor, author) commented Jul 14, 2020

Hi @t-vi, @masahi, @liangfu,

Thanks for the review; my comments follow.

Regards
Hua

#1 About the use case and why copy_: from my understanding, copy_ is not only a shortcut for .to. The biggest difference between copy_ and .clone/.to is that the in-place destination keeps its stride, storage, and size. For example, in the following case, .clone and .to cause b to lose its stride/storage information (running environment is PyTorch 1.5):
import numpy as np
import torch

a = torch.from_numpy(np.array((11.0, 12.0, 13.0)).astype('float32'))
a = a.expand(2, 3)    # broadcast view: shape (2, 3), stride (0, 1)
b = torch.from_numpy(np.array((1.0, 2.0, 3.0)).astype('float32'))
b = b.repeat(2, 1)    # real storage: shape (2, 3), stride (3, 1)
b.copy_(a)
print(b.stride())     # b keeps its own stride and storage
b = torch.clone(a)
print(b.stride())     # b is now a freshly allocated tensor
b = a.to("cpu")
print(b.stride())     # b now aliases a, so it reports a's expanded stride

#2 About the t.diag().copy_(a) use case: it seems the PyTorch frontend currently does not support the diag operator, so we may not be able to test it with copy_.

#3 About the copy operator: as @t-vi mentioned, clone is the non in-place one.

#4 I agree that clone does not capture the semantics of copy_, and I found some problems during testing. I will set this PR to WIP and push the fix and test cases later.

@huajsj changed the title from [Pytorch] add operator copy_ support to [WIP][Pytorch] add operator copy_ support on Jul 14, 2020
@tqchen added the status: need update label on Jul 25, 2020
@huajsj (Contributor, author) commented Jul 27, 2020

Closing this PR for now; I will create a new PR once I find a better solution.

Labels
status: need update (need update based on feedback)