
Returns float from complex angle #36896

Closed · mruberry wants to merge 3 commits from the angle_complex_to_float branch

Conversation

mruberry (Collaborator):
Updates angle to return a float tensor by default when given complex inputs. This behavior is consistent with Python, NumPy, and C++. The implementation follows the existing implementation of complex abs, extracting the logic into a common function used by both abs and angle.

The test for complex abs's behavior in test_type_promotion.py is updated to also test the behavior of complex angle by comparing its results to NumPy's.
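
For context, the C++ standard library already follows this convention: std::arg maps a complex value to a real angle. A minimal standalone illustration (not part of this PR's diff):

#include <complex>
#include <iostream>

int main() {
  std::complex<float> z(0.0f, 1.0f); // the imaginary unit i
  float theta = std::arg(z);         // angle of a complex<float> is a plain float
  std::cout << theta << "\n";        // prints approximately 1.5708 (pi/2)
}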

@mruberry mruberry requested a review from gchanan April 19, 2020 11:05
@mruberry mruberry requested a review from anjali411 April 19, 2020 11:05
@mruberry mruberry added labels module: complex ("Related to complex number support in PyTorch") and module: numpy ("Related to numpy support, and also numpy compatibility of our operators") on Apr 19, 2020
Comment on lines +91 to +92
Tensor result = at::empty({0}, self.options());
return out_impl(result, self);
Contributor:
Why do we need this? Don't we only call this function for complex tensors?

Contributor:

Ahh, I see: you are calling this method from abs for non-complex tensors as well. Given its name, calling this method for non-complex inputs is a little misleading. So I think we can do one of two things here:

  1. Change the name of the function to make it more generic.
  2. Call this method only for complex inputs.

Whichever you prefer.

Contributor:

Maybe just add a word or two, like unary_op_impl_with_complex_to_float.

Collaborator (Author):

Good idea.

Contributor:

I still think it will not be intuitive to a developer who is unfamiliar with this PR and looks at this code; it could be confusing. I would err on the side of either removing "complex_to_float" from the name or adding descriptive comments specifically noting that this function is also used for real-valued inputs.

Collaborator (Author):

I updated the comment.

@@ -53,6 +80,18 @@ static inline Tensor unary_op_impl(const Tensor& self, OutImpl& out_impl) {
return out_impl(result, self);
}

template <typename OutImpl>
static inline Tensor unary_op_impl_complex_to_float(const Tensor& self, OutImpl& out_impl) {
if (self.is_complex()) {
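
The diff above is truncated at the complex branch. A sketch of how the full helper plausibly continues, reconstructed from this thread's discussion (deriving the float dtype via c10::toValueType is an assumption, not a quote of the merged code):

template <typename OutImpl>
static inline Tensor unary_op_impl_complex_to_float(const Tensor& self, OutImpl& out_impl) {
  if (self.is_complex()) {
    // Complex input: allocate a float result of the corresponding value type
    // (e.g. complex64 -> float32), then dispatch to the out variant.
    const auto float_type = c10::toValueType(self.scalar_type()); // assumed helper
    Tensor result = at::empty({0}, self.options().dtype(float_type));
    return out_impl(result, self);
  }

  // Non-complex input: behave exactly like unary_op_impl above.
  Tensor result = at::empty({0}, self.options());
  return out_impl(result, self);
}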
Contributor:

Maybe add TORCH_INTERNAL_ASSERT(self.is_complex());
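
Concretely, if the complex-only variant were chosen (option 2 discussed above), the suggested assert would sit at the top of the helper. A sketch under that assumption:

template <typename OutImpl>
static inline Tensor unary_op_impl_complex_to_float(const Tensor& self, OutImpl& out_impl) {
  // Enforce the contract the name implies: complex inputs only.
  TORCH_INTERNAL_ASSERT(self.is_complex());
  const auto float_type = c10::toValueType(self.scalar_type()); // assumed helper
  Tensor result = at::empty({0}, self.options().dtype(float_type));
  return out_impl(result, self);
}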

@anjali411 anjali411 self-requested a review April 21, 2020 18:49
@anjali411 (Contributor) left a comment:
Left some minor comments but looks good otherwise :)

dr-ci bot commented Apr 21, 2020:

💊 Build failures summary and remediations

As of commit 5ee8db2 (more details on the Dr. CI page):


  • 2/2 failures introduced in this PR

🕵️ 2 new failures recognized by patterns

The following build failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_linux_xenial_py3_6_gcc5_4_build (1/2)

Step: "Build"

Automatic merge failed; fix conflicts and then commit the result.
CONFLICT (add/add): Merge conflict in .circleci/verbatim-sources/workflows-docker-builder.yml 
Auto-merging .circleci/verbatim-sources/workflows-docker-builder.yml 
CONFLICT (add/add): Merge conflict in .circleci/verbatim-sources/pytorch-job-specs.yml 
Auto-merging .circleci/verbatim-sources/pytorch-job-specs.yml 
CONFLICT (add/add): Merge conflict in .circleci/verbatim-sources/commands.yml 
Auto-merging .circleci/verbatim-sources/commands.yml 
CONFLICT (add/add): Merge conflict in .circleci/docker/build.sh 
Auto-merging .circleci/docker/build.sh 
CONFLICT (add/add): Merge conflict in .circleci/config.yml 
Auto-merging .circleci/config.yml 
Automatic merge failed; fix conflicts and then commit the result. 

See CircleCI build pytorch_xla_linux_bionic_py3_6_clang9_build (2/2)

Step: "Build"

Apr 22 17:42:18 torch_xla/csrc/ops/ops.cpp:179:37: error: no member named 'hardsigmoid_backward' in namespace 'c10::aten'
[truncated clang command lines for torch_xla/csrc/ops/*.cpp elided; the relevant errors follow]
Apr 22 17:42:18 torch_xla/csrc/ops/ops.cpp:168:37: error: no member named 'hardsigmoid' in namespace 'c10::aten' 
Apr 22 17:42:18   return GenericOp(OpKind(at::aten::hardsigmoid), {input}, input.shape(), 
Apr 22 17:42:18                           ~~~~~~~~~~^ 
Apr 22 17:42:18 torch_xla/csrc/ops/ops.cpp:179:37: error: no member named 'hardsigmoid_backward' in namespace 'c10::aten' 
Apr 22 17:42:18   return GenericOp(OpKind(at::aten::hardsigmoid_backward), {grad_output, input}, 
Apr 22 17:42:18                           ~~~~~~~~~~^ 
Apr 22 17:42:24 2 errors generated. 
Apr 22 17:42:24 /opt/conda/lib/python3.6/site-packages/torch/utils/cpp_extension.py:306: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend. 
Apr 22 17:42:24   warnings.warn(msg.format('we could not find ninja.')) 
Apr 22 17:42:24 error: command 'clang-9' failed with exit status 1 
Apr 22 17:42:24 + cleanup 
Apr 22 17:42:24 + retcode=1 
Apr 22 17:42:24 + set +x 
Apr 22 17:42:24 =================== sccache compilation log =================== 


@mruberry mruberry requested a review from anjali411 April 22, 2020 02:05
@facebook-github-bot (Contributor) left a comment:
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.


@facebook-github-bot (Contributor) commented:
@mruberry merged this pull request in f771c96.

@mruberry mruberry deleted the angle_complex_to_float branch May 16, 2020 04:58
Labels: Merged · module: complex · module: numpy

4 participants