Implement Quantized Version of Threshold Function #39352

Closed

paulshaoyuqiao

Summary:
This task implements the quantized backend kernel for the threshold function, which replaces the entries of a tensor that are less than or equal to a given threshold with a specified value.

The corresponding Python implementation and unit test are also added.
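
For illustration, here is a minimal usage sketch of the added functional (a sketch only: it assumes `torch.nn.quantized.functional.threshold` takes the same `(input, threshold, value)` arguments as `torch.nn.functional.threshold`, and the tensor values and quantization parameters below are arbitrary):

```python
import torch
import torch.nn.quantized.functional as qF

# Quantize a small float tensor with arbitrary per-tensor parameters.
x = torch.randn(4)
qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=128, dtype=torch.quint8)

# Entries <= 0.5 are replaced with -1.0; the output reuses the input's
# scale and zero_point.
qy = qF.threshold(qx, 0.5, -1.0)
print(qy.dequantize())
```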

Test Plan:

  1. On a devserver, build PyTorch from source by running the command `buck build mode/dev //caffe2:torch`
  2. Run the unit test through the command `buck test mode/dev //caffe2/test:quantization -- test_qthreshold`

Differential Revision: D21822446

@facebook-github-bot

This pull request was exported from Phabricator. Differential Revision: D21822446

@dr-ci

dr-ci bot commented Jun 1, 2020

💊 CI failures summary and remediations

As of commit 4a7f9b5 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


1 failure confirmed as flaky and can be ignored:

  • pytorch_linux_bionic_py3_8_gcc9_test

@z-a-f left a comment

Looks good -- minor changes

Review comments on: aten/src/ATen/native/quantized/cpu/qthreshold.cpp, test/quantization/test_quantized_op.py, torch/nn/quantized/functional.py

@z-a-f left a comment

LGTM -- will leave it to other reviewers, in case they have comments

@vkuzo

vkuzo commented Jun 5, 2020

(optional) since output quantization parameters are set equal to the input ones, would it make sense to do this in the quantized domain without doing dq -> float -> q? Doesn't have to be in scope for this task.
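
For reference, a rough sketch of what such an integer-domain version might look like (hypothetical, not part of this PR; `threshold_in_int_domain` is an illustrative name, and results near the threshold can differ from the dq -> float -> q kernel by up to one quantization step because of rounding):

```python
import torch

def threshold_in_int_domain(qx, threshold, value):
    # Hypothetical sketch: quantize the threshold and replacement value with
    # the input's qparams, then operate directly on the integer representation.
    scale, zp = qx.q_scale(), qx.q_zero_point()
    q_threshold = round(threshold / scale) + zp
    # Saturate the replacement value to the quint8 range (assumes torch.quint8).
    q_value = max(0, min(255, round(value / scale) + zp))
    int_x = qx.int_repr()
    # Compare in a wider integer dtype so an out-of-range q_threshold is handled.
    keep = int_x.to(torch.int32) > q_threshold
    int_y = torch.where(keep, int_x, torch.full_like(int_x, q_value))
    # Reinterpret the integers with the unchanged scale/zero_point; no
    # dequantize/requantize round trip.
    return torch._make_per_tensor_quantized_tensor(int_y, scale, zp)
```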

Summary:
Pull Request resolved: pytorch#39352

This task implements the quantized backend kernel for the threshold function, which replaces the entries of a tensor that are less than or equal to a given threshold with a specified value.

The corresponding Python implementation and unit test are also added.

Test Plan:
1. On a devserver, build PyTorch from source by running the command `buck build mode/dev //caffe2:torch`
2. Run the unit test through the command
`buck test mode/dev //caffe2/test:quantization -- test_qthreshold`

Reviewed By: z-a-f

Differential Revision: D21822446

fbshipit-source-id: 4b3e86508642ec8c30d5087e3b76c026fac72c54

@facebook-github-bot

This pull request has been merged in 6a75f65.
