Commit 0a7d8b4
Create a quantized in-place version of the CUDA ReLU function, relu_quantized_cuda_. (pytorch#85670)
Summary:
This PR and pytorch#85669 allow the relu function to run on a quantized tensor on CUDA, i.e. torch.relu(qa) for a quantized tensor qa on CUDA.
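As a quick illustration of what this enables, here is a minimal usage sketch. It is not taken from the PR diff; the tensor names and quantization parameters are illustrative, and it assumes torch.quantize_per_tensor accepts CUDA tensors for quint8.

```python
# Minimal usage sketch (illustrative, not from the PR diff).
# Assumes torch.quantize_per_tensor supports CUDA tensors for quint8.
import torch

x = torch.randn(8, device="cuda")
qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=64, dtype=torch.quint8)

out = torch.relu(qx)  # out-of-place quantized ReLU on CUDA (pytorch#85669)
qx.relu_()            # in-place quantized ReLU on CUDA (this PR)
```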
Test Plan:
python test/test_quantization.py
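For reference, a hand-rolled check in the same spirit as the test plan might look like the following hedged sketch; the values are illustrative and this is not the actual test added by the PR.

```python
# Hedged sketch of a correctness check, in the spirit of
# test/test_quantization.py; not the actual test added by this PR.
import torch

x = torch.tensor([-1.0, 0.0, 2.0], device="cuda")
qx = torch.quantize_per_tensor(x, scale=0.5, zero_point=10, dtype=torch.quint8)

expected = torch.relu(x)
# Out-of-place quantized ReLU (pytorch#85669), compared against the float reference.
assert torch.allclose(torch.relu(qx).dequantize(), expected)
# In-place quantized ReLU (this PR).
qx.relu_()
assert torch.allclose(qx.dequantize(), expected)
```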
Previous PR that has been reverted: pytorch#85502.
Pull Request resolved: pytorch#85670
Approved by: https://github.com/dzdang, https://github.com/z-a-f1
Parent: eb650ab
File tree (3 files changed, +22 −12 lines):
- aten/src/ATen/native
- quantized/cuda
- test/quantization/core
[Per-file diffs not captured in the extraction. Recoverable outline: the first file gains one line around line 4333; the second file is new, with 21 added lines; the third file loses 12 lines (original lines 258–269).]