
Commit dadfe1c

goldenxuett authored and pytorchmergebot committed
Add nondeterministic tags in tags.yaml and add the nondeterministic_seeded tag to all functions in native_functions.yaml defined as nondeterministic by alias_analysis.cpp (pytorch#81440)
- This PR adds the nondeterministic tags to tags.yaml to specify functions that may not return the same outputs when run with identical inputs.
- The nondeterministic_seeded tag is added to the functions in native_functions.yaml that are specified as nondeterministic by the alias database in https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/ir/ir.cpp#L1146
- **There may be ops that are nondeterministic but do not yet carry the nondeterministic tag. The plan is to create a test bench to determine which ops in native_functions.yaml are nondeterministic, and to add the tag to qualifying functions in a later PR.**

Pull Request resolved: pytorch#81440
Approved by: https://github.com/anjali411
1 parent 6cf0d92 commit dadfe1c
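For context, a tag declared in tags.yaml and attached to an entry in native_functions.yaml becomes queryable from Python. A minimal sketch, assuming a PyTorch build that exposes operator tags on torch.ops overloads via a .tags attribute and tags.yaml entries via the torch.Tag enum (both are assumptions of this sketch, not something this commit itself wires up):

```python
import torch

# Hedged check: assumes op tags are surfaced on torch.ops overloads via a
# .tags attribute and that tags.yaml entries appear as the torch.Tag enum.
op = torch.ops.aten.bernoulli.default
if hasattr(op, "tags") and hasattr(torch, "Tag"):
    # Expected True once this commit's `tags: nondeterministic_seeded` lands.
    print(torch.Tag.nondeterministic_seeded in op.tags)
```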

3 files changed: 33 additions, 0 deletions

aten/src/ATen/native/native_functions.yaml

Lines changed: 23 additions & 0 deletions
@@ -210,6 +210,7 @@
   variants: function
   dispatch:
     CUDA: fused_dropout_cuda
+  tags: nondeterministic_seeded

 - func: _masked_scale(Tensor self, Tensor mask, float scale) -> Tensor
   variants: function
@@ -221,6 +222,7 @@
   dispatch:
     CPU: native_dropout_cpu
     CUDA: native_dropout_cuda
+  tags: nondeterministic_seeded

 - func: native_dropout_backward(Tensor grad_output, Tensor mask, float scale) -> Tensor
   dispatch:
@@ -243,6 +245,7 @@
   dispatch:
     CompositeImplicitAutograd: dropout
     NestedTensorCPU, NestedTensorCUDA: dropout_nested
+  tags: nondeterministic_seeded

 - func: dropout_(Tensor(a!) self, float p, bool train) -> Tensor(a!)
   dispatch:
@@ -892,6 +895,7 @@
   variants: function, method
   dispatch:
     CompositeExplicitAutograd: bernoulli
+  tags: nondeterministic_seeded

 - func: bernoulli.out(Tensor self, *, Generator? generator=None, Tensor(a!) out) -> Tensor(a!)
   device_check: NoCheck # TensorIterator
@@ -924,6 +928,7 @@
 - func: bernoulli.p(Tensor self, float p, *, Generator? generator=None) -> Tensor
   device_check: NoCheck # TensorIterator
   variants: function, method
+  tags: nondeterministic_seeded

 - func: bilinear(Tensor input1, Tensor input2, Tensor weight, Tensor? bias=None) -> Tensor

@@ -3741,6 +3746,7 @@
   device_guard: False

 - func: rand(int[] size, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
+  tags: nondeterministic_seeded

 - func: rand.generator(int[] size, *, Generator? generator, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor

@@ -3749,12 +3755,15 @@
 - func: rand.generator_out(int[] size, *, Generator? generator, Tensor(a!) out) -> Tensor(a!)

 - func: rand_like(Tensor self, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, MemoryFormat? memory_format=None) -> Tensor
+  tags: nondeterministic_seeded

 - func: randint(int high, int[] size, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
+  tags: nondeterministic_seeded

 - func: randint.generator(int high, int[] size, *, Generator? generator, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor

 - func: randint.low(int low, int high, int[] size, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
+  tags: nondeterministic_seeded

 - func: randint.low_generator(int low, int high, int[] size, *, Generator? generator, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor

@@ -3767,10 +3776,13 @@
 - func: randint.low_generator_out(int low, int high, int[] size, *, Generator? generator, Tensor(a!) out) -> Tensor(a!)

 - func: randint_like(Tensor self, int high, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, MemoryFormat? memory_format=None) -> Tensor
+  tags: nondeterministic_seeded

 - func: randint_like.low_dtype(Tensor self, int low, int high, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, MemoryFormat? memory_format=None) -> Tensor
+  tags: nondeterministic_seeded

 - func: randn(int[] size, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
+  tags: nondeterministic_seeded

 - func: randn.generator(int[] size, *, Generator? generator, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor

@@ -3787,8 +3799,10 @@
 - func: randn.generator_out(int[] size, *, Generator? generator, Tensor(a!) out) -> Tensor(a!)

 - func: randn_like(Tensor self, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, MemoryFormat? memory_format=None) -> Tensor
+  tags: nondeterministic_seeded

 - func: randperm(int n, *, ScalarType? dtype=long, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
+  tags: nondeterministic_seeded

 - func: randperm.generator(int n, *, Generator? generator, ScalarType? dtype=long, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor

@@ -3957,6 +3971,7 @@

 - func: rrelu(Tensor self, Scalar lower=0.125, Scalar upper=0.3333333333333333, bool training=False, Generator? generator=None) -> Tensor
   device_check: NoCheck # TensorIterator
+  tags: nondeterministic_seeded

 - func: rrelu_(Tensor(a!) self, Scalar lower=0.125, Scalar upper=0.3333333333333333, bool training=False, Generator? generator=None) -> Tensor(a!)
   device_check: NoCheck # TensorIterator
@@ -5186,6 +5201,7 @@
   dispatch:
     CPU: _s_gamma_cpu
     CUDA: _s_gamma_cuda
+  tags: nondeterministic_seeded

 - func: _dirichlet_grad(Tensor x, Tensor alpha, Tensor total) -> Tensor
   dispatch:
@@ -5203,12 +5219,14 @@
   dispatch:
     CPU: _s_poisson_cpu
     CUDA: _s_poisson_cuda
+  tags: nondeterministic_seeded

 - func: binomial(Tensor count, Tensor prob, Generator? generator=None) -> Tensor
   device_check: NoCheck # TensorIterator
   dispatch:
     CPU: _s_binomial_cpu
     CUDA: _s_binomial_cuda
+  tags: nondeterministic_seeded

 # When more variants get ported to native, this dispatch will get more
 # complicated
@@ -7756,6 +7774,7 @@
   variants: method, function
   dispatch:
     CPU, CUDA: multinomial
+  tags: nondeterministic_seeded

 - func: lgamma.out(Tensor self, *, Tensor(a!) out) -> Tensor(a!)
   device_check: NoCheck # TensorIterator
@@ -8446,6 +8465,7 @@
     CPU, CUDA: normal
     MPS: normal_mps
     Meta: normal_meta
+  tags: nondeterministic_seeded

 - func: normal.float_Tensor_out(float mean, Tensor std, *, Generator? generator=None, Tensor(a!) out) -> Tensor(a!)
   dispatch:
@@ -8458,6 +8478,7 @@
     CPU, CUDA: normal
     MPS: normal_mps
     Meta: normal_meta
+  tags: nondeterministic_seeded

 - func: normal.Tensor_Tensor_out(Tensor mean, Tensor std, *, Generator? generator=None, Tensor(a!) out) -> Tensor(a!)
   dispatch:
@@ -8470,6 +8491,7 @@
     CPU, CUDA: normal
     MPS: normal_mps
     Meta: normal_meta
+  tags: nondeterministic_seeded

 - func: normal.float_float(float mean, float std, int[] size, *, Generator? generator=None, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor

@@ -9710,6 +9732,7 @@
   dispatch:
     CPU: rrelu_with_noise_cpu
     CUDA: rrelu_with_noise_cuda
+  tags: nondeterministic_seeded

 - func: rrelu_with_noise_backward(Tensor grad_output, Tensor self, Tensor noise, Scalar lower, Scalar upper, bool training, bool self_is_result) -> Tensor
   python_module: nn
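The behavior being tagged above is observable directly from Python: nondeterministic_seeded ops draw from a random number generator, so identical calls intentionally disagree unless the seed is fixed. A short illustration using stable public APIs (torch.manual_seed, torch.Generator); the specific ops chosen are just examples:

```python
import torch

# rand is tagged nondeterministic_seeded: two calls with identical
# arguments intentionally produce different values.
a = torch.rand(3)
b = torch.rand(3)
assert not torch.equal(a, b)  # overwhelmingly likely to differ

# Fixing the global seed makes the results reproducible.
torch.manual_seed(0)
x = torch.rand(3)
torch.manual_seed(0)
y = torch.rand(3)
assert torch.equal(x, y)

# The same holds with an explicit Generator, as accepted by the
# .generator overloads in the diff above.
g = torch.Generator().manual_seed(42)
z1 = torch.randn(3, generator=g)
g = torch.Generator().manual_seed(42)
z2 = torch.randn(3, generator=g)
assert torch.equal(z1, z2)
```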

aten/src/ATen/native/tags.yaml

Lines changed: 8 additions & 0 deletions
@@ -16,3 +16,11 @@
   desc: |
     This tag indicates that the operator doesn't have an explicit entry in
     native_functions.yaml, and instead was generated automatically by the codegen.
+- tag: nondeterministic_seeded
+  desc: |
+    This tag indicates if an operator is nondeterministically seeded (i.e., is random),
+    such that the operator intentionally produces different results when run twice on the same inputs.
+- tag: nondeterministic_bitwise
+  desc: |
+    This tag indicates if an operator doesn't guarantee bitwise equivalence
+    across different runs of an operator with identical inputs.
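The two tags draw a useful distinction: nondeterministic_seeded covers intentional randomness that seeding controls, while nondeterministic_bitwise covers ops whose outputs may differ at the bit level across runs (for example, floating-point reductions whose accumulation order varies on CUDA). Relating the latter to torch.use_deterministic_algorithms is an editorial illustration, not something this commit wires up:

```python
import torch

# Seeded randomness: fully controlled by the RNG state.
torch.manual_seed(0)
sample = torch.bernoulli(torch.full((4,), 0.5))

# Bitwise nondeterminism: not random in intent, but not guaranteed to be
# bit-identical across runs. PyTorch's existing opt-in switch makes known
# nondeterministic kernels raise instead of silently varying:
torch.use_deterministic_algorithms(True)
# ... run the workload that must be bitwise reproducible ...
torch.use_deterministic_algorithms(False)
```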

test/test_public_bindings.py

Lines changed: 2 additions & 0 deletions
@@ -247,6 +247,8 @@ def test_no_new_bindings(self):
             "view_copy",
             "generated",
             "dynamic_output_shape",
+            "nondeterministic_bitwise",
+            "nondeterministic_seeded",
         }
         torch_C_bindings = {elem for elem in dir(torch._C) if not elem.startswith("_")}
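The allowlist update reflects that the two tag names are expected to surface as public bindings on torch._C, which is exactly what this test enumerates. A hedged check mirroring the test's own set comprehension:

```python
import torch

# Mirrors the test: collect public (non-underscore) names bound on torch._C.
public = {name for name in dir(torch._C) if not name.startswith("_")}
print("nondeterministic_seeded" in public)   # expected True after this commit
print("nondeterministic_bitwise" in public)  # expected True after this commit
```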
