
Remove strict TensorType.broadcastable usage from local_elemwise_alloc #1102

Merged

Conversation

@brandonwillard (Member) commented Aug 8, 2022:

Closes #1094

  • Inspect the types of graphs produced by the broadcasting used in this approach and make sure they're fully reduced.
  • Investigate the effects on in-placing, given the new BroadcastTo Ops that are introduced. Would reordering the broadcasted element(s) help guarantee that the resulting Elemwise can be in-placed?
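The core idea behind removing the Alloc can be illustrated outside of Aesara. This is a plain-NumPy sketch (not Aesara code): explicitly materializing one operand at the other's shape before an elementwise multiply is redundant, because the elementwise operation broadcasts anyway.

```python
import numpy as np

x = np.random.rand(3)     # shape (3,)
y = np.random.rand(2, 3)  # shape (2, 3)

# "Alloc"-style version: materialize x at y's shape before multiplying.
explicit = np.broadcast_to(x, y.shape) * y

# Rewritten version: let the elementwise multiply broadcast x itself.
implicit = x * y

# Both perform the same scalar multiplications, so the results match.
assert np.array_equal(explicit, implicit)
```

This equivalence is what lets the rewrite drop the materialization step when the shapes are known to broadcast.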

@brandonwillard brandonwillard force-pushed the fix-bcast-local_elemwise_alloc branch from 71b1d20 to ba4aba0 Compare August 8, 2022 04:29
@brandonwillard brandonwillard changed the title Remove stict TensorType.broadcastable usage from local_elemwise_alloc Remove strict TensorType.broadcastable usage from local_elemwise_alloc Aug 8, 2022
@brandonwillard brandonwillard marked this pull request as draft August 8, 2022 04:29
@brandonwillard brandonwillard self-assigned this Aug 8, 2022
@brandonwillard brandonwillard force-pushed the fix-bcast-local_elemwise_alloc branch 3 times, most recently from 786b141 to 0d5a126 Compare August 11, 2022 16:18
@brandonwillard brandonwillard marked this pull request as ready for review August 11, 2022 16:24
@brandonwillard (Member Author) commented:

Here are the graphs from each local_elemwise_alloc test case:

tests/tensor/test_basic_opt.py::test_local_elemwise_alloc[<lambda>-x_shape0-y_shape0]
Elemwise{mul,no_inplace} [id A] 3
 |Alloc [id B] 2
 | |TensorConstant{1} [id C]
 | |Shape_i{0} [id D] 1
 | | |y [id E]
 | |Shape_i{1} [id F] 0
 |   |y [id E]
 |x [id G]
PASSED
tests/tensor/test_basic_opt.py::test_local_elemwise_alloc[<lambda>-x_shape1-y_shape1]
Elemwise{mul,no_inplace} [id A] 3
 |Alloc [id B] 2
 | |TensorConstant{1} [id C]
 | |Shape_i{0} [id D] 1
 | | |y [id E]
 | |Shape_i{1} [id F] 0
 |   |y [id E]
 |x [id G]
PASSED
tests/tensor/test_basic_opt.py::test_local_elemwise_alloc[<lambda>-x_shape2-y_shape2]
Elemwise{mul,no_inplace} [id A] 6
 |x [id B]
 |Assert{msg=Shapes must be equal} [id C] 5
   |y [id D]
   |All [id E] 4
     |Elemwise{eq,no_inplace} [id F] 3
       |MakeVector{dtype='int64'} [id G] 2
       | |Shape_i{0} [id H] 1
       | | |y [id D]
       | |Shape_i{1} [id I] 0
       |   |y [id D]
       |TensorConstant{[2 3]} [id J]
PASSED
tests/tensor/test_basic_opt.py::test_local_elemwise_alloc[<lambda>-x_shape3-y_shape3]
Elemwise{Mul}[(0, 0)] [id A] 2
 |Alloc [id B] 1
 | |x [id C]
 | |TensorConstant{1} [id D]
 | |TensorConstant{3} [id E]
 |InplaceDimShuffle{x,x} [id F] 0
   |y [id G]
PASSED
tests/tensor/test_basic_opt.py::test_local_elemwise_alloc[<lambda>-x_shape4-y_shape4]
Elemwise{mul,no_inplace} [id A] 2
 |InplaceDimShuffle{x} [id B] 1
 | |y [id C]
 |Alloc [id D] 0
   |TensorConstant{1} [id E]
   |x [id F]
XPASS (Not implemented)
tests/tensor/test_basic_opt.py::test_local_elemwise_alloc[<lambda>-x_shape5-y_shape5]
Elemwise{mul,no_inplace} [id A] 1
 |Alloc [id B] 0
 | |x [id C]
 | |TensorConstant{15} [id D]
 | |TensorConstant{1} [id E]
 |y [id F]
PASSED
tests/tensor/test_basic_opt.py::test_local_elemwise_alloc[<lambda>-x_shape6-y_shape6]
Elemwise{mul,no_inplace} [id A] 6
 |Assert{msg=Shapes must be equal} [id B] 5
 | |x [id C]
 | |All [id D] 4
 |   |Elemwise{eq,no_inplace} [id E] 3
 |     |MakeVector{dtype='int64'} [id F] 2
 |     | |Shape_i{0} [id G] 1
 |     | | |x [id C]
 |     | |Shape_i{1} [id H] 0
 |     |   |x [id C]
 |     |TensorConstant{[15  2]} [id I]
 |y [id J]
PASSED
tests/tensor/test_basic_opt.py::test_local_elemwise_alloc[<lambda>-x_shape7-y_shape7]
Elemwise{Mul}[(0, 0)] [id A] 2
 |Alloc [id B] 1
 | |x [id C]
 | |TensorConstant{15} [id D]
 | |TensorConstant{1} [id E]
 |Alloc [id F] 0
   |y [id G]
   |TensorConstant{15} [id D]
   |TensorConstant{1} [id E]
PASSED
tests/tensor/test_basic_opt.py::test_local_elemwise_alloc[<lambda>-x_shape8-y_shape8]
Elemwise{mul,no_inplace} [id A] 12
 |Assert{msg=Shapes must be equal} [id B] 11
 | |x [id C]
 | |All [id D] 10
 |   |Elemwise{eq,no_inplace} [id E] 9
 |     |MakeVector{dtype='int64'} [id F] 8
 |     | |Shape_i{0} [id G] 7
 |     | | |x [id C]
 |     | |Shape_i{1} [id H] 6
 |     |   |x [id C]
 |     |TensorConstant{[15  2]} [id I]
 |Assert{msg=Shapes must be equal} [id J] 5
   |y [id K]
   |All [id L] 4
     |Elemwise{eq,no_inplace} [id M] 3
       |MakeVector{dtype='int64'} [id N] 2
       | |Shape_i{0} [id O] 1
       | | |y [id K]
       | |Shape_i{1} [id P] 0
       |   |y [id K]
       |TensorConstant{[15  2]} [id I]
PASSED
tests/tensor/test_basic_opt.py::test_local_elemwise_alloc[<lambda>-x_shape9-y_shape9]
Elemwise{mul,no_inplace} [id A] 7
 |InplaceDimShuffle{1,0} [id B] 6
 | |Assert{msg=Shapes must be equal} [id C] 5
 |   |x [id D]
 |   |All [id E] 4
 |     |Elemwise{eq,no_inplace} [id F] 3
 |       |MakeVector{dtype='int64'} [id G] 2
 |       | |Shape_i{0} [id H] 1
 |       | | |x [id D]
 |       | |Shape_i{1} [id I] 0
 |       |   |x [id D]
 |       |TensorConstant{[15  2]} [id J]
 |y [id K]
PASSED
tests/tensor/test_basic_opt.py::test_local_elemwise_alloc[<lambda>-x_shape10-y_shape10]
Elemwise{mul,no_inplace} [id A] 2
 |InplaceDimShuffle{x,0,1} [id B] 1
 | |x [id C]
 |InplaceDimShuffle{x,0,1} [id D] 0
   |y [id E]
PASSED
tests/tensor/test_basic_opt.py::test_local_elemwise_alloc[<lambda>-x_shape11-y_shape11]
Elemwise{mul,no_inplace} [id A] 2
 |InplaceDimShuffle{x,1,0} [id B] 1
 | |x [id C]
 |InplaceDimShuffle{x,0,1} [id D] 0
   |y [id E]
PASSED

Aside from the shape-specifying inputs to the Alloc Ops in these tests, the inputs have no static shape information. None of the resulting graphs contain BroadcastTo Ops, so it looks like our rewrites are doing their jobs.
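The Assert{msg=Shapes must be equal} nodes in several of the graphs above play the role of a runtime guard when static shape information is unavailable. Here is a rough plain-Python sketch (the helper name is hypothetical, not an Aesara API) of what that guard amounts to:

```python
import numpy as np

def checked_mul(x, y, expected_shape):
    # Mirrors Assert{msg=Shapes must be equal}: when the rewrite removes an
    # Alloc whose shape cannot be verified statically, the graph instead
    # checks the shape at runtime and fails loudly on a mismatch.
    assert tuple(x.shape) == tuple(expected_shape), "Shapes must be equal"
    return x * y

x = np.ones((15, 2))
y = np.ones((15, 2))
out = checked_mul(x, y, (15, 2))  # shapes agree, so the multiply proceeds
```

The point is that removing the Alloc does not silently change semantics: the shape constraint it encoded is preserved as an explicit runtime assertion.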

@brandonwillard brandonwillard force-pushed the fix-bcast-local_elemwise_alloc branch 5 times, most recently from 62c3aa6 to f1267ec Compare August 12, 2022 04:57
@codecov bot commented Aug 12, 2022:

Codecov Report

Merging #1102 (b07bc49) into main (cfc931f) will decrease coverage by 0.01%.
The diff coverage is 80.70%.

Impacted file tree graph

@@            Coverage Diff             @@
##             main    #1102      +/-   ##
==========================================
- Coverage   79.28%   79.27%   -0.02%     
==========================================
  Files         151      151              
  Lines       48005    48004       -1     
  Branches    10922    10919       -3     
==========================================
- Hits        38061    38053       -8     
- Misses       7441     7447       +6     
- Partials     2503     2504       +1     
Impacted Files Coverage Δ
aesara/tensor/subtensor_opt.py 86.81% <71.42%> (-0.31%) ⬇️
aesara/tensor/basic_opt.py 85.69% <86.66%> (-0.21%) ⬇️
aesara/tensor/extra_ops.py 88.92% <92.85%> (-0.02%) ⬇️
aesara/tensor/math_opt.py 87.14% <0.00%> (-0.13%) ⬇️

Resolved review threads: tests/tensor/test_extra_ops.py; tests/tensor/test_basic_opt.py (outdated). The following excerpt anchored a review discussion:
if np.any(idx != np.arange(length)):
    return False
elif idx.owner is not None and isinstance(idx.owner.op, ARange):
    try:
A Contributor commented:
A lot of missing coverage, I guess that was already the case before?

The Member Author replied:

Yes, unfortunately.

The Member Author added:

Now that I think about it, that whole ARange branch is effectively unreachable and/or pointless: if idx.owner.inputs are all constants (as the rewrite requires), the entire node would have been constant-folded before reaching that point (i.e., there would be no ARange node at all).
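The constant-folding argument can be illustrated with plain NumPy: when every input to an arange is a constant, the result is itself a constant array, so the constant branch's identity check already covers it and the ARange branch is dead code.

```python
import numpy as np

# With constant inputs, constant folding would replace the ARange node
# with this constant array before the rewrite ever inspects it.
length = 5
idx = np.arange(0, length, 1)

# The constant branch's identity test then applies directly:
is_identity = not np.any(idx != np.arange(length))
assert is_identity
```

This is only a sketch of the reasoning in the comment above, not the actual rewrite code.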

@brandonwillard brandonwillard force-pushed the fix-bcast-local_elemwise_alloc branch from f1267ec to b07bc49 Compare August 13, 2022 22:16
@brandonwillard brandonwillard merged commit 2ed28f3 into aesara-devs:main Aug 13, 2022
@brandonwillard brandonwillard deleted the fix-bcast-local_elemwise_alloc branch August 13, 2022 23:37
Labels: bug (Something isn't working), graph rewriting, important
Projects: None yet
Development: Successfully merging this pull request may close these issues:

Incorrect broadcasting logic in local_elemwise_alloc

2 participants