Sparse reshape op #7125
@@ -1320,3 +1320,48 @@ def adv_index(inputs):
        Output tensor.
    """
    return _make.adv_index(Tuple(inputs))


def sparse_reshape(sparse_indices, prev_shape, new_shape):
    """
    Reshape a Sparse Tensor
[Review comment] Could you note that this function only supports tensors in COO format, not CSR? In other parts of the codebase, we tend to use CSR.
[Review comment] Can you explain how this convention is different from the …
[Reply] The convention is the same as …

    Parameters
    ----------
    sparse_indices : relay.Expr
        A 2-D tensor[N, n_dim] of integers containing the locations of the sparse values,
        where N is the number of sparse values and n_dim is the number of dimensions of
        the dense_shape
    prev_shape : relay.Expr
        A 1-D tensor containing the previous shape of the dense tensor
    new_shape : relay.Expr
        A 1-D tensor containing the new shape of the dense tensor

    Returns
    -------
    result : relay.Expr
        Output tensor.

    Examples
    --------
    .. code-block:: python

        sparse_indices = [[0, 0, 0],
                          [0, 0, 1],
                          [0, 1, 0],
                          [1, 0, 0],
                          [1, 2, 3]]

        prev_shape = [2, 3, 6]

        new_shape = [9, -1]

        new_sparse_indices, new_shape = relay.sparse_reshape(sparse_indices,
                                                             prev_shape,
                                                             new_shape)

        new_sparse_indices = [[0, 0],
                              [0, 1],
                              [1, 2],
                              [4, 2],
                              [8, 1]]
        new_shape = [9, 4]
    """
    return TupleWrapper(_make.sparse_reshape(sparse_indices, prev_shape, new_shape), 2)
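For context, here is a minimal sketch (not part of this diff) of how the new op could be exercised end to end with static shapes, assuming it is exposed as relay.sparse_reshape as in this patch. The indices/shape pair is the COO-style layout referred to in the review comment above; array values mirror the docstring example and variable names are illustrative.

# Sketch only -- not part of this PR. Exercises sparse_reshape through the
# Relay graph executor with static shapes.
import numpy as np
import tvm
from tvm import relay

sparse_indices_np = np.array(
    [[0, 0, 0], [0, 0, 1], [0, 1, 0], [1, 0, 0], [1, 2, 3]], dtype="int32"
)
prev_shape_np = np.array([2, 3, 6], dtype="int32")
new_shape_np = np.array([9, -1], dtype="int32")

sparse_indices = relay.var("sparse_indices", shape=sparse_indices_np.shape, dtype="int32")
prev_shape = relay.var("prev_shape", shape=prev_shape_np.shape, dtype="int32")
new_shape = relay.var("new_shape", shape=new_shape_np.shape, dtype="int32")

# The op returns a 2-tuple: (new_sparse_indices, new_shape).
out = relay.sparse_reshape(sparse_indices, prev_shape, new_shape)
func = relay.Function([sparse_indices, prev_shape, new_shape], out.astuple())
mod = tvm.IRModule.from_expr(func)

new_indices, out_shape = relay.create_executor("graph", mod=mod).evaluate()(
    sparse_indices_np, prev_shape_np, new_shape_np
)
print(new_indices.asnumpy())  # expected [[0, 0], [0, 1], [1, 2], [4, 2], [8, 1]]
print(out_shape.asnumpy())    # expected [9, 4]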
[Review comment] My main complaint is that this will fail with dynamic input shapes. From what I understand, you expect multiple chained, dynamically shaped sparse ops in the model you're trying to target, so I'm hesitant to merge this: I'm under the impression that it will not solve the larger problem you're trying to solve.
I'd really like to see you either test the model in a branch containing all three of your PRs, or write a unit test with a representative subgraph.
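For what it's worth, a unit test for the dynamic-shape case described above might look roughly like the following sketch (hypothetical, not from this PR): the row count of sparse_indices is left as relay.Any(), so the op's shape function has to cope with a dynamic first dimension, and the VM executor is used because it supports dynamically shaped tensors. Whether this actually passes is exactly the question the reviewer raises.

# Hypothetical sketch of a dynamic-shape unit test (not from this PR).
# The number of non-zero entries is unknown at compile time (relay.Any()).
import numpy as np
import tvm
from tvm import relay

sparse_indices = relay.var("sparse_indices", shape=(relay.Any(), 3), dtype="int32")
prev_shape = relay.var("prev_shape", shape=(3,), dtype="int32")
new_shape = relay.var("new_shape", shape=(2,), dtype="int32")

out = relay.sparse_reshape(sparse_indices, prev_shape, new_shape)
mod = tvm.IRModule.from_expr(
    relay.Function([sparse_indices, prev_shape, new_shape], out.astuple())
)

# The VM executor handles dynamically shaped inputs.
result = relay.create_executor("vm", mod=mod).evaluate()(
    np.array([[0, 0, 0], [0, 0, 1], [1, 2, 3]], dtype="int32"),
    np.array([2, 3, 6], dtype="int32"),
    np.array([9, -1], dtype="int32"),
)
new_indices, out_shape = result[0], result[1]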