[TOPI] Add ops compute #323
Conversation
Most declaration functions are meant to be called directly by the user, following the same convention as numpy and the mxnet/pytorch/tf style. Remove the compute prefix from them.
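As a hedged illustration of the convention being proposed (the placeholder shape and the topi.nn.softmax call below are assumed for the example, not taken from this diff):

import tvm
import topi

# Declaration functions are called directly, numpy/mxnet style,
# with no 'compute' prefix on the name.
x = tvm.placeholder((4, 10), name='x')
y = topi.nn.softmax(x)  # rather than softmax_output(...) or compute_softmax(...)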
topi/python/topi/nn/softmax.py (outdated)

import tvm

@tvm.tag_scope(tag='softmax')
def softmax_output(x):
rename as softmax
topi/python/topi/nn/softmax.py (outdated)

assert len(x.shape) == 2, "only support 2-dim softmax"
M, N = x.shape
k = tvm.reduce_axis((0, N), name='k')
expsum = tvm.compute((M, ), lambda i: \
This softmax is not numerically stable. To make it numerically stable, you need to subtract the maximum value along the columns first.
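A minimal sketch of the numerically stable variant the reviewer is asking for, written against the same tvm.compute API as the diff; the max_elem name and the two-pass structure are illustrative, not the merged code:

import tvm

def softmax(x):
    # Sketch only: numerically stable 2-D softmax, per the review comment.
    assert len(x.shape) == 2, "only support 2-dim softmax"
    m, n = x.shape
    k = tvm.reduce_axis((0, n), name='k')
    # Row-wise maximum, so the exponents below can never overflow.
    max_elem = tvm.compute((m,), lambda i: tvm.max(x[i, k], axis=k))
    k2 = tvm.reduce_axis((0, n), name='k2')
    expsum = tvm.compute(
        (m,), lambda i: tvm.sum(tvm.exp(x[i, k2] - max_elem[i]), axis=k2))
    return tvm.compute(
        x.shape, lambda i, j: tvm.exp(x[i, j] - max_elem[i]) / expsum[i])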
Returns
-------
output : tvm.Tensor
There should be three output arguments documented here, according to what gets returned.
dheight = tvm.reduce_axis((0, kernel_height))
dwidth = tvm.reduce_axis((0, kernel_width))

temp = tvm.compute((batch, channel, padded_height, padded_width), lambda i, c, h, w: \
As discussed with @Huyuwei, we should add an explicit pad operator along with dilate, and call that from pooling and conv.
See #316 for a reference on supporting padding of arbitrary-dimension tensors.
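A rough sketch of what such an explicit pad operator could look like under the same tvm.compute API; the pad name and the pad_before/pad_after parameters are assumptions for illustration, not the interface that was merged:

import tvm

def pad(data, pad_before, pad_after, pad_value=0.0):
    # Hypothetical explicit pad operator: pads every dimension of `data`
    # by pad_before[i] / pad_after[i] elements filled with pad_value.
    n = len(data.shape)
    assert len(pad_before) == n and len(pad_after) == n
    out_shape = tuple(data.shape[i] + pad_before[i] + pad_after[i]
                      for i in range(n))

    def _pad(*indices):
        in_range = []
        index_tuple = []
        for i in range(n):
            index_tuple.append(indices[i] - pad_before[i])
            in_range.append(indices[i] >= pad_before[i])
            in_range.append(indices[i] < data.shape[i] + pad_before[i])
        # Read the input inside its bounds, otherwise emit the fill value.
        return tvm.select(tvm.all(*in_range), data(*index_tuple),
                          tvm.const(pad_value, data.dtype))

    return tvm.compute(out_shape, _pad)

Pooling and conv could then call this helper to build their padded temporaries instead of inlining the boundary logic, which is the design the reviewers are arguing for.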
* Remove 'compute' and add assert for safety
* Add document
* fix lint
* fix softmax
…he#323) * Explain how to generate module library * Small fix