How to express Broadcast with TC language? #626
It seems that output size specification is not supported (see #377). I think this one can be solved with a `where` clause.
(I am looking at the reference: https://facebookresearch.github.io/TensorComprehensions/semantics.html)
@shekhovt That's great, thank you! What about …
Note that you can use look-up on the right-hand side, like … As I understand it, the language allows multi-dimensional reductions and lookups, but it is not going to be very useful for the simple standalone operations you mentioned. There would be an implementation in PyTorch, or it would be straightforward to do with CUDA extensions: https://pytorch.org/tutorials/advanced/cpp_extension.html
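Outside TC, this particular broadcast is a one-liner. A minimal NumPy sketch (an illustration of the equivalent operation, not part of the thread; the PyTorch version with `unsqueeze`/`expand` is analogous):

```python
import numpy as np

def broadcast(I0, K):
    # O[n, m, k] = I0[n, m] for every k in 0..K-1:
    # add a trailing axis, then replicate it K times (as a view, no copy).
    return np.broadcast_to(I0[:, :, np.newaxis], I0.shape + (K,))

I0 = np.arange(6, dtype=np.float32).reshape(2, 3)
O = broadcast(I0, 4)
print(O.shape)  # (2, 3, 4)
```

Note that `K` is passed as a plain Python integer here, which sidesteps the scalar-parameter issue discussed below.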
@shekhovt For broadcast:

```
def broadcast(float(N, M) I0, int K) -> (O) {
  O(n, m, k) = I0(n, m) where k in 0:K
}
```

How should I fill the second parameter? It is not a tensor but a scalar. If I don't fill in this value, I get:

```
terminate called after throwing an instance of 'lang::ErrorReport'
what():
expected ) but found 'ident' here:
def broadcast(float(N, M) I0, int K) -> (O) {
                              ~ <--- HERE
  O(n, m, k) = I0(n, m) where k in 0:K
}
```
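Since TC infers size variables from the shapes of its input tensors, one possible workaround (an assumption on my part, not verified against this TC version) is to bind `K` through the shape of an extra dummy input instead of an `int` parameter:

```
def broadcast(float(N, M) I0, float(K) dummy) -> (O) {
  O(n, m, k) = I0(n, m) where k in 0:K
}
```

Here `dummy` is a hypothetical placeholder tensor whose only purpose is to carry the size `K`; its values are never read.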
Can you actually compile TC with a recent CUDA and PyTorch? Right now conda installs TC with pytorch 0.3.1.post3 :( It does not seem very useful even if it can implement and autotune the op.
@shekhovt No, I am just using CUDA 9.0 + PyTorch < 1.0 + TC, building them in Docker to avoid polluting the environment, because I only need the source code it generates, which satisfies my requirements.
Most of the TC examples are for tensor reshapes or reductions. Are there any examples of tensor broadcast?
The following attempts aren't supported by TC:

```
def broadcast(float(N, M) I0) -> (O) { O(n, m, k) = I0(n, m) }
```

or

```
def broadcast(float(N, M) I0) -> (float(N, M, K) O) { O(n, m, k) = I0(n, m) }
```