
[BUG] nn.dense does not work for dynamic shapes #8441

Closed · ymwangg opened this issue Jul 10, 2021 · 1 comment

Labels
relay:op src/relay/op

Comments

ymwangg (Contributor) commented Jul 10, 2021

It looks like the native TVM implementation of nn.dense does not handle dynamic shapes correctly, although using external libraries such as MKL or cuBLAS causes no issues. The following code reproduces the problem.

import numpy as np
import tvm
from tvm import relay
from tvm.relay import create_executor, Any

# Both dimensions of each input are dynamic.
A = relay.var("A", shape=[Any(), Any()], dtype="float32")
B = relay.var("B", shape=[Any(), Any()], dtype="float32")

# nn.dense(A, W) computes A @ W^T, so dense(A, B^T) == A @ B.
C = relay.nn.dense(A, relay.transpose(B))
f = relay.Function([A, B], C)
mod = tvm.IRModule.from_expr(f)

# "llvm -libs=mkl" passes; the native "llvm" target fails.
for target in ["llvm -libs=mkl", "llvm"]:
    dev = tvm.device(target, 0)
    executor = create_executor(kind="vm", mod=mod, device=dev, target=target)
    a = np.random.uniform(size=[10, 10]).astype("float32")
    b = np.random.uniform(size=[10, 10]).astype("float32")
    res = executor.evaluate()(a, b).asnumpy()
    print(np.sum(res))
    ref = np.matmul(a, b)
    print(np.sum(ref))
    np.testing.assert_allclose(res, ref, rtol=1e-5)

Please note that nn.batch_matmul handles such dynamic-shape cases correctly without external libraries:

import numpy as np
import tvm
from tvm import relay
from tvm.relay import create_executor, Any

# Same dynamic-shape setup, but with rank-3 inputs for batch_matmul.
A = relay.var("A", shape=[1, Any(), Any()], dtype="float32")
B = relay.var("B", shape=[1, Any(), Any()], dtype="float32")

# nn.batch_matmul(X, Y) computes X @ Y^T over the last two axes,
# so batch_matmul(A, B^T) == A @ B.
C = relay.nn.batch_matmul(A, relay.transpose(B, axes=[0, 2, 1]))
f = relay.Function([A, B], C)
mod = tvm.IRModule.from_expr(f)

# The native "llvm" target works here.
for target in ["llvm"]:
    dev = tvm.device(target, 0)
    executor = create_executor(kind="vm", mod=mod, device=dev, target=target)
    a = np.random.uniform(size=[1, 10, 10]).astype("float32")
    b = np.random.uniform(size=[1, 10, 10]).astype("float32")
    res = executor.evaluate()(a, b).asnumpy()
    print(np.sum(res))
    ref = np.matmul(a, b)
    print(np.sum(ref))
    np.testing.assert_allclose(res, ref, rtol=1e-5)
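
Editor's note: since the batch_matmul path works, a possible workaround (a minimal sketch, not part of the original report and not verified against every TVM version) is to express the dynamically shaped dense as a rank-3 batch_matmul and squeeze the result back to rank 2:

import numpy as np
import tvm
from tvm import relay
from tvm.relay import create_executor, Any

A = relay.var("A", shape=[Any(), Any()], dtype="float32")
B = relay.var("B", shape=[Any(), Any()], dtype="float32")

# Lift both operands to rank 3: batch_matmul(X, Y) computes X @ Y^T
# over the last two axes, matching dense(A, W) = A @ W^T.
A3 = relay.expand_dims(A, axis=0)                    # [1, M, K]
W3 = relay.expand_dims(relay.transpose(B), axis=0)   # [1, N, K]
C = relay.squeeze(relay.nn.batch_matmul(A3, W3), axis=[0])  # [M, N]

f = relay.Function([A, B], C)
mod = tvm.IRModule.from_expr(f)
dev = tvm.device("llvm", 0)
executor = create_executor(kind="vm", mod=mod, device=dev, target="llvm")
a = np.random.uniform(size=[10, 10]).astype("float32")
b = np.random.uniform(size=[10, 10]).astype("float32")
res = executor.evaluate()(a, b).asnumpy()
np.testing.assert_allclose(res, np.matmul(a, b), rtol=1e-5)

This sidesteps the broken nn.dense schedule entirely rather than fixing it; graphs that rely on dense-specific fusions or library offloads may behave differently.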
areusch added the needs-triage label on Oct 19, 2022
Lunderberg added the relay:op label and removed the needs-triage label on Oct 28, 2022
Lunderberg (Contributor) commented:

Confirmed that the test case still reproduces the error on main.

tqchen closed this as completed on Sep 20, 2024