
🐛 [Bug] Error: "Conversion of function torch._ops.aten.aten::cumsum not currently supported!" on CUDA 11.8 and 12.1 #3214

Open
zewenli98 opened this issue Oct 6, 2024 · 1 comment
zewenli98 commented Oct 6, 2024

Bug Description

In the CI tests, all cumsum tests failed on CUDA 11.8 and 12.1, but passed on CUDA 12.4. The errors look like:

FAILED conversion/test_cumsum_aten.py::TestCumsumConverter::test_cumsum_1D_0 - torch_tensorrt.dynamo.conversion._TRTInterpreter.UnsupportedOperatorException: Conversion of function torch._ops.aten.aten::cumsum not currently supported!
FAILED conversion/test_cumsum_aten.py::TestCumsumConverter::test_cumsum_1D_1 - torch_tensorrt.dynamo.conversion._TRTInterpreter.UnsupportedOperatorException: Conversion of function torch._ops.aten.aten::cumsum not currently supported!
FAILED conversion/test_cumsum_aten.py::TestCumsumConverter::test_cumsum_1D_2 - torch_tensorrt.dynamo.conversion._TRTInterpreter.UnsupportedOperatorException: Conversion of function torch._ops.aten.aten::cumsum not currently supported!
...

The tests also pass on my local machine with an RTX 4080 + CUDA 12.2.
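
For reference, a minimal repro sketch of the failing pattern (the module, input shape, and compile settings here are illustrative assumptions, not taken from the CI tests):

```python
import torch
import torch_tensorrt

# Hypothetical minimal module exercising aten::cumsum; shape and dim are illustrative.
class Cumsum1D(torch.nn.Module):
    def forward(self, x):
        return torch.cumsum(x, dim=0)

model = Cumsum1D().eval().cuda()
inputs = [torch.randn(3, device="cuda")]

# Compiling through the dynamo frontend is where the converter raised
# UnsupportedOperatorException on CUDA 11.8 / 12.1.
trt_model = torch_tensorrt.compile(model, ir="dynamo", inputs=inputs)
print(trt_model(*inputs))
```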

@zewenli98 zewenli98 added the bug Something isn't working label Oct 6, 2024
@zewenli98 zewenli98 self-assigned this Nov 7, 2024
zewenli98 commented
The error was due to oversubscription of resources when the tests run in parallel; lowering the parallel worker count on CI fixes it.
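
For example, a sketch of capping the worker count (assuming the suite uses pytest-xdist; the actual CI configuration isn't shown in this issue):

```python
import pytest

# Hypothetical: run the converter tests with a fixed, small pytest-xdist
# worker count (equivalent to `pytest conversion/ -n 4`) instead of `-n auto`,
# so concurrent TensorRT engine builds don't oversubscribe the GPU.
raise SystemExit(pytest.main(["conversion/", "-n", "4"]))
```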
