Crash in "SGBN Add Relu Graph" Test Case When All Tensor Dimensions Are the Same #122
Comments
I checked. Where can I look for documentation that matches the reality of the code?
https://github.com/NVIDIA/cudnn-frontend/blob/main/include/cudnn_frontend/plans.h#L84 is the line that throws the error. It calls https://github.com/NVIDIA/cudnn-frontend/blob/main/include/cudnn_frontend_Filters.h#L31, and there both the `from` and `to` sizes are 0.
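For readers following along, the filtering step referenced above has roughly this shape. The sketch below is a simplified, hypothetical illustration (not the library's actual implementation): candidate engine configs that match a predicate are moved from one list into another, so if no configs were generated in the first place, both lists end up with size 0 and the plan-building code upstream has nothing left to work with.

```cpp
#include <cstdio>
#include <functional>
#include <vector>

// Hypothetical sketch of an engine-config filter, for illustration only:
// entries matching the predicate are moved from `from` into `to`.
template <typename Config>
void filter(std::vector<Config> &from, std::vector<Config> &to,
            const std::function<bool(const Config &)> &pred) {
    for (auto it = from.begin(); it != from.end();) {
        if (pred(*it)) {
            to.push_back(*it);
            it = from.erase(it);
        } else {
            ++it;
        }
    }
}

int main() {
    std::vector<int> candidates;  // empty: no engine config was generated
    std::vector<int> rejected;
    filter<int>(candidates, rejected, [](const int &) { return false; });

    // Matches the symptom in the report: both lists have size 0, so the
    // caller has no plan candidate and has to report an error.
    std::printf("from=%zu to=%zu\n", candidates.size(), rejected.size());
    return 0;
}
```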
Hi @bokutotu, thanks for the question. In this sample, the channel count should be a multiple of 8 for half-precision input. Can you please check with a dim like the one sketched below? Please see the documentation here.
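As a concrete illustration of that constraint (the dims below are hypothetical examples, not the sample's exact values): with NCHW ordering, the second entry is the channel count, and that is the one that needs to be a multiple of 8 for FP16 input.

```cpp
#include <cassert>
#include <cstdint>

int main() {
    // Hypothetical NCHW dims used only to illustrate the constraint:
    // index 1 is the channel count and must be a multiple of 8 for FP16.
    const int64_t dims[4] = {8, 32, 16, 16};  // N=8, C=32, H=16, W=16
    assert(dims[1] % 8 == 0);                 // C = 32 satisfies the requirement
    return 0;
}
```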
For your question about the log file: this is used to set the file that the frontend log is written to (a hedged example of setting it is sketched below). I noticed there is a typo. -Anerudhan
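For completeness, a minimal sketch of turning frontend logging on, assuming the CUDNN_FRONTEND_LOG_INFO and CUDNN_FRONTEND_LOG_FILE environment variables described in the cudnn-frontend docs; in practice these are usually exported in the shell before running the test binary, but setting them from code before the first frontend call should behave the same way.

```cpp
#include <cstdlib>

int main() {
    // Assumed environment variables from the cudnn-frontend documentation:
    // LOG_INFO enables logging, LOG_FILE picks the destination
    // (a file name, or "stdout"/"stderr").
    setenv("CUDNN_FRONTEND_LOG_INFO", "1", /*overwrite=*/1);
    setenv("CUDNN_FRONTEND_LOG_FILE", "cudnn_fe.log", /*overwrite=*/1);

    // ... build and execute the cudnn-frontend graph here, then attach
    // the resulting log file to the issue ...
    return 0;
}
```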
Describe the bug
In the "SGBN Add Relu Graph" test case, the program crashes when all dimensions of the input tensor are set to the same number. The crash occurs during the batch-normalization step, even though this should be a valid configuration.
Expected behavior
The program should handle tensors with all dimensions being the same number without crashing, and the batch normalization process should complete successfully.
System Environment (please complete the following information):
API logs
Please attach API logs for both cudnn_frontend and cudnn_backend.
frontend
To Reproduce
Steps to reproduce the behavior:
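The concrete steps did not survive here, so the following is only a hedged outline of the failing configuration; the `run_sgbn_add_relu` helper is a hypothetical stand-in for the sample's actual driver, and the dims are illustrative.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Hypothetical stand-in for the "SGBN Add Relu Graph" driver; the real sample
// would create the X/scale/bias tensors plus the batchnorm, add, and relu
// nodes, then build and execute the plan for the given input dims.
void run_sgbn_add_relu(const std::vector<int64_t> &xDims) {
    std::cout << "SGBN+Add+ReLU with dims:";
    for (int64_t d : xDims) std::cout << ' ' << d;
    std::cout << '\n';
}

int main() {
    run_sgbn_add_relu({8, 32, 16, 16});  // distinct N, C, H, W
    run_sgbn_add_relu({8, 8, 8, 8});     // all dims identical: reported to crash
    return 0;
}
```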
Additional context
This issue seems to be related to how cudnn-frontend handles tensors with identical dimensions. It might be a bug in the cudnn-frontend library.