🐛 Bug
Hello, I'm trying to use NNFusion to generate CUDA code for a transformer-based model from Hugging Face. After exporting the model to ONNX format with optimum-cli, I cannot pass the ONNX model to NNFusion successfully. I'm not sure whether I exported the model incorrectly or whether NNFusion cannot handle this kind of transformer-based model.
To Reproduce
Steps to reproduce the behavior:
1. Use optimum-cli to convert the Hugging Face checkpoint to an ONNX model:
```
optimum-cli export onnx --model google/vit-base-patch16-224-in21k vit-base-patch16-224-in21k_onnx_32/ --batch_size 32
```
(the `--batch_size` arg does not seem to change the ONNX output when I visualize both with Netron)

2. Use nnfusion to convert (ERROR):
```
nnfusion ./vit-base-patch16-224-in21k_onnx_32/model.onnx -f onnx -p "batch_size:32;num_channels:3;height:224;width:224"
```
and the error output message:
Expected behavior
CUDA code is generated successfully and runs correctly.
Additional context
I use the Docker image from your repo; everything works fine when I use the example models that NNFusion provides.