[FastTransformer v3.0/Pytorch] FasterTransformer v3.0 decoding doesn't work with small vocab_size
upczww · Nov 6, 2020
Related to Model/Framework(s)
FasterTransformer/v3.0
Describe the bug
Ran FasterTransformer decoding under FP32 on PyTorch with:
where vocab_size is 3153 instead of the original 31538; it raised an error:
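The exact command and error output did not survive extraction. As a hypothetical reconstruction only, assuming the v3.0 PyTorch decoding sample and its positional arguments, the failing run may have looked like:

```bash
# Hypothetical invocation, not the reporter's exact command; the script path
# and the argument order (batch, layers, seq_len, heads, head_size, beam, vocab)
# are assumptions. Only vocab_size=3153 is taken from the issue text.
python pytorch/decoding_sample.py 8 6 32 8 64 4 3153
```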
To Reproduce
Steps to reproduce the behavior:
1. Build with the PyTorch image nvcr.io/nvidia/pytorch:20.03-py3
2. Install opennmt-py:
3. Generate the GEMM config:
4. Run decoding_sample (a hedged sketch of steps 2-4 follows this list):
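The per-step commands were lost in extraction. Below is a minimal sketch of steps 2-4, assuming the build layout and tool arguments from the FasterTransformer v3.0 README; the decoding_gemm argument order, the cmake flags, and the decoding_sample invocation are assumptions, not the reporter's exact commands:

```bash
# Step 2: install OpenNMT-py inside the container
pip install opennmt-py

# Step 3: build the PyTorch op, then generate the GEMM config.
# decoding_gemm arguments are assumed to be:
#   <batch_size> <beam_width> <head_num> <size_per_head> <vocab_size> <seq_len> <memory_hidden_dim> <is_fp16>
# vocab_size 3153 is the value from the issue; the other values are illustrative.
# SM=70 targets V100; adjust for your GPU.
cd FasterTransformer/v3.0
mkdir -p build && cd build
cmake -DSM=70 -DCMAKE_BUILD_TYPE=Release -DBUILD_PYT=ON ..
make -j
./bin/decoding_gemm 8 4 8 64 3153 32 512 0

# Step 4: run the PyTorch decoding sample with the same shapes
# (script path and positional argument order are assumptions)
python ../sample/pytorch/decoding_sample.py 8 6 32 8 64 4 3153
```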
Expected behavior
The decoding sample worked with vocab_size=31538, as in the decoding demos; when I decreased the vocab_size, it raised an error.
Environment
Container version: nvcr.io/nvidia/pytorch:20.03-py3 (from the reproduction steps above)