Replies: 2 comments
-
Yes, that's expected. Changing the BPE vocab size only alters the number of parameters of the last layer, so the difference is too small to be reflected in GPU memory usage.
Best, Jin
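As a rough back-of-envelope check (a minimal sketch, not icefall's actual model code; the 512-dim decoder and joiner sizes below are assumptions, roughly in line with common recipe defaults), you can estimate how many parameters actually depend on the vocab size:

```python
# In a transducer model, only the decoder embedding (vocab_size x decoder_dim)
# and the joiner's output projection (joiner_dim x vocab_size) scale with the
# BPE vocab size. Everything else (encoder, decoder body, joiner body) is fixed.

def vocab_dependent_params(vocab_size: int,
                           decoder_dim: int = 512,   # assumed dimension
                           joiner_dim: int = 512) -> int:  # assumed dimension
    embedding = vocab_size * decoder_dim                   # decoder input embedding
    output_proj = joiner_dim * vocab_size + vocab_size     # output weights + bias
    return embedding + output_proj

delta = vocab_dependent_params(500) - vocab_dependent_params(250)
print(f"parameter difference: {delta:,}")      # 256,250 params
print(f"~{delta * 4 / 1e6:.1f} MB in fp32")    # ~1.0 MB
```

A ~1 MB difference in weights is invisible next to ~9.5 GB of GPU usage, which is dominated by activations for a given max-duration.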
Answer selected by iggygeek
-
Yes, perfectly clear.
-
Hello! I compared two runs of the Librispeech zipformer2 recipe (on my dataset), one with BPE vocab size = 500 and one with 250. However, the GPU memory usage is ~9.5 GB for both runs. Is that expected?
Other parameters, like max-duration, are the same...