According to the MaxTokenBucketizer documentation:

buffer_size – This restricts how many tokens are taken from prior DataPipe to bucketize

However, in the code (bucketbatcher.py#L277), the unit of buffer_size is samples, not tokens.
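For illustration, a minimal sketch of the behavior in question, assuming torchdata's IterableWrapper and the max_token_bucketize functional API; the sample strings and parameter values here are made up for demonstration:

```python
from torchdata.datapipes.iter import IterableWrapper

# Each sample's token count comes from len_fn (len by default),
# so a string's length serves as its token count in this sketch.
source = IterableWrapper(["a", "bb", "ccc", "dddd", "eeeee"])

# buffer_size=2 limits how many *samples* are held in the internal
# buffer at once, not how many tokens, which is the discrepancy
# this issue reports against the documented description.
batches = source.max_token_bucketize(max_token_count=5, buffer_size=2)

for batch in batches:
    print(batch)  # each batch's total token count is <= max_token_count
```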
Thanks for reporting it. Feel free to open a PR to fix the inline doc.
Linked commits: 63e5f2f, eaec62c, a8fd731, 6b56307 – Fix the document of buffer_size in max_token_bucketize (pytorch#834)

Summary: This PR fixes a documentation issue in bucketbatcher.py. Fixes pytorch#831. Pull Request resolved: pytorch#834. Reviewed By: NivekT. Differential Revision: D40430887. Pulled By: ejguan. fbshipit-source-id: e132a3a24e8d09815c36bba3ccd4ffaced7b17d4