Training from scratch issue #161
Comments
Hi @urialon,
Thank you so much for your quick response.
I am using a Quadro M4000 GPU with 8 GB of memory. From the above lines, could you please tell me which one says that the GPU is out of memory? I tried to reduce the batch size, but it did not work.
I will consider your code2seq model, but I still want to study the code2vec one.
Long
The `OOM when allocating tensor` error means that the GPU is out of memory.
The model itself is very memory-hungry, because it has huge vocabularies,
and thus huge embedding matrices.
I think Google Colab may have freely available GPUs with more memory.
Best,
Uri
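
For a rough sense of why the vocabularies dominate GPU memory, here is a back-of-the-envelope sketch; the vocabulary sizes and the embedding dimension below are illustrative assumptions, not values read from the java14m dictionaries:

```python
# Rough estimate of the memory taken by code2vec-style embedding matrices.
# All sizes below are illustrative assumptions, not the actual java14m vocabularies.

BYTES_PER_FLOAT32 = 4

def embedding_bytes(vocab_size: int, embedding_dim: int) -> int:
    """Memory, in bytes, of one dense float32 embedding matrix."""
    return vocab_size * embedding_dim * BYTES_PER_FLOAT32

token_vocab = 1_300_000   # assumed subtoken vocabulary size
path_vocab = 900_000      # assumed AST-path vocabulary size
target_vocab = 260_000    # assumed target (method-name) vocabulary size
embedding_dim = 128       # assumed embedding size

total = (embedding_bytes(token_vocab, embedding_dim)
         + embedding_bytes(path_vocab, embedding_dim)
         + embedding_bytes(target_vocab, embedding_dim))

print(f"Embedding matrices alone: ~{total / 2**30:.1f} GiB of float32 parameters")
```

Under these assumptions the embeddings alone are on the order of a gigabyte, and gradients plus an adaptive optimizer's per-parameter state multiply that several times before activations are counted, which is roughly why an 8 GB card can run out of memory even with a small batch size.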
Thank you so much for your answer, Uri. I really appreciate that.
Best regards,
Long
Hello,
I would like to train the model from scratch with the java14m dataset. However, I encountered the issue below. Could you please help me solve this? Thank you so much in advance.
Best regards,
Long Ngo
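
For readers who want to reproduce the from-scratch training, here is a minimal sketch of the usual workflow, assuming a checkout of the code2vec repository with the java14m dataset prepared as its README describes; the script names, flags, and paths are assumptions taken from my reading of that README and may need adjusting:

```python
# Sketch of driving the code2vec from-scratch workflow from Python.
# Assumptions: this runs from the root of a code2vec checkout, the java14m
# dataset has been downloaded/extracted per the README, and the helper
# scripts keep their usual names. Adjust paths and names to your setup.
import subprocess

# Step 1 (assumed): extract AST paths and build the token/path/target
# dictionaries; preprocess.sh is configured by editing its variables.
subprocess.run(["bash", "preprocess.sh"], check=True)

# Step 2 (assumed): train; train.sh wraps an invocation roughly like this one,
# where the paths are illustrative placeholders.
subprocess.run(
    [
        "python3", "code2vec.py",
        "--data", "data/java14m/java14m",          # prefix of the preprocessed training files
        "--test", "data/java14m/java14m.val.c2v",  # validation set evaluated during training
        "--save", "models/java14m/saved_model",    # checkpoint prefix
    ],
    check=True,
)
```

If the GPU runs out of memory, as discussed above, the usual levers are reducing the batch size and capping the vocabulary sizes in the configuration, at some cost in accuracy.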