Concerning the code efficiency #1
Yes, the training process will take a long time. WikiKG90Mv2 is a huge dataset and our implementation is a single-GPU version, so a multi-GPU implementation is needed to accelerate network tuning and the experiment process.
Hi, I am reproducing the results of the FTAI entity typing model and I am very interested in the expected time needed to achieve the reported result: the training process seems to take far too long and does not reach the reported performance on my end. Many thanks!