Spacy and Berkeley parser Multi-processing #85
I also tried running the spaCy pipeline on the GPU by adding the code below, but it does not seem to give much of a boost:

```python
import spacy
import torch

spacy.prefer_gpu()
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
```

(Note that `spacy.prefer_gpu()` only takes effect if it is called before `spacy.load()`, and it returns a bool you can check to confirm the GPU was actually activated.)
The GPU did not improve performance at all; I guess that's because the text still has to go through spaCy's own pipeline components first.
If you create two `nlp` pipelines in spaCy, one normal pipeline and one with benepar added, then when you process sentences the normal pipeline runs all of its components while the benepar pipeline only needs its parser component; this gives you roughly 2x processing speed.
If you then run the two pipelines independently in separate threads or processes, I guess the speed could improve further, to about 4x over the original. However, I do not know how to run spaCy in a multi-threaded setup.
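The parallel split described above can be sketched with the standard-library `multiprocessing` module. The `parse` function below is a hypothetical stand-in (it just tokenizes by whitespace); in the real setup each worker process would load its own spaCy + benepar pipeline once, e.g. in a `Pool` initializer, because loaded pipelines are expensive to pickle and should not be shared across processes:

```python
from multiprocessing import Pool

def parse(sentence):
    # Hypothetical stand-in for "run the pipeline on one sentence".
    # A real worker would call its process-local nlp(sentence) here.
    return sentence.split()

def parse_corpus(sentences, workers=2):
    # Split the corpus across worker processes; pool.map preserves
    # the input order of the results.
    with Pool(processes=workers) as pool:
        return pool.map(parse, sentences, chunksize=8)

if __name__ == "__main__":
    corpus = ["This is a sentence .", "Here is another one ."]
    print(parse_corpus(corpus))
```

This is only a sketch of the chunk-and-map pattern, not benepar's own API; the speedup you actually get depends on how much of the per-sentence cost is parser work versus pipeline overhead.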
I'm trying to make a multi-processing spaCy pipeline work with the Berkeley parser, as I assume it will boost performance. How can I get it to work? I tried the suggestion from here, but it didn't work for me.
Error message