High memory usage when deserializing Docs #992
Comments
Found the reason for this behavior; it looks like it is not a spaCy issue.
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
I have quite a lot of data that I want to process with spaCy and then serialize for later use.
Since I haven't found any documentation on how to save and reload processed Doc objects, I'm using the approach from #636, which relies on to_bytes and from_bytes.
However, although my pickle-serialized object (a list of byte-encoded Docs) is only around 70 MB, deserializing it (turning each element of the list back into a Doc with Doc.from_bytes(...)) uses a lot of memory (100% of the 6 GB on my virtual machine).
Any idea why that is happening?
Am I doing something wrong with the vocab? Should I reload the one I also serialized (nlp.vocab.dump) or should I use a fresh one (spacy.load('en')) when creating the new Doc objects?
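For what it's worth, here is a minimal sketch of the to_bytes/from_bytes round trip described above, written against a more recent spaCy API than the one this thread used (the model name en_core_web_sm and the file path are placeholders). It reuses the same nlp.vocab for every restored Doc rather than creating a fresh vocab per document, which is one plausible way to keep memory down; the thread itself doesn't confirm that this was the cause.

```python
import pickle

import spacy
from spacy.tokens import Doc

# Placeholder model name; the original report used spacy.load('en').
nlp = spacy.load("en_core_web_sm")

texts = ["First document to process.", "Second document to process."]
docs = [nlp(text) for text in texts]

# Serialize: pickle a plain list of byte strings, one per Doc.
with open("docs.pkl", "wb") as f:
    pickle.dump([doc.to_bytes() for doc in docs], f)

# Deserialize: rebuild each Doc against the SAME shared vocab,
# so lexeme data is not duplicated for every restored document.
with open("docs.pkl", "rb") as f:
    raw_docs = pickle.load(f)
restored = [Doc(nlp.vocab).from_bytes(raw) for raw in raw_docs]
```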