
Pp parser memory fix #587

Merged: 6 commits into aiidateam:develop on Oct 15, 2020

Conversation

@eimrek (Member) commented on Oct 15, 2020

Hello!

This PR considerably reduces the memory usage of the pp.x parser. Before, all the retrieved files were loaded into memory and only then parsed. This quickly exhausted the available RAM and crashed our system for some of our use cases (e.g. when parsing many orbital cube files). Now each retrieved file is parsed right after it is read, and its raw content is freed before the next file is read.
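
A minimal sketch of the before/after pattern in plain Python, assuming simple `read_file` and `parse_file` callables; the names are illustrative and not the actual aiida-quantumespresso parser API:

```python
def parse_all_at_once(filenames, read_file, parse_file):
    """Before: read every retrieved file into memory, then parse.

    Peak memory is the sum of all raw file contents plus the parsed data.
    """
    raw = {name: read_file(name) for name in filenames}  # all raw contents held at once
    return {name: parse_file(content) for name, content in raw.items()}


def parse_one_by_one(filenames, read_file, parse_file):
    """After: parse each file right after it is read.

    Only one file's raw content is in memory at a time; it becomes
    collectable as soon as it has been parsed.
    """
    parsed = {}
    for name in filenames:
        content = read_file(name)           # raw data for this file only
        parsed[name] = parse_file(content)
        del content                         # drop the reference so it can be reclaimed
    return parsed
```

With N retrieved files of average raw size S, the first variant peaks at roughly N*S of raw data in memory, while the second peaks at roughly S.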

@sphuber (Contributor) left a comment

Thanks @eimrek. The change looks all good, but it would be good to add a comment as described in my review comment.

Review comment on aiida_quantumespresso/parsers/pp.py (resolved)
@eimrek eimrek requested a review from sphuber October 15, 2020 15:30
@sphuber sphuber merged commit 95586d1 into aiidateam:develop Oct 15, 2020
@sphuber (Contributor) commented on Oct 15, 2020

Thanks @eimrek !

sphuber pushed a commit that referenced this pull request Oct 21, 2020
Before, the content of all retrieved files was loaded into memory and
only then parsed. For big calculations this can quickly drive up the
RAM usage unnecessarily. By parsing the raw data as soon as it is read
and then deleting it from memory, the memory footprint is reduced
significantly.

Cherry-picked from 95586d1