
Memory-efficient posterior generation #263

Merged: 8 commits into dev, Oct 31, 2023

Conversation

sjfleming
Member

It has become apparent that something in the posterior generation process in v0.3.0 is gobbling up way too much memory, far more than previous versions. See #251 and #248.

Conceptually, in v2:

  • for each minibatch
    • compute the posterior
    • estimate the noise counts
    • keep only the noise counts

Conceptually, in v3:

  • for each minibatch
    • compute the posterior
    • keep a sparse representation in memory
  • save the full sparse posterior as an h5 file
  • use an "estimator" applied to the full sparse posterior

This refactor allows us to do a whole lot more. But it also involves computing and saving the full posterior, which was not attempted in v2. While this is perfectly doable (these posterior h5 files are usually less than 2GB), it needs to be done a bit more carefully.
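As a rough illustration of the v3 flow, here is a minimal sketch (not the actual CellBender code; the dataloader, the compute_sparse_posterior helper, and the h5 layout are hypothetical) of accumulating sparse posterior entries per minibatch and writing one sparse matrix to an h5 file at the end:

import h5py
import numpy as np
import scipy.sparse as sp

# Accumulate the (row, col, value) entries of the sparse posterior per minibatch.
rows, cols, vals = [], [], []
for minibatch in dataloader:                       # hypothetical dataloader
    r, c, v = compute_sparse_posterior(minibatch)  # hypothetical helper
    rows.append(r)
    cols.append(c)
    vals.append(v)

# Build one sparse matrix only after the loop is done.
posterior_coo = sp.coo_matrix(
    (np.concatenate(vals), (np.concatenate(rows), np.concatenate(cols)))
)

# Save the full sparse posterior to an h5 file; an "estimator" can be applied to it later.
with h5py.File('posterior.h5', 'w') as f:
    f.create_dataset('row', data=posterior_coo.row)
    f.create_dataset('col', data=posterior_coo.col)
    f.create_dataset('data', data=posterior_coo.data)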

I think extending python lists left objects around in memory (by creating references to them) that I did not intend to keep.

Adopting another strategy: keep python lists of (sparsified info as) torch tensors, append tensors to the lists each minibatch, and concatenate them once and for all at the end. All these tensors are cloned from the originals, detached, and kept in CPU memory.
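A minimal sketch of that pattern (the dataloader and per-minibatch computation are placeholders, not the actual posterior.py code):

import torch

# Accumulate per-minibatch results as detached CPU clones, then concatenate once.
noise_count_tensors = []

for minibatch in dataloader:              # hypothetical dataloader
    out = model_posterior(minibatch)      # hypothetical per-minibatch computation
    # clone + detach + move to CPU so no references to the autograd graph
    # (or to GPU memory) are kept alive between iterations
    noise_count_tensors.append(out.detach().cpu().clone())

# a single concatenation at the very end
all_noise_counts = torch.cat(noise_count_tensors, dim=0)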

Closes #248
Closes #251

@jg9zk

jg9zk commented Aug 29, 2023

This branch stopped memory from being used up during the for loop, but my job was still killed due to OOM sometime after. I'm running with 140 GB, which should be plenty.

@sjfleming
Member Author

@jg9zk thanks for reporting. Any chance you could post the last few lines of the log file?

@jg9zk

jg9zk commented Aug 29, 2023

cellbender:remove-background: Working on chunk (377/383)
cellbender:remove-background: Working on chunk (378/383)
cellbender:remove-background: Working on chunk (379/383)
cellbender:remove-background: Working on chunk (380/383)
cellbender:remove-background: Working on chunk (381/383)
cellbender:remove-background: Working on chunk (382/383)
cellbender:remove-background: Working on chunk (383/383)
Killed

@jg9zk

jg9zk commented Aug 29, 2023

The OOM seems to occur at line 549 or 550 of posterior.py in commit 6fd8c23 (noise_offset_dict creation).

@sjfleming
Member Author

I was able to reproduce that same behavior @jg9zk

sjfleming and others added 2 commits August 29, 2023 23:21
* Speed up MCKP _gene_chunk_iterator() by a factor of 100
@jg9zk

jg9zk commented Aug 31, 2023

I tried commit 7fd0ac and it completed! However, it looks like counts are being added to the count matrix instead of removed, but I'll open a separate issue about that.

@sjfleming sjfleming marked this pull request as ready for review October 20, 2023 18:28
@sjfleming sjfleming requested a review from mbabadi October 20, 2023 18:28
@sjfleming sjfleming merged commit 322971d into dev Oct 31, 2023
sjfleming added a commit that referenced this pull request Oct 31, 2023
* Add WDL input to set number of retries. (#247)

* Move hash computation so that it is recomputed on retry, and now-invalid checkpoint is not loaded. (#258)

* Bug fix for WDL using MTX input (#246)

* Memory-efficient posterior generation (#263)

* Fix posterior and estimator integer overflow bugs on Windows (#259)

* Move from setup.py to pyproject.toml (#240)

* Fix bugs with report generation across platforms (#302)

---------

Co-authored-by: kshakir <github@kshakir.org>
Co-authored-by: alecw <alecw@users.noreply.github.com>
Labels
enhancement New feature or improvement