Views take around as much memory as object #62
Comments
Thank you for pointing this out! It's surprising to me. I extensively memory-profiled when introducing the functionality about a year ago. Right now, I can only think of one thing that would cause this behavior, a result of AnnData's design not yet being mature enough. The other reason for views, besides saving memory (if they work properly), is that one should be able to write to the data matrix of the underlying object, especially in backed mode; that is, writing to a view's data matrix should modify the original data (see here). Anyways, this is a severe issue and I'll definitely fix it soon if it persists.
In the example I gave, I don't think there was anything in the …

Question about the syntax for views: what should happen in the following code?

```python
view = adata[adata.obs["total_counts"] > 500, :]
view.X[view.X != 0] = 0
print(view.X.sum())
print(adata[adata.obs["total_counts"] > 500, :].X.sum())
```

My assumption is that the change on the view should be replicated in the base object, i.e. both print statements should print 0.
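For context, plain NumPy already distinguishes these two cases: basic (slice) indexing returns a view that writes through to the base array, while boolean-mask indexing returns a copy, so in-place writes are silently lost. A minimal illustration (variable names are hypothetical, not from AnnData):

```python
import numpy as np

base = np.arange(6).reshape(2, 3)

# Slicing returns a view: writes propagate to the base array
v = base[0:1, :]
v[0, 0] = 100
assert base[0, 0] == 100

# Boolean-mask indexing returns a copy: writes do NOT propagate
mask = np.array([True, False])
c = base[mask, :]
c[0, 0] = -1
assert base[0, 0] == 100  # base is unchanged
```

This is why the question of what the two print statements should output is not trivial: if `adata[mask, :]` behaves like NumPy advanced indexing, the write to `view.X` would never reach `adata.X`.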
Observed the same when filtering an AnnData object. For demonstration purposes, here the same filter is applied over and over again:

```python
import numpy as np
import pandas as pd
import scanpy.api as sc
from pickle import dumps

mat = np.ones((10000, 5000))
obs = pd.DataFrame().assign(n_counts=range(10000))
adata = sc.AnnData(mat, obs)

for i in range(10):
    %time adata = adata[adata.obs['n_counts'] > 200, :]
    print("Object size: {}M".format(len(dumps(adata)) / 1e6))
```
Ok, something really shady is going on that I was not aware of. I'll need to take a deeper look at views of AnnData. Or, @Koncopd, do you have some bandwidth to shed some light on this?
I've been doing a little bit of digging on this, and have some suspicions about what's causing it.

First, a view of a view stores its parent view in `_adata_ref`:

```python
import numpy as np
import scanpy as sc

a = sc.AnnData(np.ones((2, 2)))
v1 = a[0:2, 0:2]
v2 = v1[0:2, 0:2]
v1.isview and v2.isview  # True
v2._adata_ref is v1      # True
```

Second, I think that each view can be getting a copy of the expression matrix. In particular, line 752 will make a copy when the array is accessed with a boolean array. Here's an example of the memory increasing:

```python
import numpy as np
import scanpy as sc

a = sc.AnnData(np.ones((5000, 5000)))
sc.logging.print_memory_usage()  # Memory usage: current 0.28 GB, difference +0.28 GB

# When a view is taken with a slice, no additional memory is used
v1 = a[0:5000, 0:5000]
sc.logging.print_memory_usage()  # Memory usage: current 0.28 GB, difference +0.00 GB
v2 = v1[0:5000, 0:5000]
sc.logging.print_memory_usage()  # Memory usage: current 0.28 GB, difference +0.00 GB

# Taken with a boolean array, memory use increases
v1 = a[np.ones(5000, dtype=bool), 0:5000]
sc.logging.print_memory_usage()  # Memory usage: current 0.38 GB, difference +0.09 GB
v2 = v1[np.ones(5000, dtype=bool), 0:5000]
sc.logging.print_memory_usage()  # Memory usage: current 0.47 GB, difference +0.09 GB
```

My idea for how this could be solved is not to bother actually subsetting X until it's accessed, and just storing the subsetting index until then. This would be updated on every subsequent subset, and should always be an index into the "actual" AnnData.
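The lazy-subsetting idea above can be sketched as index composition: rather than copying X on every subset, keep a single index into the original array and fold each new subset into it. The helper below is hypothetical (not AnnData's implementation) and uses a simple, unoptimized strategy of materializing indices as integer arrays:

```python
import numpy as np

def compose_indices(outer, inner, length):
    """Resolve `inner` applied after `outer` into one index into the
    original axis of the given length. Both indices are materialized
    as integer positions (simple, not memory-optimal)."""
    base = np.arange(length)[outer]   # positions selected by the first subset
    return base[inner]                # subset those positions again

# A view of a view reduces to a single integer index into the base array:
x = np.arange(10) * 10
idx = compose_indices(slice(2, 8), np.array([0, 2, 4]), len(x))
assert np.array_equal(x[idx], x[2:8][np.array([0, 2, 4])])
```

With something like this, a chain of views would only ever hold small index arrays, and X would be materialized once, on access.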
I don't know whether we already discussed this on Slack, Isaac, but your idea is perfect. I think we also discussed on Slack that we don't want …
Great. I'm pretty sure what needs to be done is to write out how to resolve all the different indexing types (i.e. a slice of a slice should be a slice, an int array of a slice should be an int array, etc.).
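The slice-of-a-slice case can indeed stay a slice, which avoids ever materializing an index array for basic indexing. A sketch of composing two positive-step slices into one (hypothetical helper; negative steps are ignored for brevity):

```python
def compose_slices(outer, inner, length):
    """Compose `inner` applied after `outer` into one slice over an axis
    of the given length. Assumes positive steps."""
    o_start, o_stop, o_step = outer.indices(length)
    outer_len = max(0, (o_stop - o_start + o_step - 1) // o_step)
    i_start, i_stop, i_step = inner.indices(outer_len)
    # Map the inner slice's positions back onto the original axis
    return slice(o_start + i_start * o_step,
                 o_start + i_stop * o_step,
                 o_step * i_step)

data = list(range(20))
s = compose_slices(slice(2, 18, 2), slice(1, 6, 2), len(data))
assert data[s] == data[2:18:2][1:6:2]
```

The other combinations (int array of a slice, bool array of an int array, etc.) would each get a similar resolution rule, all producing a single index into the base AnnData.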
I was playing around with some visualization on a large dataset when I noticed some surprisingly high memory usage. I think I've narrowed it down to unexpected memory growth from taking views.

My assumption here being: taking a view shouldn't cause noticeable growth in memory usage. I'm pretty sure it's not just how memory_profiler is counting objects, since top and Activity Monitor pick this up as well.