Error when running rsc.pp.highly_variable_genes #178
Comments
I have never seen this error before. I also don't think this is a bug within rapids-singlecell; it looks more like a cupy bug.
Here is a quick fix for the affected lines. Just copy this in:
I modified the code according to your suggestion, replacing the original lines with the suggested ones. However, it raised another error:
By the way, the cupy version is 13.1.0 and the rapids version is 24.04.00.
I have never seen this. I also can't reproduce it at the moment; I might be able to look into it on Monday. I would also like to know which rsc version you are using and what the type of your matrix is. It would also be helpful if you used the issue template and gave a reproducible code snippet.
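For reference, one way to gather the details asked for above (a hypothetical snippet, not taken from the thread; the file path is a placeholder for the user's own data):

from importlib.metadata import version
import cupy as cp
import anndata

# Load the data the same way as in the failing run (placeholder path).
adata = anndata.read_h5ad("/path/to/adata.h5ad")

# Package versions relevant to the report.
print("rapids-singlecell:", version("rapids-singlecell"))
print("cupy:", cp.__version__)

# Type of the matrix, e.g. scipy.sparse.csr_matrix on the CPU or
# cupyx.scipy.sparse.csr_matrix after rsc.get.anndata_to_GPU(adata).
print("matrix type:", type(adata.X))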
The rsc version is 0.10.2
It also raised an error earlier when running rsc.pp.filter_genes(adata, min_count=5).
I can't reproduce the error. It looks like an error with cupy to me; I think you have an issue there.
I get a very similar error with a fresh install when running:

adata = anndata.read_h5ad(f"/path/to/adata.h5ad")
rsc.get.anndata_to_GPU(adata)
rsc.pp.neighbors(adata, n_neighbors=30, use_rep='X_scVI')

which throws:

/oak/stanford/groups/pritch/users/emma/miniforge3/envs/rapids_singlecell/lib/python3.11/site-packages/cupy/cuda/compiler.py:233: PerformanceWarning: Jitify is performing a one-time only warm-up to populate the persistent cache, this may take a few seconds and will be improved in a future release...
  jitify._init_module()
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[13], line 1
----> 1 rsc.pp.neighbors(adata, n_neighbors=30, use_rep='X_scVI')
2 rsc.tl.umap(adata)
File /oak/stanford/groups/pritch/users/emma/miniforge3/envs/rapids_singlecell/lib/python3.11/site-packages/rapids_singlecell/preprocessing/_neighbors.py:328, in neighbors(adata, n_neighbors, n_pcs, use_rep, random_state, algorithm, metric, metric_kwds, key_added, copy)
317 X_conn = cp.empty((n_obs, 1), dtype=np.float32)
318 connectivities = fuzzy_simplicial_set(
319 X_conn,
320 n_neighbors,
(...)
326 local_connectivity=local_connectivity,
327 )
--> 328 connectivities = connectivities.tocsr().get()
329 distances = distances.get()
330 if key_added is None:
File /oak/stanford/groups/pritch/users/emma/miniforge3/envs/rapids_singlecell/lib/python3.11/site-packages/cupyx/scipy/sparse/_coo.py:534, in coo_matrix.tocsr(self, copy)
531 # copy is silently ignored (in line with SciPy) because both
532 # sum_duplicates and coosort change the underlying data
533 x = self.copy()
--> 534 x.sum_duplicates()
535 cusparse.coosort(x, 'r')
536 x = cusparse.coo2csr(x)
File /oak/stanford/groups/pritch/users/emma/miniforge3/envs/rapids_singlecell/lib/python3.11/site-packages/cupyx/scipy/sparse/_coo.py:405, in coo_matrix.sum_duplicates(self)
402 src_col = self.col[order]
403 diff = self._sum_duplicates_diff(src_row, src_col, size=self.row.size)
--> 405 if diff[1:].all():
406 # All elements have different indices.
407 data = src_data
408 row = src_row
File cupy/_core/core.pyx:1173, in cupy._core.core._ndarray_base.all()
File cupy/_core/core.pyx:1175, in cupy._core.core._ndarray_base.all()
File cupy/_core/_routines_logic.pyx:8, in cupy._core._routines_logic._ndarray_all()
File cupy/_core/_reduction.pyx:618, in cupy._core._reduction._SimpleReductionKernel.__call__()
File cupy/_core/_reduction.pyx:370, in cupy._core._reduction._AbstractReductionKernel._call()
File cupy/_core/_cub_reduction.pyx:689, in cupy._core._cub_reduction._try_to_call_cub_reduction()
File cupy/_core/_cub_reduction.pyx:526, in cupy._core._cub_reduction._launch_cub()
File cupy/_core/_cub_reduction.pyx:461, in cupy._core._cub_reduction._cub_two_pass_launch()
File cupy/_util.pyx:64, in cupy._util.memoize.decorator.ret()
File cupy/_core/_cub_reduction.pyx:240, in cupy._core._cub_reduction._SimpleCubReductionKernel_get_cached_function()
File cupy/_core/_cub_reduction.pyx:223, in cupy._core._cub_reduction._create_cub_reduction_function()
File cupy/_core/core.pyx:2258, in cupy._core.core.compile_with_cache()
File /oak/stanford/groups/pritch/users/emma/miniforge3/envs/rapids_singlecell/lib/python3.11/site-packages/cupy/cuda/compiler.py:484, in _compile_module_with_cache(source, options, arch, cache_dir, extra_source, backend, enable_cooperative_groups, name_expressions, log_stream, jitify)
480 return _compile_with_cache_hip(
481 source, options, arch, cache_dir, extra_source, backend,
482 name_expressions, log_stream, cache_in_memory)
483 else:
--> 484 return _compile_with_cache_cuda(
485 source, options, arch, cache_dir, extra_source, backend,
486 enable_cooperative_groups, name_expressions, log_stream,
487 cache_in_memory, jitify)
File /oak/stanford/groups/pritch/users/emma/miniforge3/envs/rapids_singlecell/lib/python3.11/site-packages/cupy/cuda/compiler.py:562, in _compile_with_cache_cuda(source, options, arch, cache_dir, extra_source, backend, enable_cooperative_groups, name_expressions, log_stream, cache_in_memory, jitify)
560 if backend == 'nvrtc':
561 cu_name = '' if cache_in_memory else name + '.cu'
--> 562 ptx, mapping = compile_using_nvrtc(
563 source, options, arch, cu_name, name_expressions,
564 log_stream, cache_in_memory, jitify)
565 if _is_cudadevrt_needed(options):
566 # for separate compilation
567 ls = function.LinkState()
File /oak/stanford/groups/pritch/users/emma/miniforge3/envs/rapids_singlecell/lib/python3.11/site-packages/cupy/cuda/compiler.py:319, in compile_using_nvrtc(source, options, arch, filename, name_expressions, log_stream, cache_in_memory, jitify)
316 with open(cu_path, 'w') as cu_file:
317 cu_file.write(source)
--> 319 return _compile(source, options, cu_path,
320 name_expressions, log_stream, jitify)
321 else:
322 cu_path = '' if not jitify else filename
File /oak/stanford/groups/pritch/users/emma/miniforge3/envs/rapids_singlecell/lib/python3.11/site-packages/cupy/cuda/compiler.py:290, in compile_using_nvrtc.<locals>._compile(source, options, cu_path, name_expressions, log_stream, jitify)
287 method = 'ptx'
289 if jitify:
--> 290 options, headers, include_names = _jitify_prep(
291 source, options, cu_path)
292 else:
293 headers = include_names = ()
File /oak/stanford/groups/pritch/users/emma/miniforge3/envs/rapids_singlecell/lib/python3.11/site-packages/cupy/cuda/compiler.py:233, in _jitify_prep(source, options, cu_path)
231 if not _jitify_header_source_map_populated:
232 from cupy._core import core
--> 233 jitify._init_module()
234 jitify._add_sources(core._get_header_source_map())
235 _jitify_header_source_map_populated = True
File cupy/cuda/jitify.pyx:220, in cupy.cuda.jitify._init_module()
File cupy/cuda/jitify.pyx:244, in cupy.cuda.jitify._init_module()
File cupy/cuda/jitify.pyx:217, in cupy.cuda.jitify._init_cupy_headers()
File cupy/cuda/jitify.pyx:200, in cupy.cuda.jitify._init_cupy_headers_from_scratch()
File cupy/cuda/jitify.pyx:275, in cupy.cuda.jitify.jitify()
RuntimeError: Runtime compilation failed

Environment info from scanpy.logging:
-----
anndata 0.10.8
scanpy 1.10.2
-----
PIL 10.3.0
anyio NA
arrow 1.3.0
asttokens NA
attr 23.2.0
attrs 23.2.0
babel 2.14.0
brotli 1.1.0
cachetools 5.3.3
certifi 2024.06.02
cffi 1.16.0
charset_normalizer 3.3.2
cloudpickle 3.0.0
colorama 0.4.6
comm 0.2.2
cuda 11.8.3
cudf 24.06.01
cuml 24.06.01
cupy 13.2.0
cupy_backends NA
cupyx NA
cycler 0.12.1
cython_runtime NA
cytoolz 0.12.3
dask 2024.5.1
dask_cudf 24.06.01
dask_expr 1.1.1
dateutil 2.9.0
debugpy 1.8.2
decorator 5.1.1
decoupler 1.7.0
defusedxml 0.7.1
executing 2.0.1
fastjsonschema NA
fastrlock 0.8.2
fqdn NA
fsspec 2024.6.1
h5py 3.11.0
idna 3.7
igraph 0.11.5
ipykernel 6.29.5
isoduration NA
jedi 0.19.1
jinja2 3.1.4
joblib 1.4.2
json5 0.9.25
jsonpointer 3.0.0
jsonschema 4.22.0
jsonschema_specifications NA
jupyter_events 0.10.0
jupyter_server 2.14.1
jupyterlab_server 2.27.2
kiwisolver 1.4.5
legacy_api_wrap NA
leidenalg 0.10.2
llvmlite 0.43.0
louvain 0.8.2
lz4 4.3.3
markupsafe 2.1.5
matplotlib 3.8.4
mpl_toolkits NA
natsort 8.4.0
nbformat 5.10.4
numba 0.60.0
numpy 1.26.4
nvtx NA
overrides NA
packaging 24.1
pandas 2.2.2
parso 0.8.4
patsy 0.5.6
pickleshare 0.7.5
pkg_resources NA
platformdirs 4.2.2
prometheus_client NA
prompt_toolkit 3.0.47
psutil 6.0.0
ptxcompiler 0.8.1
pure_eval 0.2.2
pyarrow 16.1.0
pycparser 2.22
pydev_ipython NA
pydevconsole NA
pydevd 2.9.5
pydevd_file_utils NA
pydevd_plugins NA
pydevd_tracing NA
pygments 2.18.0
pylibraft 24.06.00
pyparsing 3.1.2
pythonjsonlogger NA
pytz 2024.1
rapids_dask_dependency NA
rapids_singlecell 0.10.6
referencing NA
requests 2.32.3
rfc3339_validator 0.1.4
rfc3986_validator 0.1.1
rmm 24.06.00
rpds NA
scipy 1.14.0
send2trash NA
session_info 1.0.0
six 1.16.0
sklearn 1.5.1
sniffio 1.3.1
socks 1.7.1
stack_data 0.6.2
statsmodels 0.14.2
tblib 3.0.0
texttable 1.7.0
threadpoolctl 3.5.0
tlz 0.12.3
toolz 0.12.1
tornado 6.4.1
tqdm 4.66.4
traitlets 5.14.3
treelite 4.1.2
typing_extensions NA
uri_template NA
urllib3 1.26.19
wcwidth 0.2.13
webcolors 24.6.0
websocket 1.8.0
yaml 6.0.1
zipp NA
zmq 26.0.3
-----
IPython 8.26.0
jupyter_client 8.6.2
jupyter_core 5.7.2
jupyterlab 4.2.3
-----
Python 3.11.9 | packaged by conda-forge | (main, Apr 19 2024, 18:36:13) [GCC 12.3.0]
Linux-3.10.0-1160.118.1.el7.x86_64-x86_64-with-glibc2.17
-----
Session information updated at 2024-07-05 12:12

It does look like an issue with cupy to me.
Can you try this as a minimal reproducer, and if it crashes, please report it upstream.
Please feel free to tag me or link this issue.
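The exact snippet is not shown here; as a rough illustration only (not necessarily the snippet that was posted), a CuPy-only reproducer along these lines exercises the same reduction-compilation path that fails in both tracebacks:

import cupy as cp

# Hypothetical minimal reproducer: a boolean .all() reduction forces CuPy to
# JIT-compile a reduction kernel via NVRTC/jitify, which is exactly the step
# that raises "RuntimeError: Runtime compilation failed" above.
x = cp.ones(1000, dtype=cp.bool_)
print(bool(x.all()))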
Hi, rapids_singlecell raised an error when running rsc.pp.highly_variable_genes. Could you please take a look at it? Below is the error information. Thanks in advance!
Traceback (most recent call last):
File "", line 1, in
File "/lustre2/zeminz_pkuhpc/lingxinnan/00.setting/source_scripts/scVis_ToolKit.py", line 450, in rsc_harmony_pipel
ine
rsc.pp.highly_variable_genes(adata, n_top_genes=HVGs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lustre1/zeminz_pkuhpc/lingxinnan/anaconda3/envs/rapids_singlecell/lib/python3.11/site-packages/rapids_single
cell/preprocessing/_hvg.py", line 159, in highly_variable_genes
_check_gpu_X(X)
File "/lustre1/zeminz_pkuhpc/lingxinnan/anaconda3/envs/rapids_singlecell/lib/python3.11/site-packages/rapids_single
cell/preprocessing/_utils.py", line 92, in _check_gpu_X
if X.has_canonical_format or not require_cf:
^^^^^^^^^^^^^^^^^^^^^^
File "/lustre1/zeminz_pkuhpc/lingxinnan/anaconda3/envs/rapids_singlecell/lib/python3.11/site-packages/cupyx/scipy/s
parse/_compressed.py", line 676, in __get_has_canonical_format
self._has_canonical_format = bool(is_canonical.all())
^^^^^^^^^^^^^^^^^^
File "cupy/_core/core.pyx", line 1169, in cupy._core.core._ndarray_base.all
File "cupy/_core/core.pyx", line 1171, in cupy._core.core._ndarray_base.all
File "cupy/_core/_routines_logic.pyx", line 8, in cupy._core._routines_logic._ndarray_all
File "cupy/_core/_reduction.pyx", line 618, in cupy._core._reduction._SimpleReductionKernel.call
File "cupy/_core/_reduction.pyx", line 370, in cupy._core._reduction._AbstractReductionKernel._call
File "cupy/_core/_cub_reduction.pyx", line 689, in cupy._core._cub_reduction._try_to_call_cub_reduction
File "cupy/_core/_cub_reduction.pyx", line 526, in cupy._core._cub_reduction._launch_cub
File "cupy/_core/_cub_reduction.pyx", line 461, in cupy._core._cub_reduction._cub_two_pass_launch
File "cupy/_core/_cub_reduction.pyx", line 240, in cupy._core._cub_reduction._SimpleCubReductionKernel_get_cached_f
unction
File "cupy/_core/_cub_reduction.pyx", line 223, in cupy._core._cub_reduction._create_cub_reduction_function
File "cupy/_core/core.pyx", line 2254, in cupy._core.core.compile_with_cache
File "/lustre1/zeminz_pkuhpc/lingxinnan/anaconda3/envs/rapids_singlecell/lib/python3.11/site-packages/cupy/cuda/com
piler.py", line 484, in _compile_module_with_cache
return _compile_with_cache_cuda(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lustre1/zeminz_pkuhpc/lingxinnan/anaconda3/envs/rapids_singlecell/lib/python3.11/site-packages/cupy/cuda/com
piler.py", line 562, in _compile_with_cache_cuda
ptx, mapping = compile_using_nvrtc(
^^^^^^^^^^^^^^^^^^^^
File "/lustre1/zeminz_pkuhpc/lingxinnan/anaconda3/envs/rapids_singlecell/lib/python3.11/site-packages/cupy/cuda/com
piler.py", line 319, in compile_using_nvrtc
return _compile(source, options, cu_path,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lustre1/zeminz_pkuhpc/lingxinnan/anaconda3/envs/rapids_singlecell/lib/python3.11/site-packages/cupy/cuda/com
piler.py", line 290, in _compile
options, headers, include_names = _jitify_prep(
^^^^^^^^^^^^^
File "/lustre1/zeminz_pkuhpc/lingxinnan/anaconda3/envs/rapids_singlecell/lib/python3.11/site-packages/cupy/cuda/com
piler.py", line 233, in _jitify_prep
jitify._init_module()
File "cupy/cuda/jitify.pyx", line 220, in cupy.cuda.jitify._init_module
File "cupy/cuda/jitify.pyx", line 244, in cupy.cuda.jitify._init_module
File "cupy/cuda/jitify.pyx", line 217, in cupy.cuda.jitify._init_cupy_headers
File "cupy/cuda/jitify.pyx", line 200, in cupy.cuda.jitify._init_cupy_headers_from_scratch
File "cupy/cuda/jitify.pyx", line 275, in cupy.cuda.jitify.jitify
RuntimeError: Runtime compilation failed
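For context, the failing frame is CuPy's canonical-format check rather than anything in the HVG logic itself: accessing has_canonical_format on a CuPy CSR matrix runs a boolean .all() reduction on the GPU, which requires CuPy to JIT-compile a kernel via NVRTC/jitify. A minimal sketch of that same path (hypothetical illustration, not code from the thread):

import cupy as cp
from cupyx.scipy.sparse import csr_matrix

# Accessing has_canonical_format triggers the boolean .all() reduction seen in
# cupyx/scipy/sparse/_compressed.py above, and with it the NVRTC/jitify kernel
# compilation that raises "RuntimeError: Runtime compilation failed" here.
X = csr_matrix(cp.random.rand(5, 4, dtype=cp.float32))
print(X.has_canonical_format)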