TypeError: 'weights_only' is an invalid keyword argument for Unpickler() #24
Comments
Greetings @Wan727, my first suggestion is to upgrade torch. If that doesn't work (or isn't an option), you could also try running it using the docker container (which can also be run using singularity).
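For context, the weights_only keyword argument was only added to torch.load in newer PyTorch releases, which is why older installs forward it to the pickle Unpickler and fail. A minimal, purely illustrative sketch of a version-tolerant load (the helper name is hypothetical, not part of scimilarity) could look like:

import inspect
import torch

def load_checkpoint(filename, use_gpu=False):
    # Hypothetical helper (not part of scimilarity): only pass weights_only
    # when this torch version actually exposes it as a torch.load parameter.
    kwargs = {}
    if "weights_only" in inspect.signature(torch.load).parameters:
        kwargs["weights_only"] = False
    if not use_gpu:
        kwargs["map_location"] = torch.device("cpu")
    return torch.load(filename, **kwargs)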
Thank you for your answer. Because I couldn't change the torch version, I instead modified the "weights_only" parameter in the original function in nn_models.py. But a new error was reported when applying the trained model to my scRNA data, and I don't know whether that change is the cause:
RecursionError Traceback (most recent call last)
File /home/SCimilarity/scimilarity/src/scimilarity/cell_embedding.py:156, in CellEmbedding.get_embeddings(self, X, num_cells, buffer_size)
RecursionError: maximum recursion depth exceeded while calling a Python object
Hello. Are you able to install conda and put scimilarity in its own environment? I think that will allow you to have the dependencies at the updated versions.
This is what I did: I followed the installation tutorial at https://genentech.github.io/scimilarity/install.html, cloned the repository with git clone, and installed the dependencies in my own conda environment.
That error looks like a numpy error.
The numpy version I use is numpy 1.26.4. May I ask which version of numpy the developers are using, and the corresponding dependencies? I hope the developers can provide a requirements.txt file. I also tried both my own data and the h5ad data you provided, but the same error occurred, which left me confused. Maybe it is a version problem, so I urgently need a working environment. By the way, my CUDA version is 11.4; I don't know if it is suitable. It's really a headache.
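(One quick way to compare environments is to print the versions in play; the package list below is only a guess at the dependencies relevant to this thread.)

import importlib

# Report the installed version of each package mentioned in this thread.
for name in ["numpy", "torch", "scanpy", "anndata", "scimilarity"]:
    try:
        module = importlib.import_module(name)
        print(name, getattr(module, "__version__", "unknown"))
    except ImportError:
        print(name, "not installed")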
The requirements are:
numpy 1.26.4 will work but your environment is strange as a new install should install numpy 2.0.2. Here is a yaml I've used to create a conda environment:
Then you can install scimilarity in it via pip:
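(Once installed, a quick smoke test along the lines of the tutorials, with a placeholder path and an import path that may differ across versions, is roughly:)

from scimilarity.cell_query import CellQuery  # import path may differ across versions

model_path = "/path/to/model"  # placeholder: the downloaded model directory
cq = CellQuery(model_path)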
Thank you for your answer. It does seem to have been numpy; it now runs successfully. I'd also like to ask: when will the tutorial on training be made public? I can't wait.
First of all, I am very excited to learn about such a powerful model. On my first attempt to execute
cq = CellQuery(model_path)
I got the following traceback:
cell_query.py:45, in CellQuery.__init__(self, model_path, use_gpu, filenames, metadata_tiledb_uri, embedding_tiledb_uri, load_knn)
42 import pandas as pd
43 import tiledb
---> 45 super().__init__(
46 model_path=model_path,
47 use_gpu=use_gpu,
48 )
50 self.cellsearch_path = os.path.join(model_path, "cellsearch")
51 os.makedirs(self.cellsearch_path, exist_ok=True)
cell_search_knn.py:28, in CellSearchKNN.__init__(self, model_path, use_gpu)
9 def __init__(
10 self,
11 model_path: str,
12 use_gpu: bool = False,
13 ):
14 """Constructor.
15
16 Parameters
(...)
25 >>> cs = CellSearchKNN(model_path="/opt/data/model")
26 """
---> 28 super().__init__(
29 model_path=model_path,
30 use_gpu=use_gpu,
31 )
33 self.knn = None
34 self.safelist = None
cell_embedding.py:67, in CellEmbedding.__init__(self, model_path, use_gpu)
65 if self.use_gpu is True:
66 self.model.cuda()
---> 67 self.model.load_state(self.filenames["model"])
68 self.model.eval()
70 self.int2label = pd.read_csv(
71 os.path.join(self.model_path, "label_ints.csv"), index_col=0
72 )["0"].to_dict()
nn_models.py:116, in Encoder.load_state(self, filename, use_gpu)
105 """Load model state.
106
107 Parameters
(...)
112 Boolean indicating whether or not to use GPUs.
113 """
115 if not use_gpu:
--> 116 ckpt = torch.load(
117 filename, map_location=torch.device("cpu"), weights_only=False
118 )
119 else:
120 ckpt = torch.load(filename, weights_only=False)
serialization.py:712, in load(f, map_location, pickle_module, **pickle_load_args)
710 opened_file.seek(orig_position)
711 return torch.jit.load(opened_file)
--> 712 return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
713 return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
torch/serialization.py:1047, in _load(zip_file, map_location, pickle_module, pickle_file, **pickle_load_args)
1044 # Load the data (which may in turn use persistent_load to load tensors)
1045 data_file = io.BytesIO(zip_file.get_record(pickle_file))
-> 1047 unpickler = UnpicklerWrapper(data_file, **pickle_load_args)
1048 unpickler.persistent_load = persistent_load
1049 result = unpickler.load()
TypeError: 'weights_only' is an invalid keyword argument for Unpickler()
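For what it's worth, the failing frame above shows torch forwarding the unrecognized weights_only argument into the pickle Unpickler, which is what raises the TypeError on older torch versions. The mechanism can be reproduced without torch at all (the exact message text may vary slightly by Python version):

import io
import pickle

buf = io.BytesIO(pickle.dumps({"a": 1}))
try:
    # pickle.Unpickler rejects unknown keyword arguments, just like the
    # UnpicklerWrapper subclass in the traceback above.
    pickle.Unpickler(buf, weights_only=False)
except TypeError as err:
    print(err)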