GetOrganelle aborts with error ("Killed") #264
Comments
Update: Meanwhile, the other subjobs have aborted as well with the same error message.
Thanks for reaching out with the details. I agree it might be a RAM issue. It is hard to know the appropriate RAM before a run, especially for large datasets. Given this dataset and the log, a slightly larger RAM request (e.g. 40G) could help get past the index-making step. Unlike for plants, animal_mt usually uses much less memory in downstream steps, but again there is no guarantee that 40G will suffice for the downstream steps of all your jobs. If memory is not a big issue on your cluster, I recommend increasing the RAM request. Otherwise, see here for ways to reduce memory consumption. PS: On the GetOrganelle development side, I have learned that there is considerable room for RAM improvement without loss of efficiency, but I am not available to make a big release with this type of improvement in the near future.
Thank you for your fast response. Indeed, increasing the RAM did the trick.
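For future runs, peak memory can be measured directly so that the cluster request is sized empirically rather than guessed. A minimal sketch using GNU time, assuming it is installed at /usr/bin/time (the GNU binary, not the shell builtin) and reusing the variables from the command in this issue:

```shell
# GNU time's -v (verbose) output includes peak resident set size.
# Redirect stderr to a file so the report is kept after the run.
/usr/bin/time -v get_organelle_from_reads.py -1 "$read1" -2 "$read2" \
    -o "$dout/$subdout" -F "$seedDB" -t 14 2> time_report.txt

# The relevant line is "Maximum resident set size (kbytes)";
# set the memory request with some headroom above that value.
grep "Maximum resident set size" time_report.txt
```

This measures the whole pipeline, so the reported peak reflects whichever step (here, index making) is the most memory-hungry.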
First check
Describe the bug
While making read indexes, the program exits with "13766 Killed". No further information is given. I am wondering whether this might be a RAM issue.
The following command was executed:

```shell
get_organelle_from_reads.py -1 $read1 -2 $read2 -o $dout/$subdout -R 40 \
    -k 21,45,65,85,105,135 -F $seedDB -t 14 --config-dir ${dbloc}${seedDB} \
    --overwrite --max-reads INF --reduce-reads-for-coverage 800 -J 1 -M 1
```
Attached log: All_species_plastids_assembly.o10946194-6.txt
Additional context
GetOrganelle is containerized (Apptainer) and runs as part of an array job. Each subjob is provided 14 cores and 30 GB of RAM. So far, other subjobs have not aborted with this error and are running just fine.
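For others who hit the same symptom: a bare "Killed" with a PID and no Python traceback usually means the kernel's OOM killer terminated the process for exceeding its memory limit. Assuming a SLURM scheduler (the issue only says "array job", so this is a guess, and the job ID below is a placeholder), one way to confirm the cause and raise the request:

```shell
# Ask the scheduler how the array task ended and how much memory it used
# (placeholder job ID; substitute your own array task ID).
sacct -j 12345678_6 --format=JobID,State,ExitCode,MaxRSS,ReqMem

# A State of OUT_OF_MEMORY, or MaxRSS close to ReqMem, confirms an OOM kill.
# Then raise the per-task memory in the submission script, e.g. 30G -> 40G:
#SBATCH --mem=40G
#SBATCH --cpus-per-task=14
```

Because memory is requested per array task here, raising `--mem` increases the limit for every subjob, which matches the resolution reported above.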