wfmash step to speed up #205
Comments
Dear @OZTaekOppa, by default, wfmash indeed only makes use of one node. However, there is a parameter called … Just to be clear about wfmash again, when …
I hope this answers your question!
I didn't test it for one vs. all, but it should work out the same way.
This question is also discussed at pangenome/pggb#403.
Hi @subwaystation, Thank you for your prompt reply. Cheers, Taek
Hi @subwaystation, The current single-node approach requires significant RAM, CPUs, and extended walltime. The HPC team is exploring alternative solutions to run parallel jobs across multiple nodes. From testing a small dataset, both the all-vs-all and one-vs-all approaches produced the same outcome. Currently, I am working with the team to optimize the partition and PGGB steps for Nextflow. Cheers, Taek
I am a little bit confused: there is an option to run wfmash directly across several nodes, as stated above. Otherwise, I am curious how your plans will turn out :)
Description of feature
Dear nf-core & pangenome team,
I have a few questions about your great program.
Based on the link (https://github.com/nf-core/pangenome/blob/1.0.0/modules/nf-core/wfmash/main.nf), it appears that wfmash performs all-vs-all alignment on a single node.
From my trials, this is indeed the case.
I am trying to speed up the wfmash process on multiple nodes (PBSpro) by running parallel jobs. My idea is to perform one-vs-all alignments for each node from an input full genome dataset (120 human pangenomes), and then merge the results into a single paf file for further analysis.
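The split-and-merge idea above can be sketched as follows. This is a minimal illustration, not code from the pipeline: the file names (`all.fa`, `<sample>.fa`, `merged.paf`) and the idea of one PBS job per sample are assumptions for illustration, and it only relies on wfmash's basic `wfmash <target> <query> > out.paf` invocation and on PAF being a line-oriented format, so per-node results can be merged by plain concatenation.

```python
# Hypothetical helper: build one wfmash command per sample so that each
# one-vs-all alignment can be submitted as a separate PBS job.
# "all.fa" (the full 120-genome set) and the per-sample FASTA names are
# placeholders, not paths from the issue.
def one_vs_all_commands(samples, reference_fasta="all.fa"):
    cmds = []
    for sample in samples:
        # Each job maps a single sample (query) against the full set (target);
        # wfmash's basic CLI is `wfmash <target> <query> > out.paf`.
        cmds.append(f"wfmash {reference_fasta} {sample}.fa > {sample}.paf")
    return cmds

# PAF is a line-oriented, tab-separated format with no header, so merging the
# per-sample results into a single file is simple concatenation.
def merge_pafs(paf_paths, out_path="merged.paf"):
    n_lines = 0
    with open(out_path, "w") as out:
        for path in paf_paths:
            with open(path) as f:
                for line in f:
                    out.write(line)
                    n_lines += 1
    return n_lines
```

In practice each generated command line would go into its own PBS job script (or a job array), and `merge_pafs` would run once after all jobs finish; whether the merged PAF is then equivalent to a single all-vs-all run is exactly the question discussed in this thread.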
Looking forward to your insights.
Kind regards,
Taek