Not quite understanding how to use this #46
Hi, upon installation you configured four options (as defined in cookiecutter.json): `profile_name`, `sbatch_defaults`, `cluster_config`, and `advanced_argument_conversion`.

Without a profile, you would submit cluster jobs as `snakemake --cluster "sbatch --account account --output logs/slurm-%j.out --error logs/slurm-%j.err -t 12:00:00 ..." -j 1 jobname --cluster-config cluster-config`. With the profile, you save typing, doing `snakemake --profile slurm -j 1 jobname`, where the `sbatch_defaults` option above is passed to the sbatch call, and the cluster configuration file set in the `cluster_config` option is used.

In addition, the profile uses the `slurm_status.py` script to check job status; the main benefit is that jobs that time out are caught as failed, something that does not happen when you submit without a profile. Also, if you add resources such as `runtime` to a rule, these are parsed and added to the sbatch call (see the sketch below). As of snakemake 5.15, resources can be strings, which means cluster configuration files could be eliminated entirely, although I haven't had time to look into this yet.

I hope this helps. If there is something I can do to improve the README, please let me know.

Cheers,
Per
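A minimal sketch of such a rule (the rule name, files, shell command, and the `runtime` value are all invented for illustration):

```python
rule align:
    input:
        "sample.fastq"
    output:
        "sample.bam"
    resources:
        runtime=120  # minutes; the profile parses this and adds it to the sbatch call
    shell:
        "align_reads {input} > {output}"  # align_reads is a hypothetical command
```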
That is useful, thank you. I guess what I needed was, as you have just given, an example of what to do and recommended practice of where to put what. I did find another webpage that describes how to use profiles: https://www.sichong.site/2020/02/25/snakemake-and-slurm-how-to-manage-workflow-with-resource-constraint-on-hpc/. I've linked it in case it's useful to other readers.
I just want to report that a hybrid approach currently works best for me: the profile handles submission and status checks, while a workflow-specific cluster configuration passed via `--cluster-config` gives per-rule control, along the lines of the sketch below.
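A sketch of such a hybrid setup, assuming a workflow-local `cluster.json` (the filename, partition name, and resource values are illustrative):

```sh
# Hypothetical workflow-local cluster configuration; "__default__" applies
# to every rule, and per-rule entries (here "align") override it.
cat > cluster.json <<'EOF'
{
    "__default__": { "partition": "norm", "time": "01:00:00" },
    "align":       { "time": "12:00:00", "mem": "16g" }
}
EOF

# Submit through the profile, layering the workflow-specific settings
# on top of the profile's sbatch_defaults.
snakemake --profile slurm --cluster-config cluster.json -j 10
```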
@proteins247 I was able to replace the features of `--cluster-config` with per-rule `resources` definitions.
I agree, this is a super helpful blog post that demonstrates how to use profiles with SLURM.
I have read through the README and installed this profile, but I still don't quite understand how it works. I am definitely a novice to snakemake, and I've gone through the snakemake tutorial. A key sticking point right now is running a workflow on my SLURM-managed HPC cluster. In particular, I don't understand how to configure this profile, and the description in the README.md is not clear to me.

I think I understand how to run a workflow on SLURM using `cluster-config`. The NIH cluster has a webpage that provides examples as well: snakemake. I see that `cluster-config` is deprecated, however.

When I installed the profile, I did not configure any of the cookiecutter options, not fully understanding them at the time. I can see that, in `config.yaml`, `cluster: "slurm-submit.py"` is analogous to the `--cluster` command-line argument for snakemake. I can see that the purpose of `slurm-submit.py` is to generate the final sbatch command and call it. I am not sure what `slurm-jobscript.sh` does, however.

Is the user supposed to edit `config.yaml` with the particulars of the user's cluster, such as partition and time limit? It seems to me that `config.yaml` should not be edited (since it serves to connect the various parts of this profile, and inadvertent editing could break it). So, should I create a separate file (either JSON or YAML) that possibly should be specified by `CLUSTER_CONFIG` in `slurm-submit.py`?

I think, for my first production workflow, I'd like fine control over how each step is submitted to the cluster, just so I know what's going on. That basically means I would have a workdir-specific `cluster.json` file that I specify via `--cluster-config`?

It's possible I'm overcomplicating things and worrying unnecessarily about `cluster-config` being deprecated. Am I right in my understanding that I could do `snakemake --profile slurm --cluster-config my_job_config.json`, and things should work?
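For orientation, the generated `config.yaml` mainly wires the profile's scripts into snakemake's command-line options, roughly as in this sketch (keys and values are illustrative and depend on the profile version and cookiecutter answers):

```yaml
# Sketch of a generated profile config.yaml (illustrative, not definitive).
jobscript: "slurm-jobscript.sh"    # wrapper script that each submitted job runs inside
cluster: "slurm-submit.py"         # builds the final sbatch command and submits it
cluster-status: "slurm-status.py"  # polls SLURM so timed-out jobs register as failed
```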