Error running on test data #20
Hi @Rhinogradentia, this looks to me like a problem in downloading the singularity image rather than a problem with YAMP.
Let me know if this works!
Hi @alesssia, Thanks a lot for the fast reply. I will try this and let you know if it was the solution. I think the download speed is quite slow at the moment on my cluster.
I think I found it. Best,
Correct! I am closing this issue, but please feel free to re-open it if needed!
Hi @alesssia, I've downloaded/pulled the Docker and Singularity images like this:
stored them in a subfolder 'images' and set NXF_SINGULARITY_CACHEDIR.
I already tried out several naming conventions for the images, as you can see (by creating links). .nextflow.log
The path seems to work, so I think the naming is the problem. Which naming convention is the pipeline looking for? Thanks again for any help! Best,
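The pull-and-cache setup described above can be sketched as follows. Note this is only a sketch: the target file name is an assumption (it must match whatever name Nextflow derives from the container URI in the pipeline config), and the MultiQC image URI is taken from the pipeline's version-reporting script as an example.

```shell
# Create a local cache directory and point Nextflow at it
mkdir -p "$PWD/images"
export NXF_SINGULARITY_CACHEDIR="$PWD/images"

# Pre-pull an image into the cache (file name and URI are illustrative;
# use the containers listed in the pipeline configuration)
# singularity pull "$NXF_SINGULARITY_CACHEDIR/multiqc-1.9--py_1.img" \
#     docker://quay.io/biocontainers/multiqc:1.9--py_1

echo "cache: $NXF_SINGULARITY_CACHEDIR"
```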
I am not sure of the naming convention, but my images are called:
I cannot spot any error in your commands. Are you submitting YAMP to your job scheduler? Is this folder accessible to all computing nodes? You can also modify the singularity pullTimeout setting.
Please let me know if this works!
Hi @alesssia, I tried to increase the singularity pullTimeout, but somehow this didn't have any effect. What brought me a step further was to add the container path to the processes in the test.config file.
Thanks a lot for your help
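As a sketch, the two changes discussed above (a longer pullTimeout and a per-process path to a pre-downloaded image) could live in a small custom config passed to Nextflow with `-c`. The process name is taken from the log further down in this thread; the timeout value and image path are illustrative:

```shell
# Write a custom Nextflow config (values below are examples, not YAMP's defaults)
cat > custom.config <<'EOF'
singularity {
    enabled     = true
    pullTimeout = '60 min'
}

// Point an individual process at a pre-downloaded image
process {
    withName: 'get_software_versions' {
        container = '/path/to/images/multiqc-1.9--py_1.img'
    }
}
EOF

# The pipeline would then be launched with: nextflow run ... -c custom.config
grep -c pullTimeout custom.config
```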
Let me know how it goes!
Hi @alesssia, the last problem came from the MetaPhlAn database versions; after downloading them again it vanished. But now I get this:
I found this https://forum.qiime2.org/t/errno-17-file-exists-during-classify/6554 where they assume it has something to do with access/write rights, and I think this concerns the container. After adding some debugging lines, I assume it is the first qiime line (tools import) which throws this error. Thanks a lot
Hi @Rhinogradentia, I never had this error. I have also asked other users I know, and this is a very new thing. I see that you are using a Slurm executor; are you using the Singularity container(s)?
Yes, I'm using pre-downloaded Singularity containers. I started over again without -resume, followed by the same error as above. If I then shell into the container and execute the commands from .command.run, everything seems to work fine. Executing .command.sh alone inside the container also works:
Executing .command.sh via exec with the container also works, except for the last shell script:
Even outside the container, running .command.run seems to work:
I searched for this error again and found another person hitting it with stand-alone qiime (without any workflow): https://forum.qiime2.org/t/error-when-executing-qiime-tools-import-script-on-a-server/7790. As suggested there, I tried it with a manually defined tmpdir, and this worked. So if anyone else stumbles upon this: it seemingly has something to do with the server/cluster network and inconsistencies there. The workaround is to create a local tmpdir and export it accordingly before running qiime2.
After this, the test-data workflow ran smoothly. Thanks a lot for your support and help. Best,
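The workaround described above can be sketched as follows (the directory location is an example; any node-local path should work):

```shell
# Create a local tmp directory and make qiime2 (and everything else) use it,
# avoiding "[Errno 17] File exists" errors caused by a flaky network filesystem
mkdir -p "$PWD/tmp"
export TMPDIR="$PWD/tmp"

# ...then launch the pipeline as usual in the same shell
echo "TMPDIR=$TMPDIR"
```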
Hi @Rhinogradentia, I wanted to include this on the wiki troubleshooting page (linking to this issue and acknowledging the fact that you found the solution, of course!) and I just wanted to confirm that you simply exported the tmpdir before running YAMP? No luck on my side reproducing this issue... Thanks a lot,
Hi @alesssia, no problem :-) And yes, I can confirm that I just created a local tmp directory and exported it; this solved the qiime error. I hope this can be helpful for someone else. Best,
Great, thanks a lot!
Hi Hannah, I think this is one process earlier in the pipeline and not the same error. Take a look in /nfs/users/rg/hbenisty/obbs_yamp/work/85/bfa6cda091b13a195246185ae93b4f/.command.log; maybe there is a better explanation for this error. Regarding the tmp directory: I just created it in the same location where I initiated the pipeline.
Hi Nadine,
Thank you for your message. I think I solved the previous error. However, now I have the one below. Would you have any recommendation? Many thanks!
[66/17aaba] Submitted process > quality_assessment (paired_end_complete)
[e5/552f8b] Submitted process > index_foreign_genome (1)
[c9/2aa2da] Submitted process > dedup (paired_end_complete)
[93/75e6b4] Submitted process > get_software_versions
Error executing process > 'get_software_versions'
Caused by:
Process `get_software_versions` terminated with an error exit status (255)
Command executed:
echo 0.9.5.3 > v_pipeline.txt
echo 22.10.7 > v_nextflow.txt
echo quay.io/biocontainers/fastqc:0.11.9--0 | cut -d: -f 2 > v_fastqc.txt
echo quay.io/biocontainers/bbmap:38.87--h1296035_0 | cut -d: -f 2 > v_bbmap.txt
metaphlan --version > v_metaphlan.txt
humann --version > v_humann.txt
echo qiime2/core:2020.8 | cut -d: -f 2 > v_qiime.txt
echo quay.io/biocontainers/multiqc:1.9--py_1 | cut -d: -f 2 > v_multiqc.txt
scrape_software_versions.py > software_versions_mqc.yaml
Command exit status:
255
Command output:
(empty)
Command error:
ERROR  : Unknown image format/type: /nfs/users/rg/hbenisty/obbs_yamp_copy/work/singularity/biobakery-workflows-3.0.0.a.6.metaphlanv3.0.7.img
ABORT  : Retval = 255
Work dir:
/nfs/users/rg/hbenisty/obbs_yamp_copy/work/93/75e6b47b4fed6d864e04a7ef37795f
Tip: you can try to figure out what's wrong by changing to the process work dir and showing the script file named `.command.sh`
Execution cancelled -- Finishing pending tasks before exit
[c9/2aa2da] NOTE: Process `dedup (paired_end_complete)` terminated with an error exit status (140) -- Execution is retried (1)
Hi Hannah, I never had this error. But from looking at the error message, I would say the biobakery image can't be executed or is not recognized as a valid image. Check whether it is actually available, readable, and executable.
Again, take a look at the .command files in /nfs/users/rg/hbenisty/obbs_yamp_copy/work/93/75e6b47b4fed6d864e04a7ef37795f to get more information on the error. You can also try to execute the command (.command.sh) that threw the error inside the container manually to see what happens. Best,
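A quick first step along these lines is to sanity-check the image file itself. The sketch below uses the image name from the error message, but the fallback directory is an illustrative placeholder:

```shell
# Check that the image exists, is readable, and looks like a real image file
# rather than a truncated or HTML-error download (fallback path is illustrative)
IMG="${NXF_SINGULARITY_CACHEDIR:-/path/to/images}/biobakery-workflows-3.0.0.a.6.metaphlanv3.0.7.img"
if [ -r "$IMG" ]; then
    ls -l "$IMG"
    file "$IMG"   # a valid image should not be reported as HTML, ASCII text, or empty
else
    echo "image missing or unreadable: $IMG"
fi
```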
Hi,
I'm trying to run the provided test data with YAMP on a Slurm-managed HPC with Singularity, and I'm running into the following error, which I don't really understand right now.
command.log
command.err
command.sh
I've already increased the run time to 90m because the pipeline timed out on the dedup process at 15m.
Maybe there is an issue with the downloads? I could use some Singularity containers or modules of my own for most of the tools you utilize; would this be possible? Or is it possible to preload the images manually?
When running this with -resume, I already managed to finish the first step (software versions).
Any help highly appreciated.
Best,
Nadine
EDIT:
Also, I'm not sure whether the qiime link is correct (see above).