FASTQC Process Fails with Exit Code 140 in nf-core/sarek Pipeline Using Singularity
Description of the bug:
The nf-core/sarek pipeline is consistently failing during the FASTQC process with exit code 140 when executed on a Slurm-based HPC cluster using Singularity. The same issue occurs when using Docker as the container runtime.
Additional context and observations:
The error occurs during the FASTQC process in both Singularity and Docker executions.
Other pipelines such as nf-core/rnaseq run without issues in the same environment.
Running the pipeline with root privileges also fails.
The warning Skipping mount /usr/local/var/singularity/mnt/session/etc/resolv.conf appears but may not be directly related to the issue.
Java I/O error java.io.IOException: Bad file descriptor suggests possible file handling issues within the container.
The error persists even when Singularity is correctly configured and verified with other workflows.
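One thing I have been checking in relation to the Bad file descriptor error is whether the open-file limit inside the container differs from the host, since Java can surface a low descriptor limit this way. A minimal check (the .sif path in the commented line is a placeholder, not the real image):

```shell
# Open-file limit on the host; Slurm job steps can inherit a lower
# per-job limit than an interactive login shell.
ulimit -n
# Same check inside the container (image path is a placeholder):
# singularity exec fastqc.sif bash -c 'ulimit -n'
```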
Request for assistance:
I am seeking help to resolve this issue with the FASTQC process in the nf-core/sarek pipeline. Any guidance on addressing the exit code 140 error would be greatly appreciated, particularly:
Is this a known issue with FASTQC in nf-core/sarek?
Could the java.io.IOException: Bad file descriptor indicate an underlying issue in the pipeline or the environment?
Are there specific settings or configurations required for running this pipeline with Singularity on SLURM?
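From what I have read, Nextflow reports exit status 140 when a Slurm job is killed for exceeding its requested walltime or memory, so one workaround I am considering is raising the limits for the FASTQC process in a custom config. This is only a sketch: the values and the process selector are assumptions, and in sarek the fully qualified process name may require a wildcard selector such as '.*:FASTQC'.

```shell
# Write a custom config that bumps resources for the FASTQC process.
cat > fastqc_resources.config <<'EOF'
process {
    withName: 'FASTQC' {
        cpus   = 2
        memory = '8.GB'
        time   = '8.h'
    }
}
EOF
# Illustrative invocation (profiles and remaining arguments omitted):
# nextflow run nf-core/sarek -profile singularity -c fastqc_resources.config ...
```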
I have attached the final stdout message and the stderr. The output logs show the message described earlier in the screenshot, and in the .err file I am seeing a "missing txt file" error that I do not recognize.
The job ran for 24 hours with a couple of failed jobs, and it still produced about 2 TB of output in the work directories.
Script Location: The entire script and more details are available on GitHub at this issue link.
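To locate the failing task directories and eventually reclaim the ~2 TB of work-dir output, I have been using something like the following sketch. It assumes Nextflow is on PATH and is run from the launch directory; the field and filter names are taken from the nextflow log documentation as I understand it:

```shell
# Guarded so the snippet is a no-op on machines without Nextflow.
if command -v nextflow >/dev/null 2>&1; then
    # List failed tasks of the last run with exit codes and work dirs;
    # each work dir holds the captured .command.err and .command.log.
    nextflow log last -f 'process,name,exit,workdir' -F 'status == "FAILED"'
    # Once the failure is diagnosed, delete the intermediates:
    # nextflow clean -f last
fi
```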
Posted as well at the nf-core Slack channel: https://nfcore.slack.com/archives/CE6SDBX2A/p1728540986179659
Added command and terminal output and relevant files below.
Command used and terminal output:
Command Executed:
Error Output:
Relevant files:
The following is the script used to launch the job (paths and personal information generalized for privacy):
System information:
Nextflow version: 24.04.3
Container engine: Singularity 3.11.0, Docker 24.0.7
Pipeline version: 3.2.3