When using this pipeline on our HPC, `createBigWig` always fails because it fills up its temp directory. It writes to `/tmp`, which is very small (10 MB) on the worker nodes; it should be writing to `/scratch` instead.

I have tried to redirect it in several places: `process.scratch = '/scratch'` in the Nextflow config, `singularity.envWhitelist = 'TMP'` (with `TMP=/scratch` set on the submit node), and `-Djava.io.tmpdir=/scratch` when launching Nextflow. If there is anywhere else worth trying, I'd be happy to.
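For reference, a minimal sketch of what that configuration looks like on our side (the `/scratch` path is specific to our cluster, and `enabled = true` is assumed here just to make the snippet self-contained):

```groovy
// nextflow.config — sketch of the settings described above
process.scratch = '/scratch'   // run each task in node-local scratch

singularity {
    enabled      = true        // assumed: we run the pipeline with Singularity
    envWhitelist = 'TMP'       // pass the host's $TMP (set to /scratch) into the container
}
```

Nextflow itself is launched with the JVM temp dir redirected as well, e.g. `NXF_OPTS='-Djava.io.tmpdir=/scratch' nextflow run nf-core/rnaseq ...`.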
This may be the same sort of error as in #82, regarding `TMPDIR` behaviour.
The resulting error is listed here:
```
ERROR ~ Error executing process > 'createBigWig (RNA3-Aligned.)'

Caused by:
  Process `createBigWig (RNA3-Aligned.)` terminated with an error exit status (1)

Command executed:

  samtools index RNA3-Aligned.sortedByCoord.out.bam
  bamCoverage -b RNA3-Aligned.sortedByCoord.out.bam -p 10 -o RNA3-Aligned.sortedByCoord.out.bigwig

Command exit status:
  1

Command output:
  (empty)

Command error:
  minFragmentLength: 0
  verbose: False
  out_file_for_raw_data: None
  numberOfSamples: None
  bedFile: None
  bamFilesList: ['RNA3-Aligned.sortedByCoord.out.bam']
  numberOfProcessors: 10
  samFlag_exclude: None
  save_data: False
  stepSize: 50
  smoothLength: None
  blackListFileName: None
  center_read: False
  ignoreDuplicates: False
  defaultFragmentLength: read length
  chrsToSkip: []
  region: None
  maxPairedFragmentLength: 1000
  samFlag_include: None
  binLength: 50
  maxFragmentLength: 0
  minMappingQuality: None
  zerosToNans: False
  Traceback (most recent call last):
    File "/opt/conda/envs/nf-core-rnaseq-1.1/bin/bamCoverage", line 12, in <module>
      main(args)
    File "/opt/conda/envs/nf-core-rnaseq-1.1/lib/python2.7/site-packages/deeptools/bamCoverage.py", line 256, in main
      format=args.outFileFormat, smoothLength=args.smoothLength)
    File "/opt/conda/envs/nf-core-rnaseq-1.1/lib/python2.7/site-packages/deeptools/writeBedGraph.py", line 152, in run
      numberOfProcessors=self.numberOfProcessors)
    File "/opt/conda/envs/nf-core-rnaseq-1.1/lib/python2.7/site-packages/deeptools/mapReduce.py", line 142, in mapReduce
      res = pool.map_async(func, TASKS).get(9999999)
    File "/opt/conda/envs/nf-core-rnaseq-1.1/lib/python2.7/multiprocessing/pool.py", line 572, in get
      raise self._value
  IOError: [Errno 28] No space left on device

Work dir:
  /hpc/home/coetzeesg/work/a0/2b68269730119025498ad222dd65c0

Tip: view the complete command output by changing to the process work dir and entering the command `cat .command.out`

 -- Check '.nextflow.log' file for details
[nfcore/rnaseq] Pipeline Complete
WARN: Killing pending tasks (2)
```
So it looks to me as though the problem is that deeptools does not respect `$TMP` or `$TMPDIR`, in my specific instance at least, but instead puts things in `/tmp` (or `/var/tmp` when `/tmp` is full). `$TEMP` is not passed into Singularity at all; when `$TEMP` is assigned in the `main.nf` file, I believe deeptools does use it. My workaround is to add `runOptions = '-B /scratch:/tmp'` to the `singularity` section of the Nextflow config, so that badly behaved apps can write to what they think is `/tmp` but is in reality the directory I want them to use.
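In full, the workaround is just this (again, `/scratch` is specific to our cluster):

```groovy
// nextflow.config — bind-mount the large scratch filesystem over /tmp inside
// the container, so tools that hard-code /tmp actually write to /scratch
singularity {
    runOptions = '-B /scratch:/tmp'
}
```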