License permits unrestricted use (educational + commercial)
Please also take note of the reviewer guidelines below to facilitate a smooth review process.
I have had difficulty writing tests for this workflow. Ideally I would like a test that checks the created bins, but the bin IDs (i.e. the names of the FASTA files in dereplicated_genomes) are assigned randomly, so the names differ on every run of the workflow. Is it possible to check e.g. the number of elements in the collection, or to apply some other loose test? I could not find any details in the test docs.
If not, is it OK to check only the MultiQC report? If any of the workflow steps fails, the report changes significantly and the test will fail.
2025-03-26 13:02:35 - MEGAHIT v1.2.9
2025-03-26 13:02:35 - Using megahit_core with POPCNT and BMI2 support
2025-03-26 13:02:35 - Convert reads to binary library
2025-03-26 13:02:35 - b'INFO sequence/io/sequence_lib.cpp : 75 - Lib 0 (/tmp/tmpunau54_g/files/6/1/8/dataset_618900d2-82f4-478e-b4de-59e697c3bf37.dat,/tmp/tmpunau54_g/files/7/3/9/dataset_739b969b-6d3d-4163-8939-56482ad4c124.dat): pe, 18924 reads, 150 max length'
2025-03-26 13:02:35 - b'INFO utils/utils.h : 152 - Real: 0.0419\tuser: 0.0412\tsys: 0.0020\tmaxrss: 20616'
2025-03-26 13:02:35 - Start assembly. Number of CPU threads 1
2025-03-26 13:02:35 - k list: 21,29,39,59,79,99,119,141
2025-03-26 13:02:35 - Memory used: 15090090393
2025-03-26 13:02:35 - Extract solid (k+1)-mers for k = 21
2025-03-26 13:02:36 - Build graph for k = 21
2025-03-26 13:02:36 - Assemble contigs from SdBG for k = 21
2025-03-26 13:02:37 - Local assembly for k = 21
2025-03-26 13:02:38 - Extract iterative edges from k = 21 to 29
2025-03-26 13:02:38 - Build graph for k = 29
2025-03-26 13:02:38 - Assemble contigs from SdBG for k = 29
2025-03-26 13:02:38 - Local assembly for k = 29
2025-03-26 13:02:39 - Extract iterative edges from k = 29 to 39
2025-03-26 13:02:39 - Build graph for k = 39
2025-03-26 13:02:39 - Assemble contigs from SdBG for k = 39
2025-03-26 13:02:40 - Local assembly for k = 39
2025-03-26 13:02:41 - Extract iterative edges from k = 39 to 59
2025-03-26 13:02:41 - Build graph for k = 59
2025-03-26 13:02:41 - Assemble contigs from SdBG for k = 59
2025-03-26 13:02:41 - Local assembly for k = 59
2025-03-26 13:02:42 - Extract iterative edges from k = 59 to 79
2025-03-26 13:02:42 - Build graph for k = 79
2025-03-26 13:02:42 - Assemble contigs from SdBG for k = 79
2025-03-26 13:02:42 - Local assembly for k = 79
2025-03-26 13:02:43 - Extract iterative edges from k = 79 to 99
2025-03-26 13:02:43 - Build graph for k = 99
2025-03-26 13:02:43 - Assemble contigs from SdBG for k = 99
2025-03-26 13:02:43 - Local assembly for k = 99
2025-03-26 13:02:43 - Extract iterative edges from k = 99 to 119
2025-03-26 13:02:43 - Build graph for k = 119
2025-03-26 13:02:44 - Assemble contigs from SdBG for k = 119
2025-03-26 13:02:44 - Local assembly for k = 119
2025-03-26 13:02:44 - Extract iterative edges from k = 119 to 141
2025-03-26 13:02:44 - Build graph for k = 141
2025-03-26 13:02:44 - Assemble contigs from SdBG for k = 141
2025-03-26 13:02:45 - Merging to output final contigs
2025-03-26 13:02:45 - 63 contigs, total 266124 bp, min 354 bp, max 11001 bp, avg 4224 bp, N50 4877 bp
2025-03-26 13:02:45 - ALL DONE. Time elapsed: 9.625468 seconds
echo 50contig_reads && ln -s '/tmp/tmpunau54_g/files/6/1/8/dataset_618900d2-82f4-478e-b4de-59e697c3bf37.dat' 'pe1-50contig_reads.fastqsanger.gz' && ln -s '/tmp/tmpunau54_g/files/7/3/9/dataset_739b969b-6d3d-4163-8939-56482ad4c124.dat' 'pe2-50contig_reads.fastqsanger.gz' && metaquast --pe1 'pe1-50contig_reads.fastqsanger.gz' --pe2 'pe2-50contig_reads.fastqsanger.gz' --labels '50contig_reads' -o 'outputdir' --max-ref-num 0 --min-identity 90.0 --min-contig 500 --min-alignment 65 --ambiguity-usage 'one' --ambiguity-score 0.99 --local-mis-size 200 --contig-thresholds '0,1000' --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500 --x-for-Nx 90 '/tmp/tmpunau54_g/files/a/f/8/dataset_af8ab189-efe4-4cd3-a676-5e305dffe743.dat' --threads ${GALAXY_SLOTS:-1} && if [[ -f "outputdir/report.tsv" ]]; then mkdir -p "outputdir/combined_reference/" && cp "outputdir/report.tsv" "outputdir/combined_reference/report.tsv"; fi && if [[ -f "outputdir/report.html" ]]; then mkdir -p "outputdir/combined_reference/" && cp outputdir/*.html "outputdir/combined_reference/"; fi && mkdir -p '/tmp/tmpunau54_g/job_working_directory/000/11/outputs/dataset_9988ba27-427c-42d6-b301-ca3a0bf3f8e1_files' && cp outputdir/combined_reference/*.html '/tmp/tmpunau54_g/job_working_directory/000/11/outputs/dataset_9988ba27-427c-42d6-b301-ca3a0bf3f8e1_files' && if [[ -d "outputdir/icarus_viewers" ]]; then cp -R outputdir/icarus_viewers 'outputdir/combined_reference/'; fi && if [[ -d "outputdir/combined_reference/icarus_viewers" ]]; then cp -R outputdir/combined_reference/icarus_viewers '/tmp/tmpunau54_g/job_working_directory/000/11/outputs/dataset_9988ba27-427c-42d6-b301-ca3a0bf3f8e1_files'; fi && if [[ -d "outputdir/krona_charts/" ]]; then mkdir -p 'None' && cp outputdir/krona_charts/*.html 'None'; fi
Exit Code:
0
Standard Output:
50contig_reads/usr/local/opt/quast-5.3.0/metaquast.py --pe1 pe1-50contig_reads.fastqsanger.gz --pe2 pe2-50contig_reads.fastqsanger.gz --labels 50contig_reads -o outputdir --max-ref-num 0 --min-identity 90.0 --min-contig 500 --min-alignment 65 --ambiguity-usage one --ambiguity-score 0.99 --local-mis-size 200 --contig-thresholds 0,1000 --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500 --x-for-Nx 90 /tmp/tmpunau54_g/files/a/f/8/dataset_af8ab189-efe4-4cd3-a676-5e305dffe743.dat --threads 1Version: 5.3.0System information: OS: Linux-6.8.0-1021-azure-x86_64-with-glibc2.36 (linux_64) Python version: 3.12.3 CPUs number: 4Started: 2025-03-26 13:03:39Logging to /tmp/tmpunau54_g/job_working_directory/000/11/working/outputdir/metaquast.logWARNING: --ambiguity-usage was set to 'all' because not default --ambiguity-score was specifiedINFO generated new fontManagerINFO generated new fontManagerContigs: Pre-processing... /tmp/tmpunau54_g/files/a/f/8/dataset_af8ab189-efe4-4cd3-a676-5e305dffe743.dat ==> 50contig_readsNOTICE: Maximum number of references (--max-ref-number) is set to 0, search in SILVA 16S rRNA database is disabledNOTICE: No references are provided, starting regular QUAST with MetaGeneMark gene finder/usr/local/opt/quast-5.3.0/quast.py --pe1 pe1-50contig_reads.fastqsanger.gz --pe2 pe2-50contig_reads.fastqsanger.gz --min-identity 90.0 --min-contig 500 --min-alignment 65 --ambiguity-usage one --ambiguity-score 0.99 --local-mis-size 200 --contig-thresholds 0,1000 --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500 --x-for-Nx 90 --threads 1 /tmp/tmpunau54_g/files/a/f/8/dataset_af8ab189-efe4-4cd3-a676-5e305dffe743.dat -o /tmp/tmpunau54_g/job_working_directory/000/11/working/outputdir --labels 50contig_readsVersion: 5.3.0System information: OS: Linux-6.8.0-1021-azure-x86_64-with-glibc2.36 (linux_64) Python version: 3.12.3 CPUs number: 4Started: 2025-03-26 13:03:40Logging to /tmp/tmpunau54_g/job_working_directory/000/11/working/outputdir/quast.logNOTICE: Output directory already exists and looks like a QUAST output dir. Existing results can be reused (e.g. previously generated alignments)!WARNING: --ambiguity-usage was set to 'all' because not default --ambiguity-score was specifiedCWD: /tmp/tmpunau54_g/job_working_directory/000/11/workingMain parameters: MODE: meta, threads: 1, min contig length: 500, min alignment length: 65, min alignment IDY: 90.0, \ ambiguity: all, min local misassembly length: 200, min extensive misassembly length: 1000Contigs: Pre-processing... /tmp/tmpunau54_g/files/a/f/8/dataset_af8ab189-efe4-4cd3-a676-5e305dffe743.dat ==> 50contig_reads2025-03-26 13:03:40Running Reads analyzer...NOTICE: Permission denied accessing /usr/local/lib/python3.12/site-packages/quast_libs/gridss. GRIDSS will be downloaded to home directory /tmp/tmpunau54_g/job_working_directory/000/11/home/.quastDownloading gridss (file: gridss-1.4.1.jar)... 
[download progress ticks, 0.0%–99.0% of 38935087 bytes, omitted] gridss successfully downloaded! Logging to files /tmp/tmpunau54_g/job_working_directory/000/11/working/outputdir/reads_stats/reads_stats.log and /tmp/tmpunau54_g/job_working_directory/000/11/working/outputdir/reads_stats/reads_stats.err... Pre-processing reads... Running BWA... Done. Sorting SAM-file... Analysis is finished. Creating total report... saved to /tmp/tmpunau54_g/job_working_directory/000/11/working/outputdir/reads_stats/reads_report.txt, reads_report.tsv, and reads_report.tex Done. 2025-03-26 13:03:45 Running Basic statistics processor... Contig files: 50contig_reads Calculating N50 and L50... 50contig_reads, N50 = 4877, L50 = 22, auN = 5187.8, Total length = 265405, GC % = 36.45, # N's per 100 kbp = 0.00 Drawing Nx plot... saved to /tmp/tmpunau54_g/job_working_directory/000/11/working/outputdir/basic_stats/Nx_plot.pdf Drawing cumulative plot... saved to /tmp/tmpunau54_g/job_working_directory/000/11/working/outputdir/basic_stats/cumulative_plot.pdf Drawing GC content plot... 
saved to /tmp/tmpunau54_g/job_working_directory/000/11/working/outputdir/basic_stats/GC_content_plot.pdf Drawing 50contig_reads GC content plot... saved to /tmp/tmpunau54_g/job_working_directory/000/11/working/outputdir/basic_stats/50contig_reads_GC_content_plot.pdfDone.NOTICE: Genes are not predicted by default. Use --gene-finding or --glimmer option to enable it.2025-03-26 13:03:46Creating large visual summaries...This may take a while: press Ctrl-C to skip this step.. 1 of 2: Creating PDF with all tables and plots... 2 of 2: Creating Icarus viewers...Done2025-03-26 13:03:47RESULTS: Text versions of total report are saved to /tmp/tmpunau54_g/job_working_directory/000/11/working/outputdir/report.txt, report.tsv, and report.tex Text versions of transposed total report are saved to /tmp/tmpunau54_g/job_working_directory/000/11/working/outputdir/transposed_report.txt, transposed_report.tsv, and transposed_report.tex HTML version (interactive tables and plots) is saved to /tmp/tmpunau54_g/job_working_directory/000/11/working/outputdir/report.html PDF version (tables and plots) is saved to /tmp/tmpunau54_g/job_working_directory/000/11/working/outputdir/report.pdf Icarus (contig browser) is saved to /tmp/tmpunau54_g/job_working_directory/000/11/working/outputdir/icarus.html Log is saved to /tmp/tmpunau54_g/job_working_directory/000/11/working/outputdir/quast.logFinished: 2025-03-26 13:03:47Elapsed time: 0:00:06.934427NOTICEs: 3; WARNINGs: 1; non-fatal ERRORs: 0Thank you for using QUAST!
Building a SMALL index
Renaming genome.3.bt2.tmp to genome.3.bt2
Renaming genome.4.bt2.tmp to genome.4.bt2
Renaming genome.1.bt2.tmp to genome.1.bt2
Renaming genome.2.bt2.tmp to genome.2.bt2
Renaming genome.rev.1.bt2.tmp to genome.rev.1.bt2
Renaming genome.rev.2.bt2.tmp to genome.rev.2.bt2
9462 reads; of these:
  9462 (100.00%) were paired; of these:
    90 (0.95%) aligned concordantly 0 times
    9300 (98.29%) aligned concordantly exactly 1 time
    72 (0.76%) aligned concordantly >1 times
    ----
    90 pairs aligned concordantly 0 times; of these:
      8 (8.89%) aligned discordantly 1 time
    ----
    82 pairs aligned 0 times concordantly or discordantly; of these:
      164 mates make up the pairs; of these:
        93 (56.71%) aligned 0 times
        70 (42.68%) aligned exactly 1 time
        1 (0.61%) aligned >1 times
99.51% overall alignment rate
Standard Output:
Settings: Output files: "genome.*.bt2" Line rate: 6 (line is 64 bytes) Lines per side: 1 (side is 64 bytes) Offset rate: 4 (one in 16) FTable chars: 10 Strings: unpacked Max bucket size: default Max bucket size, sqrt multiplier: default Max bucket size, len divisor: 4 Difference-cover sample period: 1024 Endianness: little Actual local endianness: little Sanity checking: disabled Assertions: disabled Random seed: 0 Sizeofs: void*:8, int:4, long:8, size_t:8Input files DNA, FASTA: /tmp/tmpunau54_g/files/a/f/8/dataset_af8ab189-efe4-4cd3-a676-5e305dffe743.datReading reference sizes Time reading reference sizes: 00:00:00Calculating joined lengthWriting headerReserving space for joined stringJoining reference sequences Time to join reference sequences: 00:00:00bmax according to bmaxDivN setting: 66531Using parameters --bmax 49899 --dcv 1024 Doing ahead-of-time memory usage test Passed! Constructing with these parameters: --bmax 49899 --dcv 1024Constructing suffix-array element generatorBuilding DifferenceCoverSample Building sPrime Building sPrimeOrder V-Sorting samples V-Sorting samples time: 00:00:00 Allocating rank array Ranking v-sort output Ranking v-sort output time: 00:00:00 Invoking Larsson-Sadakane on ranks Invoking Larsson-Sadakane on ranks time: 00:00:00 Sanity-checking and returningBuilding samplesReserving space for 12 sample suffixesGenerating random suffixesQSorting 12 sample offsets, eliminating duplicatesQSorting sample offsets, eliminating duplicates time: 00:00:00Multikey QSorting 12 samples (Using difference cover) Multikey QSorting samples time: 00:00:00Calculating bucket sizesSplitting and merging Splitting and merging time: 00:00:00Split 1, merged 6; iterating...Splitting and merging Splitting and merging time: 00:00:00Avg bucket size: 38016.9 (target: 49898)Converting suffix-array elements to index imageAllocating ftab, absorbFtabEntering Ebwt loopGetting block 1 of 7 Reserving size (49899) for bucket 1 Calculating Z arrays for bucket 1 Entering block accumulator loop for bucket 1: bucket 1: 10% bucket 1: 20% bucket 1: 30% bucket 1: 40% bucket 1: 50% bucket 1: 60% bucket 1: 70% bucket 1: 80% bucket 1: 90% bucket 1: 100% Sorting block of length 18323 for bucket 1 (Using difference cover) Sorting block time: 00:00:00Returning block of 18324 for bucket 1Getting block 2 of 7 Reserving size (49899) for bucket 2 Calculating Z arrays for bucket 2 Entering block accumulator loop for bucket 2: bucket 2: 10% bucket 2: 20% bucket 2: 30% bucket 2: 40% bucket 2: 50% bucket 2: 60% bucket 2: 70% bucket 2: 80% bucket 2: 90% bucket 2: 100% Sorting block of length 49606 for bucket 2 (Using difference cover) Sorting block time: 00:00:00Returning block of 49607 for bucket 2Getting block 3 of 7 Reserving size (49899) for bucket 3 Calculating Z arrays for bucket 3 Entering block accumulator loop for bucket 3: bucket 3: 10% bucket 3: 20% bucket 3: 30% bucket 3: 40% bucket 3: 50% bucket 3: 60% bucket 3: 70% bucket 3: 80% bucket 3: 90% bucket 3: 100% Sorting block of length 45151 for bucket 3 (Using difference cover) Sorting block time: 00:00:00Returning block of 45152 for bucket 3Getting block 4 of 7 Reserving size (49899) for bucket 4 Calculating Z arrays for bucket 4 Entering block accumulator loop for bucket 4: bucket 4: 10% bucket 4: 20% bucket 4: 30% bucket 4: 40% bucket 4: 50% bucket 4: 60% bucket 4: 70% bucket 4: 80% bucket 4: 90% bucket 4: 100% Sorting block of length 49787 for bucket 4 (Using difference cover) Sorting block time: 00:00:00Returning block of 49788 for bucket 4Getting 
block 5 of 7 Reserving size (49899) for bucket 5 Calculating Z arrays for bucket 5 Entering block accumulator loop for bucket 5: bucket 5: 10% bucket 5: 20% bucket 5: 30% bucket 5: 40% bucket 5: 50% bucket 5: 60% bucket 5: 70% bucket 5: 80% bucket 5: 90% bucket 5: 100% Sorting block of length 28638 for bucket 5 (Using difference cover) Sorting block time: 00:00:00Returning block of 28639 for bucket 5Getting block 6 of 7 Reserving size (49899) for bucket 6 Calculating Z arrays for bucket 6 Entering block accumulator loop for bucket 6: bucket 6: 10% bucket 6: 20% bucket 6: 30% bucket 6: 40% bucket 6: 50% bucket 6: 60% bucket 6: 70% bucket 6: 80% bucket 6: 90% bucket 6: 100% Sorting block of length 43194 for bucket 6 (Using difference cover) Sorting block time: 00:00:00Returning block of 43195 for bucket 6Getting block 7 of 7 Reserving size (49899) for bucket 7 Calculating Z arrays for bucket 7 Entering block accumulator loop for bucket 7: bucket 7: 10% bucket 7: 20% bucket 7: 30% bucket 7: 40% bucket 7: 50% bucket 7: 60% bucket 7: 70% bucket 7: 80% bucket 7: 90% bucket 7: 100% Sorting block of length 31419 for bucket 7 (Using difference cover) Sorting block time: 00:00:00Returning block of 31420 for bucket 7Exited Ebwt loopfchr[A]: 0fchr[C]: 84325fchr[G]: 133305fchr[T]: 181307fchr[$]: 266124Exiting Ebwt::buildToDisk()Returning from initFromVectorWrote 4286544 bytes to primary EBWT file: genome.1.bt2.tmpWrote 66536 bytes to secondary EBWT file: genome.2.bt2.tmpRe-opening _in1 and _in2 as input streamsReturning from Ebwt constructorHeaders: len: 266124 bwtLen: 266125 sz: 66531 bwtSz: 66532 lineRate: 6 offRate: 4 offMask: 0xfffffff0 ftabChars: 10 eftabLen: 20 eftabSz: 80 ftabLen: 1048577 ftabSz: 4194308 offsLen: 16633 offsSz: 66532 lineSz: 64 sideSz: 64 sideBwtSz: 48 sideBwtLen: 192 numSides: 1387 numLines: 1387 ebwtTotLen: 88768 ebwtTotSz: 88768 color: 0 reverse: 0Total time for call to driver() for forward index: 00:00:00Reading reference sizes Time reading reference sizes: 00:00:00Calculating joined lengthWriting headerReserving space for joined stringJoining reference sequences Time to join reference sequences: 00:00:00 Time to reverse reference sequence: 00:00:00bmax according to bmaxDivN setting: 66531Using parameters --bmax 49899 --dcv 1024 Doing ahead-of-time memory usage test Passed! 
Constructing with these parameters: --bmax 49899 --dcv 1024Constructing suffix-array element generatorBuilding DifferenceCoverSample Building sPrime Building sPrimeOrder V-Sorting samples V-Sorting samples time: 00:00:00 Allocating rank array Ranking v-sort output Ranking v-sort output time: 00:00:00 Invoking Larsson-Sadakane on ranks Invoking Larsson-Sadakane on ranks time: 00:00:00 Sanity-checking and returningBuilding samplesReserving space for 12 sample suffixesGenerating random suffixesQSorting 12 sample offsets, eliminating duplicatesQSorting sample offsets, eliminating duplicates time: 00:00:00Multikey QSorting 12 samples (Using difference cover) Multikey QSorting samples time: 00:00:00Calculating bucket sizesSplitting and merging Splitting and merging time: 00:00:00Split 1, merged 7; iterating...Splitting and merging Splitting and merging time: 00:00:00Avg bucket size: 44353.2 (target: 49898)Converting suffix-array elements to index imageAllocating ftab, absorbFtabEntering Ebwt loopGetting block 1 of 6 Reserving size (49899) for bucket 1 Calculating Z arrays for bucket 1 Entering block accumulator loop for bucket 1: bucket 1: 10% bucket 1: 20% bucket 1: 30% bucket 1: 40% bucket 1: 50% bucket 1: 60% bucket 1: 70% bucket 1: 80% bucket 1: 90% bucket 1: 100% Sorting block of length 47687 for bucket 1 (Using difference cover) Sorting block time: 00:00:00Returning block of 47688 for bucket 1Getting block 2 of 6 Reserving size (49899) for bucket 2 Calculating Z arrays for bucket 2 Entering block accumulator loop for bucket 2: bucket 2: 10% bucket 2: 20% bucket 2: 30% bucket 2: 40% bucket 2: 50% bucket 2: 60% bucket 2: 70% bucket 2: 80% bucket 2: 90% bucket 2: 100% Sorting block of length 36636 for bucket 2 (Using difference cover) Sorting block time: 00:00:00Returning block of 36637 for bucket 2Getting block 3 of 6 Reserving size (49899) for bucket 3 Calculating Z arrays for bucket 3 Entering block accumulator loop for bucket 3: bucket 3: 10% bucket 3: 20% bucket 3: 30% bucket 3: 40% bucket 3: 50% bucket 3: 60% bucket 3: 70% bucket 3: 80% bucket 3: 90% bucket 3: 100% Sorting block of length 49027 for bucket 3 (Using difference cover) Sorting block time: 00:00:00Returning block of 49028 for bucket 3Getting block 4 of 6 Reserving size (49899) for bucket 4 Calculating Z arrays for bucket 4 Entering block accumulator loop for bucket 4: bucket 4: 10% bucket 4: 20% bucket 4: 30% bucket 4: 40% bucket 4: 50% bucket 4: 60% bucket 4: 70% bucket 4: 80% bucket 4: 90% bucket 4: 100% Sorting block of length 37449 for bucket 4 (Using difference cover) Sorting block time: 00:00:00Returning block of 37450 for bucket 4Getting block 5 of 6 Reserving size (49899) for bucket 5 Calculating Z arrays for bucket 5 Entering block accumulator loop for bucket 5: bucket 5: 10% bucket 5: 20% bucket 5: 30% bucket 5: 40% bucket 5: 50% bucket 5: 60% bucket 5: 70% bucket 5: 80% bucket 5: 90% bucket 5: 100% Sorting block of length 47142 for bucket 5 (Using difference cover) Sorting block time: 00:00:00Returning block of 47143 for bucket 5Getting block 6 of 6 Reserving size (49899) for bucket 6 Calculating Z arrays for bucket 6 Entering block accumulator loop for bucket 6: bucket 6: 10% bucket 6: 20% bucket 6: 30% bucket 6: 40% bucket 6: 50% bucket 6: 60% bucket 6: 70% bucket 6: 80% bucket 6: 90% bucket 6: 100% Sorting block of length 48178 for bucket 6 (Using difference cover) Sorting block time: 00:00:00Returning block of 48179 for bucket 6Exited Ebwt loopfchr[A]: 0fchr[C]: 84325fchr[G]: 133305fchr[T]: 181307fchr[$]: 
266124Exiting Ebwt::buildToDisk()Returning from initFromVectorWrote 4286544 bytes to primary EBWT file: genome.rev.1.bt2.tmpWrote 66536 bytes to secondary EBWT file: genome.rev.2.bt2.tmpRe-opening _in1 and _in2 as input streamsReturning from Ebwt constructorHeaders: len: 266124 bwtLen: 266125 sz: 66531 bwtSz: 66532 lineRate: 6 offRate: 4 offMask: 0xfffffff0 ftabChars: 10 eftabLen: 20 eftabSz: 80 ftabLen: 1048577 ftabSz: 4194308 offsLen: 16633 offsSz: 66532 lineSz: 64 sideSz: 64 sideBwtSz: 48 sideBwtLen: 192 numSides: 1387 numLines: 1387 ebwtTotLen: 88768 ebwtTotSz: 88768 color: 0 reverse: 1Total time for backward call to driver() for mirror index: 00:00:01
2025-03-26 13:12:53 - MEGAHIT v1.2.9
2025-03-26 13:12:53 - Using megahit_core with POPCNT and BMI2 support
2025-03-26 13:12:53 - Convert reads to binary library
2025-03-26 13:12:53 - b'INFO sequence/io/sequence_lib.cpp : 75 - Lib 0 (/tmp/tmp4ato2skh/files/5/2/3/dataset_523a5bb5-97a6-4c1d-8c5a-154e49bfe9aa.dat,/tmp/tmp4ato2skh/files/d/a/4/dataset_da42f7ca-f21d-486d-8dbf-76967966324b.dat): pe, 18924 reads, 150 max length'
2025-03-26 13:12:53 - b'INFO utils/utils.h : 152 - Real: 0.0488\tuser: 0.0468\tsys: 0.0029\tmaxrss: 20796'
2025-03-26 13:12:53 - Start assembly. Number of CPU threads 1
2025-03-26 13:12:53 - k list: 21,29,39,59,79,99,119,141
2025-03-26 13:12:53 - Memory used: 15090086707
2025-03-26 13:12:53 - Extract solid (k+1)-mers for k = 21
2025-03-26 13:12:53 - Build graph for k = 21
2025-03-26 13:12:54 - Assemble contigs from SdBG for k = 21
2025-03-26 13:12:54 - Local assembly for k = 21
2025-03-26 13:12:55 - Extract iterative edges from k = 21 to 29
2025-03-26 13:12:55 - Build graph for k = 29
2025-03-26 13:12:55 - Assemble contigs from SdBG for k = 29
2025-03-26 13:12:56 - Local assembly for k = 29
2025-03-26 13:12:57 - Extract iterative edges from k = 29 to 39
2025-03-26 13:12:57 - Build graph for k = 39
2025-03-26 13:12:57 - Assemble contigs from SdBG for k = 39
2025-03-26 13:12:57 - Local assembly for k = 39
2025-03-26 13:12:58 - Extract iterative edges from k = 39 to 59
2025-03-26 13:12:58 - Build graph for k = 59
2025-03-26 13:12:58 - Assemble contigs from SdBG for k = 59
2025-03-26 13:12:59 - Local assembly for k = 59
2025-03-26 13:12:59 - Extract iterative edges from k = 59 to 79
2025-03-26 13:12:59 - Build graph for k = 79
2025-03-26 13:12:59 - Assemble contigs from SdBG for k = 79
2025-03-26 13:13:00 - Local assembly for k = 79
2025-03-26 13:13:00 - Extract iterative edges from k = 79 to 99
2025-03-26 13:13:00 - Build graph for k = 99
2025-03-26 13:13:00 - Assemble contigs from SdBG for k = 99
2025-03-26 13:13:01 - Local assembly for k = 99
2025-03-26 13:13:01 - Extract iterative edges from k = 99 to 119
2025-03-26 13:13:01 - Build graph for k = 119
2025-03-26 13:13:01 - Assemble contigs from SdBG for k = 119
2025-03-26 13:13:01 - Local assembly for k = 119
2025-03-26 13:13:02 - Extract iterative edges from k = 119 to 141
2025-03-26 13:13:02 - Build graph for k = 141
2025-03-26 13:13:02 - Assemble contigs from SdBG for k = 141
2025-03-26 13:13:02 - Merging to output final contigs
2025-03-26 13:13:02 - 63 contigs, total 266124 bp, min 354 bp, max 11001 bp, avg 4224 bp, N50 4877 bp
2025-03-26 13:13:02 - ALL DONE. Time elapsed: 9.585234 seconds
Building a SMALL index
Renaming genome.3.bt2.tmp to genome.3.bt2
Renaming genome.4.bt2.tmp to genome.4.bt2
Renaming genome.1.bt2.tmp to genome.1.bt2
Renaming genome.2.bt2.tmp to genome.2.bt2
Renaming genome.rev.1.bt2.tmp to genome.rev.1.bt2
Renaming genome.rev.2.bt2.tmp to genome.rev.2.bt2
9462 reads; of these:
  9462 (100.00%) were paired; of these:
    90 (0.95%) aligned concordantly 0 times
    9300 (98.29%) aligned concordantly exactly 1 time
    72 (0.76%) aligned concordantly >1 times
    ----
    90 pairs aligned concordantly 0 times; of these:
      8 (8.89%) aligned discordantly 1 time
    ----
    82 pairs aligned 0 times concordantly or discordantly; of these:
      164 mates make up the pairs; of these:
        93 (56.71%) aligned 0 times
        70 (42.68%) aligned exactly 1 time
        1 (0.61%) aligned >1 times
99.51% overall alignment rate
Standard Output:
Settings: Output files: "genome.*.bt2" Line rate: 6 (line is 64 bytes) Lines per side: 1 (side is 64 bytes) Offset rate: 4 (one in 16) FTable chars: 10 Strings: unpacked Max bucket size: default Max bucket size, sqrt multiplier: default Max bucket size, len divisor: 4 Difference-cover sample period: 1024 Endianness: little Actual local endianness: little Sanity checking: disabled Assertions: disabled Random seed: 0 Sizeofs: void*:8, int:4, long:8, size_t:8Input files DNA, FASTA: /tmp/tmp4ato2skh/files/6/f/3/dataset_6f3f0843-9fa4-4df8-882c-b0cd443710de.datReading reference sizes Time reading reference sizes: 00:00:00Calculating joined lengthWriting headerReserving space for joined stringJoining reference sequences Time to join reference sequences: 00:00:00bmax according to bmaxDivN setting: 66531Using parameters --bmax 49899 --dcv 1024 Doing ahead-of-time memory usage test Passed! Constructing with these parameters: --bmax 49899 --dcv 1024Constructing suffix-array element generatorBuilding DifferenceCoverSample Building sPrime Building sPrimeOrder V-Sorting samples V-Sorting samples time: 00:00:00 Allocating rank array Ranking v-sort output Ranking v-sort output time: 00:00:00 Invoking Larsson-Sadakane on ranks Invoking Larsson-Sadakane on ranks time: 00:00:00 Sanity-checking and returningBuilding samplesReserving space for 12 sample suffixesGenerating random suffixesQSorting 12 sample offsets, eliminating duplicatesQSorting sample offsets, eliminating duplicates time: 00:00:00Multikey QSorting 12 samples (Using difference cover) Multikey QSorting samples time: 00:00:00Calculating bucket sizesSplitting and merging Splitting and merging time: 00:00:00Split 1, merged 6; iterating...Splitting and merging Splitting and merging time: 00:00:00Avg bucket size: 38016.9 (target: 49898)Converting suffix-array elements to index imageAllocating ftab, absorbFtabEntering Ebwt loopGetting block 1 of 7 Reserving size (49899) for bucket 1 Calculating Z arrays for bucket 1 Entering block accumulator loop for bucket 1: bucket 1: 10% bucket 1: 20% bucket 1: 30% bucket 1: 40% bucket 1: 50% bucket 1: 60% bucket 1: 70% bucket 1: 80% bucket 1: 90% bucket 1: 100% Sorting block of length 18323 for bucket 1 (Using difference cover) Sorting block time: 00:00:00Returning block of 18324 for bucket 1Getting block 2 of 7 Reserving size (49899) for bucket 2 Calculating Z arrays for bucket 2 Entering block accumulator loop for bucket 2: bucket 2: 10% bucket 2: 20% bucket 2: 30% bucket 2: 40% bucket 2: 50% bucket 2: 60% bucket 2: 70% bucket 2: 80% bucket 2: 90% bucket 2: 100% Sorting block of length 49606 for bucket 2 (Using difference cover) Sorting block time: 00:00:00Returning block of 49607 for bucket 2Getting block 3 of 7 Reserving size (49899) for bucket 3 Calculating Z arrays for bucket 3 Entering block accumulator loop for bucket 3: bucket 3: 10% bucket 3: 20% bucket 3: 30% bucket 3: 40% bucket 3: 50% bucket 3: 60% bucket 3: 70% bucket 3: 80% bucket 3: 90% bucket 3: 100% Sorting block of length 45151 for bucket 3 (Using difference cover) Sorting block time: 00:00:00Returning block of 45152 for bucket 3Getting block 4 of 7 Reserving size (49899) for bucket 4 Calculating Z arrays for bucket 4 Entering block accumulator loop for bucket 4: bucket 4: 10% bucket 4: 20% bucket 4: 30% bucket 4: 40% bucket 4: 50% bucket 4: 60% bucket 4: 70% bucket 4: 80% bucket 4: 90% bucket 4: 100% Sorting block of length 49787 for bucket 4 (Using difference cover) Sorting block time: 00:00:00Returning block of 49788 for bucket 4Getting 
block 5 of 7 Reserving size (49899) for bucket 5 Calculating Z arrays for bucket 5 Entering block accumulator loop for bucket 5: bucket 5: 10% bucket 5: 20% bucket 5: 30% bucket 5: 40% bucket 5: 50% bucket 5: 60% bucket 5: 70% bucket 5: 80% bucket 5: 90% bucket 5: 100% Sorting block of length 28638 for bucket 5 (Using difference cover) Sorting block time: 00:00:00Returning block of 28639 for bucket 5Getting block 6 of 7 Reserving size (49899) for bucket 6 Calculating Z arrays for bucket 6 Entering block accumulator loop for bucket 6: bucket 6: 10% bucket 6: 20% bucket 6: 30% bucket 6: 40% bucket 6: 50% bucket 6: 60% bucket 6: 70% bucket 6: 80% bucket 6: 90% bucket 6: 100% Sorting block of length 43194 for bucket 6 (Using difference cover) Sorting block time: 00:00:00Returning block of 43195 for bucket 6Getting block 7 of 7 Reserving size (49899) for bucket 7 Calculating Z arrays for bucket 7 Entering block accumulator loop for bucket 7: bucket 7: 10% bucket 7: 20% bucket 7: 30% bucket 7: 40% bucket 7: 50% bucket 7: 60% bucket 7: 70% bucket 7: 80% bucket 7: 90% bucket 7: 100% Sorting block of length 31419 for bucket 7 (Using difference cover) Sorting block time: 00:00:00Returning block of 31420 for bucket 7Exited Ebwt loopfchr[A]: 0fchr[C]: 84325fchr[G]: 133305fchr[T]: 181307fchr[$]: 266124Exiting Ebwt::buildToDisk()Returning from initFromVectorWrote 4286544 bytes to primary EBWT file: genome.1.bt2.tmpWrote 66536 bytes to secondary EBWT file: genome.2.bt2.tmpRe-opening _in1 and _in2 as input streamsReturning from Ebwt constructorHeaders: len: 266124 bwtLen: 266125 sz: 66531 bwtSz: 66532 lineRate: 6 offRate: 4 offMask: 0xfffffff0 ftabChars: 10 eftabLen: 20 eftabSz: 80 ftabLen: 1048577 ftabSz: 4194308 offsLen: 16633 offsSz: 66532 lineSz: 64 sideSz: 64 sideBwtSz: 48 sideBwtLen: 192 numSides: 1387 numLines: 1387 ebwtTotLen: 88768 ebwtTotSz: 88768 color: 0 reverse: 0Total time for call to driver() for forward index: 00:00:00Reading reference sizes Time reading reference sizes: 00:00:00Calculating joined lengthWriting headerReserving space for joined stringJoining reference sequences Time to join reference sequences: 00:00:00 Time to reverse reference sequence: 00:00:00bmax according to bmaxDivN setting: 66531Using parameters --bmax 49899 --dcv 1024 Doing ahead-of-time memory usage test Passed! 
Constructing with these parameters: --bmax 49899 --dcv 1024Constructing suffix-array element generatorBuilding DifferenceCoverSample Building sPrime Building sPrimeOrder V-Sorting samples V-Sorting samples time: 00:00:00 Allocating rank array Ranking v-sort output Ranking v-sort output time: 00:00:00 Invoking Larsson-Sadakane on ranks Invoking Larsson-Sadakane on ranks time: 00:00:00 Sanity-checking and returningBuilding samplesReserving space for 12 sample suffixesGenerating random suffixesQSorting 12 sample offsets, eliminating duplicatesQSorting sample offsets, eliminating duplicates time: 00:00:00Multikey QSorting 12 samples (Using difference cover) Multikey QSorting samples time: 00:00:00Calculating bucket sizesSplitting and merging Splitting and merging time: 00:00:00Split 1, merged 7; iterating...Splitting and merging Splitting and merging time: 00:00:00Avg bucket size: 44353.2 (target: 49898)Converting suffix-array elements to index imageAllocating ftab, absorbFtabEntering Ebwt loopGetting block 1 of 6 Reserving size (49899) for bucket 1 Calculating Z arrays for bucket 1 Entering block accumulator loop for bucket 1: bucket 1: 10% bucket 1: 20% bucket 1: 30% bucket 1: 40% bucket 1: 50% bucket 1: 60% bucket 1: 70% bucket 1: 80% bucket 1: 90% bucket 1: 100% Sorting block of length 47687 for bucket 1 (Using difference cover) Sorting block time: 00:00:00Returning block of 47688 for bucket 1Getting block 2 of 6 Reserving size (49899) for bucket 2 Calculating Z arrays for bucket 2 Entering block accumulator loop for bucket 2: bucket 2: 10% bucket 2: 20% bucket 2: 30% bucket 2: 40% bucket 2: 50% bucket 2: 60% bucket 2: 70% bucket 2: 80% bucket 2: 90% bucket 2: 100% Sorting block of length 36636 for bucket 2 (Using difference cover) Sorting block time: 00:00:00Returning block of 36637 for bucket 2Getting block 3 of 6 Reserving size (49899) for bucket 3 Calculating Z arrays for bucket 3 Entering block accumulator loop for bucket 3: bucket 3: 10% bucket 3: 20% bucket 3: 30% bucket 3: 40% bucket 3: 50% bucket 3: 60% bucket 3: 70% bucket 3: 80% bucket 3: 90% bucket 3: 100% Sorting block of length 49027 for bucket 3 (Using difference cover) Sorting block time: 00:00:00Returning block of 49028 for bucket 3Getting block 4 of 6 Reserving size (49899) for bucket 4 Calculating Z arrays for bucket 4 Entering block accumulator loop for bucket 4: bucket 4: 10% bucket 4: 20% bucket 4: 30% bucket 4: 40% bucket 4: 50% bucket 4: 60% bucket 4: 70% bucket 4: 80% bucket 4: 90% bucket 4: 100% Sorting block of length 37449 for bucket 4 (Using difference cover) Sorting block time: 00:00:00Returning block of 37450 for bucket 4Getting block 5 of 6 Reserving size (49899) for bucket 5 Calculating Z arrays for bucket 5 Entering block accumulator loop for bucket 5: bucket 5: 10% bucket 5: 20% bucket 5: 30% bucket 5: 40% bucket 5: 50% bucket 5: 60% bucket 5: 70% bucket 5: 80% bucket 5: 90% bucket 5: 100% Sorting block of length 47142 for bucket 5 (Using difference cover) Sorting block time: 00:00:00Returning block of 47143 for bucket 5Getting block 6 of 6 Reserving size (49899) for bucket 6 Calculating Z arrays for bucket 6 Entering block accumulator loop for bucket 6: bucket 6: 10% bucket 6: 20% bucket 6: 30% bucket 6: 40% bucket 6: 50% bucket 6: 60% bucket 6: 70% bucket 6: 80% bucket 6: 90% bucket 6: 100% Sorting block of length 48178 for bucket 6 (Using difference cover) Sorting block time: 00:00:00Returning block of 48179 for bucket 6Exited Ebwt loopfchr[A]: 0fchr[C]: 84325fchr[G]: 133305fchr[T]: 181307fchr[$]: 
266124Exiting Ebwt::buildToDisk()Returning from initFromVectorWrote 4286544 bytes to primary EBWT file: genome.rev.1.bt2.tmpWrote 66536 bytes to secondary EBWT file: genome.rev.2.bt2.tmpRe-opening _in1 and _in2 as input streamsReturning from Ebwt constructorHeaders: len: 266124 bwtLen: 266125 sz: 66531 bwtSz: 66532 lineRate: 6 offRate: 4 offMask: 0xfffffff0 ftabChars: 10 eftabLen: 20 eftabSz: 80 ftabLen: 1048577 ftabSz: 4194308 offsLen: 16633 offsSz: 66532 lineSz: 64 sideSz: 64 sideBwtSz: 48 sideBwtLen: 192 numSides: 1387 numLines: 1387 ebwtTotLen: 88768 ebwtTotSz: 88768 color: 0 reverse: 1Total time for backward call to driver() for mirror index: 00:00:00
2025-04-08 14:05:03 - MEGAHIT v1.2.9
2025-04-08 14:05:03 - Using megahit_core with POPCNT and BMI2 support
2025-04-08 14:05:03 - Convert reads to binary library
2025-04-08 14:05:03 - b'INFO sequence/io/sequence_lib.cpp : 75 - Lib 0 (/tmp/tmpbo76svqj/files/b/2/7/dataset_b27ae4d4-9ab0-4bc6-b3d2-325dc79aa2b6.dat,/tmp/tmpbo76svqj/files/1/3/f/dataset_13fc15e4-5959-4e47-a36d-075c5894ac88.dat): pe, 18924 reads, 150 max length'
2025-04-08 14:05:03 - b'INFO utils/utils.h : 152 - Real: 0.0460\tuser: 0.0401\tsys: 0.0070\tmaxrss: 20616'
2025-04-08 14:05:03 - Start assembly. Number of CPU threads 1
2025-04-08 14:05:03 - k list: 21,29,39,59,79,99,119,141
2025-04-08 14:05:03 - Memory used: 15090086707
2025-04-08 14:05:03 - Extract solid (k+1)-mers for k = 21
2025-04-08 14:05:04 - Build graph for k = 21
2025-04-08 14:05:04 - Assemble contigs from SdBG for k = 21
2025-04-08 14:05:05 - Local assembly for k = 21
2025-04-08 14:05:06 - Extract iterative edges from k = 21 to 29
2025-04-08 14:05:06 - Build graph for k = 29
2025-04-08 14:05:06 - Assemble contigs from SdBG for k = 29
2025-04-08 14:05:06 - Local assembly for k = 29
2025-04-08 14:05:07 - Extract iterative edges from k = 29 to 39
2025-04-08 14:05:07 - Build graph for k = 39
2025-04-08 14:05:07 - Assemble contigs from SdBG for k = 39
2025-04-08 14:05:08 - Local assembly for k = 39
2025-04-08 14:05:09 - Extract iterative edges from k = 39 to 59
2025-04-08 14:05:09 - Build graph for k = 59
2025-04-08 14:05:09 - Assemble contigs from SdBG for k = 59
2025-04-08 14:05:09 - Local assembly for k = 59
2025-04-08 14:05:10 - Extract iterative edges from k = 59 to 79
2025-04-08 14:05:10 - Build graph for k = 79
2025-04-08 14:05:10 - Assemble contigs from SdBG for k = 79
2025-04-08 14:05:10 - Local assembly for k = 79
2025-04-08 14:05:11 - Extract iterative edges from k = 79 to 99
2025-04-08 14:05:11 - Build graph for k = 99
2025-04-08 14:05:11 - Assemble contigs from SdBG for k = 99
2025-04-08 14:05:11 - Local assembly for k = 99
2025-04-08 14:05:11 - Extract iterative edges from k = 99 to 119
2025-04-08 14:05:11 - Build graph for k = 119
2025-04-08 14:05:12 - Assemble contigs from SdBG for k = 119
2025-04-08 14:05:12 - Local assembly for k = 119
2025-04-08 14:05:12 - Extract iterative edges from k = 119 to 141
2025-04-08 14:05:12 - Build graph for k = 141
2025-04-08 14:05:12 - Assemble contigs from SdBG for k = 141
2025-04-08 14:05:13 - Merging to output final contigs
2025-04-08 14:05:13 - 63 contigs, total 266124 bp, min 354 bp, max 11001 bp, avg 4224 bp, N50 4877 bp
2025-04-08 14:05:13 - ALL DONE. Time elapsed: 9.814435 seconds
echo 50contig_reads && ln -s '/tmp/tmpbo76svqj/files/b/2/7/dataset_b27ae4d4-9ab0-4bc6-b3d2-325dc79aa2b6.dat' 'pe1-50contig_reads.fastqsanger.gz' && ln -s '/tmp/tmpbo76svqj/files/1/3/f/dataset_13fc15e4-5959-4e47-a36d-075c5894ac88.dat' 'pe2-50contig_reads.fastqsanger.gz' && metaquast --pe1 'pe1-50contig_reads.fastqsanger.gz' --pe2 'pe2-50contig_reads.fastqsanger.gz' --labels '50contig_reads' -o 'outputdir' --max-ref-num 0 --min-identity 90.0 --min-contig 500 --min-alignment 65 --ambiguity-usage 'one' --ambiguity-score 0.99 --local-mis-size 200 --contig-thresholds '0,1000' --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500 --x-for-Nx 90 '/tmp/tmpbo76svqj/files/1/8/b/dataset_18b41470-af6e-41fc-8ca4-51e94f07d355.dat' --threads ${GALAXY_SLOTS:-1} && if [[ -f "outputdir/report.tsv" ]]; then mkdir -p "outputdir/combined_reference/" && cp "outputdir/report.tsv" "outputdir/combined_reference/report.tsv"; fi && if [[ -f "outputdir/report.html" ]]; then mkdir -p "outputdir/combined_reference/" && cp outputdir/*.html "outputdir/combined_reference/"; fi && mkdir -p '/tmp/tmpbo76svqj/job_working_directory/000/11/outputs/dataset_ee0c24d5-377a-4588-8297-6744fc1cabb7_files' && cp outputdir/combined_reference/*.html '/tmp/tmpbo76svqj/job_working_directory/000/11/outputs/dataset_ee0c24d5-377a-4588-8297-6744fc1cabb7_files' && if [[ -d "outputdir/icarus_viewers" ]]; then cp -R outputdir/icarus_viewers 'outputdir/combined_reference/'; fi && if [[ -d "outputdir/combined_reference/icarus_viewers" ]]; then cp -R outputdir/combined_reference/icarus_viewers '/tmp/tmpbo76svqj/job_working_directory/000/11/outputs/dataset_ee0c24d5-377a-4588-8297-6744fc1cabb7_files'; fi && if [[ -d "outputdir/krona_charts/" ]]; then mkdir -p 'None' && cp outputdir/krona_charts/*.html 'None'; fi
Exit Code:
0
Standard Output:
50contig_reads/usr/local/opt/quast-5.3.0/metaquast.py --pe1 pe1-50contig_reads.fastqsanger.gz --pe2 pe2-50contig_reads.fastqsanger.gz --labels 50contig_reads -o outputdir --max-ref-num 0 --min-identity 90.0 --min-contig 500 --min-alignment 65 --ambiguity-usage one --ambiguity-score 0.99 --local-mis-size 200 --contig-thresholds 0,1000 --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500 --x-for-Nx 90 /tmp/tmpbo76svqj/files/1/8/b/dataset_18b41470-af6e-41fc-8ca4-51e94f07d355.dat --threads 1Version: 5.3.0System information: OS: Linux-6.8.0-1021-azure-x86_64-with-glibc2.36 (linux_64) Python version: 3.12.3 CPUs number: 4Started: 2025-04-08 14:06:14Logging to /tmp/tmpbo76svqj/job_working_directory/000/11/working/outputdir/metaquast.logWARNING: --ambiguity-usage was set to 'all' because not default --ambiguity-score was specifiedINFO generated new fontManagerINFO generated new fontManagerContigs: Pre-processing... /tmp/tmpbo76svqj/files/1/8/b/dataset_18b41470-af6e-41fc-8ca4-51e94f07d355.dat ==> 50contig_readsNOTICE: Maximum number of references (--max-ref-number) is set to 0, search in SILVA 16S rRNA database is disabledNOTICE: No references are provided, starting regular QUAST with MetaGeneMark gene finder/usr/local/opt/quast-5.3.0/quast.py --pe1 pe1-50contig_reads.fastqsanger.gz --pe2 pe2-50contig_reads.fastqsanger.gz --min-identity 90.0 --min-contig 500 --min-alignment 65 --ambiguity-usage one --ambiguity-score 0.99 --local-mis-size 200 --contig-thresholds 0,1000 --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500 --x-for-Nx 90 --threads 1 /tmp/tmpbo76svqj/files/1/8/b/dataset_18b41470-af6e-41fc-8ca4-51e94f07d355.dat -o /tmp/tmpbo76svqj/job_working_directory/000/11/working/outputdir --labels 50contig_readsVersion: 5.3.0System information: OS: Linux-6.8.0-1021-azure-x86_64-with-glibc2.36 (linux_64) Python version: 3.12.3 CPUs number: 4Started: 2025-04-08 14:06:15Logging to /tmp/tmpbo76svqj/job_working_directory/000/11/working/outputdir/quast.logNOTICE: Output directory already exists and looks like a QUAST output dir. Existing results can be reused (e.g. previously generated alignments)!WARNING: --ambiguity-usage was set to 'all' because not default --ambiguity-score was specifiedCWD: /tmp/tmpbo76svqj/job_working_directory/000/11/workingMain parameters: MODE: meta, threads: 1, min contig length: 500, min alignment length: 65, min alignment IDY: 90.0, \ ambiguity: all, min local misassembly length: 200, min extensive misassembly length: 1000Contigs: Pre-processing... /tmp/tmpbo76svqj/files/1/8/b/dataset_18b41470-af6e-41fc-8ca4-51e94f07d355.dat ==> 50contig_reads2025-04-08 14:06:15Running Reads analyzer...NOTICE: Permission denied accessing /usr/local/lib/python3.12/site-packages/quast_libs/gridss. GRIDSS will be downloaded to home directory /tmp/tmpbo76svqj/job_working_directory/000/11/home/.quastDownloading gridss (file: gridss-1.4.1.jar)... 
[download progress ticks, 0.0%–99.0% of 38935087 bytes, omitted] gridss successfully downloaded! Logging to files /tmp/tmpbo76svqj/job_working_directory/000/11/working/outputdir/reads_stats/reads_stats.log and /tmp/tmpbo76svqj/job_working_directory/000/11/working/outputdir/reads_stats/reads_stats.err... Pre-processing reads... Running BWA... Done. Sorting SAM-file... Analysis is finished. Creating total report... saved to /tmp/tmpbo76svqj/job_working_directory/000/11/working/outputdir/reads_stats/reads_report.txt, reads_report.tsv, and reads_report.tex Done. 2025-04-08 14:06:20 Running Basic statistics processor... Contig files: 50contig_reads Calculating N50 and L50... 50contig_reads, N50 = 4877, L50 = 22, auN = 5187.8, Total length = 265405, GC % = 36.45, # N's per 100 kbp = 0.00 Drawing Nx plot... saved to /tmp/tmpbo76svqj/job_working_directory/000/11/working/outputdir/basic_stats/Nx_plot.pdf Drawing cumulative plot... saved to /tmp/tmpbo76svqj/job_working_directory/000/11/working/outputdir/basic_stats/cumulative_plot.pdf Drawing GC content plot... 
saved to /tmp/tmpbo76svqj/job_working_directory/000/11/working/outputdir/basic_stats/GC_content_plot.pdf Drawing 50contig_reads GC content plot... saved to /tmp/tmpbo76svqj/job_working_directory/000/11/working/outputdir/basic_stats/50contig_reads_GC_content_plot.pdfDone.NOTICE: Genes are not predicted by default. Use --gene-finding or --glimmer option to enable it.2025-04-08 14:06:21Creating large visual summaries...This may take a while: press Ctrl-C to skip this step.. 1 of 2: Creating PDF with all tables and plots... 2 of 2: Creating Icarus viewers...Done2025-04-08 14:06:21RESULTS: Text versions of total report are saved to /tmp/tmpbo76svqj/job_working_directory/000/11/working/outputdir/report.txt, report.tsv, and report.tex Text versions of transposed total report are saved to /tmp/tmpbo76svqj/job_working_directory/000/11/working/outputdir/transposed_report.txt, transposed_report.tsv, and transposed_report.tex HTML version (interactive tables and plots) is saved to /tmp/tmpbo76svqj/job_working_directory/000/11/working/outputdir/report.html PDF version (tables and plots) is saved to /tmp/tmpbo76svqj/job_working_directory/000/11/working/outputdir/report.pdf Icarus (contig browser) is saved to /tmp/tmpbo76svqj/job_working_directory/000/11/working/outputdir/icarus.html Log is saved to /tmp/tmpbo76svqj/job_working_directory/000/11/working/outputdir/quast.logFinished: 2025-04-08 14:06:21Elapsed time: 0:00:06.436382NOTICEs: 3; WARNINGs: 1; non-fatal ERRORs: 0Thank you for using QUAST!
Building a SMALL index
Renaming genome.3.bt2.tmp to genome.3.bt2
Renaming genome.4.bt2.tmp to genome.4.bt2
Renaming genome.1.bt2.tmp to genome.1.bt2
Renaming genome.2.bt2.tmp to genome.2.bt2
Renaming genome.rev.1.bt2.tmp to genome.rev.1.bt2
Renaming genome.rev.2.bt2.tmp to genome.rev.2.bt2
9462 reads; of these:
  9462 (100.00%) were paired; of these:
    90 (0.95%) aligned concordantly 0 times
    9300 (98.29%) aligned concordantly exactly 1 time
    72 (0.76%) aligned concordantly >1 times
    ----
    90 pairs aligned concordantly 0 times; of these:
      8 (8.89%) aligned discordantly 1 time
    ----
    82 pairs aligned 0 times concordantly or discordantly; of these:
      164 mates make up the pairs; of these:
        93 (56.71%) aligned 0 times
        70 (42.68%) aligned exactly 1 time
        1 (0.61%) aligned >1 times
99.51% overall alignment rate
Standard Output:
Settings: Output files: "genome.*.bt2" Line rate: 6 (line is 64 bytes) Lines per side: 1 (side is 64 bytes) Offset rate: 4 (one in 16) FTable chars: 10 Strings: unpacked Max bucket size: default Max bucket size, sqrt multiplier: default Max bucket size, len divisor: 4 Difference-cover sample period: 1024 Endianness: little Actual local endianness: little Sanity checking: disabled Assertions: disabled Random seed: 0 Sizeofs: void*:8, int:4, long:8, size_t:8Input files DNA, FASTA: /tmp/tmpbo76svqj/files/1/8/b/dataset_18b41470-af6e-41fc-8ca4-51e94f07d355.datReading reference sizes Time reading reference sizes: 00:00:00Calculating joined lengthWriting headerReserving space for joined stringJoining reference sequences Time to join reference sequences: 00:00:00bmax according to bmaxDivN setting: 66531Using parameters --bmax 49899 --dcv 1024 Doing ahead-of-time memory usage test Passed! Constructing with these parameters: --bmax 49899 --dcv 1024Constructing suffix-array element generatorBuilding DifferenceCoverSample Building sPrime Building sPrimeOrder V-Sorting samples V-Sorting samples time: 00:00:00 Allocating rank array Ranking v-sort output Ranking v-sort output time: 00:00:00 Invoking Larsson-Sadakane on ranks Invoking Larsson-Sadakane on ranks time: 00:00:00 Sanity-checking and returningBuilding samplesReserving space for 12 sample suffixesGenerating random suffixesQSorting 12 sample offsets, eliminating duplicatesQSorting sample offsets, eliminating duplicates time: 00:00:00Multikey QSorting 12 samples (Using difference cover) Multikey QSorting samples time: 00:00:00Calculating bucket sizesSplitting and merging Splitting and merging time: 00:00:00Split 1, merged 6; iterating...Splitting and merging Splitting and merging time: 00:00:00Avg bucket size: 38016.9 (target: 49898)Converting suffix-array elements to index imageAllocating ftab, absorbFtabEntering Ebwt loopGetting block 1 of 7 Reserving size (49899) for bucket 1 Calculating Z arrays for bucket 1 Entering block accumulator loop for bucket 1: bucket 1: 10% bucket 1: 20% bucket 1: 30% bucket 1: 40% bucket 1: 50% bucket 1: 60% bucket 1: 70% bucket 1: 80% bucket 1: 90% bucket 1: 100% Sorting block of length 18323 for bucket 1 (Using difference cover) Sorting block time: 00:00:00Returning block of 18324 for bucket 1Getting block 2 of 7 Reserving size (49899) for bucket 2 Calculating Z arrays for bucket 2 Entering block accumulator loop for bucket 2: bucket 2: 10% bucket 2: 20% bucket 2: 30% bucket 2: 40% bucket 2: 50% bucket 2: 60% bucket 2: 70% bucket 2: 80% bucket 2: 90% bucket 2: 100% Sorting block of length 49606 for bucket 2 (Using difference cover) Sorting block time: 00:00:00Returning block of 49607 for bucket 2Getting block 3 of 7 Reserving size (49899) for bucket 3 Calculating Z arrays for bucket 3 Entering block accumulator loop for bucket 3: bucket 3: 10% bucket 3: 20% bucket 3: 30% bucket 3: 40% bucket 3: 50% bucket 3: 60% bucket 3: 70% bucket 3: 80% bucket 3: 90% bucket 3: 100% Sorting block of length 45151 for bucket 3 (Using difference cover) Sorting block time: 00:00:00Returning block of 45152 for bucket 3Getting block 4 of 7 Reserving size (49899) for bucket 4 Calculating Z arrays for bucket 4 Entering block accumulator loop for bucket 4: bucket 4: 10% bucket 4: 20% bucket 4: 30% bucket 4: 40% bucket 4: 50% bucket 4: 60% bucket 4: 70% bucket 4: 80% bucket 4: 90% bucket 4: 100% Sorting block of length 49787 for bucket 4 (Using difference cover) Sorting block time: 00:00:00Returning block of 49788 for bucket 4Getting 
block 5 of 7 Reserving size (49899) for bucket 5 Calculating Z arrays for bucket 5 Entering block accumulator loop for bucket 5: bucket 5: 10% bucket 5: 20% bucket 5: 30% bucket 5: 40% bucket 5: 50% bucket 5: 60% bucket 5: 70% bucket 5: 80% bucket 5: 90% bucket 5: 100% Sorting block of length 28638 for bucket 5 (Using difference cover) Sorting block time: 00:00:00Returning block of 28639 for bucket 5Getting block 6 of 7 Reserving size (49899) for bucket 6 Calculating Z arrays for bucket 6 Entering block accumulator loop for bucket 6: bucket 6: 10% bucket 6: 20% bucket 6: 30% bucket 6: 40% bucket 6: 50% bucket 6: 60% bucket 6: 70% bucket 6: 80% bucket 6: 90% bucket 6: 100% Sorting block of length 43194 for bucket 6 (Using difference cover) Sorting block time: 00:00:00Returning block of 43195 for bucket 6Getting block 7 of 7 Reserving size (49899) for bucket 7 Calculating Z arrays for bucket 7 Entering block accumulator loop for bucket 7: bucket 7: 10% bucket 7: 20% bucket 7: 30% bucket 7: 40% bucket 7: 50% bucket 7: 60% bucket 7: 70% bucket 7: 80% bucket 7: 90% bucket 7: 100% Sorting block of length 31419 for bucket 7 (Using difference cover) Sorting block time: 00:00:00Returning block of 31420 for bucket 7Exited Ebwt loopfchr[A]: 0fchr[C]: 84325fchr[G]: 133305fchr[T]: 181307fchr[$]: 266124Exiting Ebwt::buildToDisk()Returning from initFromVectorWrote 4286544 bytes to primary EBWT file: genome.1.bt2.tmpWrote 66536 bytes to secondary EBWT file: genome.2.bt2.tmpRe-opening _in1 and _in2 as input streamsReturning from Ebwt constructorHeaders: len: 266124 bwtLen: 266125 sz: 66531 bwtSz: 66532 lineRate: 6 offRate: 4 offMask: 0xfffffff0 ftabChars: 10 eftabLen: 20 eftabSz: 80 ftabLen: 1048577 ftabSz: 4194308 offsLen: 16633 offsSz: 66532 lineSz: 64 sideSz: 64 sideBwtSz: 48 sideBwtLen: 192 numSides: 1387 numLines: 1387 ebwtTotLen: 88768 ebwtTotSz: 88768 color: 0 reverse: 0Total time for call to driver() for forward index: 00:00:00Reading reference sizes Time reading reference sizes: 00:00:00Calculating joined lengthWriting headerReserving space for joined stringJoining reference sequences Time to join reference sequences: 00:00:00 Time to reverse reference sequence: 00:00:00bmax according to bmaxDivN setting: 66531Using parameters --bmax 49899 --dcv 1024 Doing ahead-of-time memory usage test Passed! 
Constructing with these parameters: --bmax 49899 --dcv 1024Constructing suffix-array element generatorBuilding DifferenceCoverSample Building sPrime Building sPrimeOrder V-Sorting samples V-Sorting samples time: 00:00:00 Allocating rank array Ranking v-sort output Ranking v-sort output time: 00:00:00 Invoking Larsson-Sadakane on ranks Invoking Larsson-Sadakane on ranks time: 00:00:00 Sanity-checking and returningBuilding samplesReserving space for 12 sample suffixesGenerating random suffixesQSorting 12 sample offsets, eliminating duplicatesQSorting sample offsets, eliminating duplicates time: 00:00:00Multikey QSorting 12 samples (Using difference cover) Multikey QSorting samples time: 00:00:00Calculating bucket sizesSplitting and merging Splitting and merging time: 00:00:00Split 1, merged 7; iterating...Splitting and merging Splitting and merging time: 00:00:00Avg bucket size: 44353.2 (target: 49898)Converting suffix-array elements to index imageAllocating ftab, absorbFtabEntering Ebwt loopGetting block 1 of 6 Reserving size (49899) for bucket 1 Calculating Z arrays for bucket 1 Entering block accumulator loop for bucket 1: bucket 1: 10% bucket 1: 20% bucket 1: 30% bucket 1: 40% bucket 1: 50% bucket 1: 60% bucket 1: 70% bucket 1: 80% bucket 1: 90% bucket 1: 100% Sorting block of length 47687 for bucket 1 (Using difference cover) Sorting block time: 00:00:00Returning block of 47688 for bucket 1Getting block 2 of 6 Reserving size (49899) for bucket 2 Calculating Z arrays for bucket 2 Entering block accumulator loop for bucket 2: bucket 2: 10% bucket 2: 20% bucket 2: 30% bucket 2: 40% bucket 2: 50% bucket 2: 60% bucket 2: 70% bucket 2: 80% bucket 2: 90% bucket 2: 100% Sorting block of length 36636 for bucket 2 (Using difference cover) Sorting block time: 00:00:00Returning block of 36637 for bucket 2Getting block 3 of 6 Reserving size (49899) for bucket 3 Calculating Z arrays for bucket 3 Entering block accumulator loop for bucket 3: bucket 3: 10% bucket 3: 20% bucket 3: 30% bucket 3: 40% bucket 3: 50% bucket 3: 60% bucket 3: 70% bucket 3: 80% bucket 3: 90% bucket 3: 100% Sorting block of length 49027 for bucket 3 (Using difference cover) Sorting block time: 00:00:00Returning block of 49028 for bucket 3Getting block 4 of 6 Reserving size (49899) for bucket 4 Calculating Z arrays for bucket 4 Entering block accumulator loop for bucket 4: bucket 4: 10% bucket 4: 20% bucket 4: 30% bucket 4: 40% bucket 4: 50% bucket 4: 60% bucket 4: 70% bucket 4: 80% bucket 4: 90% bucket 4: 100% Sorting block of length 37449 for bucket 4 (Using difference cover) Sorting block time: 00:00:00Returning block of 37450 for bucket 4Getting block 5 of 6 Reserving size (49899) for bucket 5 Calculating Z arrays for bucket 5 Entering block accumulator loop for bucket 5: bucket 5: 10% bucket 5: 20% bucket 5: 30% bucket 5: 40% bucket 5: 50% bucket 5: 60% bucket 5: 70% bucket 5: 80% bucket 5: 90% bucket 5: 100% Sorting block of length 47142 for bucket 5 (Using difference cover) Sorting block time: 00:00:00Returning block of 47143 for bucket 5Getting block 6 of 6 Reserving size (49899) for bucket 6 Calculating Z arrays for bucket 6 Entering block accumulator loop for bucket 6: bucket 6: 10% bucket 6: 20% bucket 6: 30% bucket 6: 40% bucket 6: 50% bucket 6: 60% bucket 6: 70% bucket 6: 80% bucket 6: 90% bucket 6: 100% Sorting block of length 48178 for bucket 6 (Using difference cover) Sorting block time: 00:00:00Returning block of 48179 for bucket 6Exited Ebwt loopfchr[A]: 0fchr[C]: 84325fchr[G]: 133305fchr[T]: 181307fchr[$]: 
266124Exiting Ebwt::buildToDisk()Returning from initFromVectorWrote 4286544 bytes to primary EBWT file: genome.rev.1.bt2.tmpWrote 66536 bytes to secondary EBWT file: genome.rev.2.bt2.tmpRe-opening _in1 and _in2 as input streamsReturning from Ebwt constructorHeaders: len: 266124 bwtLen: 266125 sz: 66531 bwtSz: 66532 lineRate: 6 offRate: 4 offMask: 0xfffffff0 ftabChars: 10 eftabLen: 20 eftabSz: 80 ftabLen: 1048577 ftabSz: 4194308 offsLen: 16633 offsSz: 66532 lineSz: 64 sideSz: 64 sideBwtSz: 48 sideBwtLen: 192 numSides: 1387 numLines: 1387 ebwtTotLen: 88768 ebwtTotSz: 88768 color: 0 reverse: 1Total time for backward call to driver() for mirror index: 00:00:00
2025-04-08 14:07:04 7565e4633be5 SemiBin[10] INFO Binning for short_read2025-04-08 14:07:09 7565e4633be5 SemiBin[10] INFO Did not detect GPU, using CPU.2025-04-08 14:07:09 7565e4633be5 SemiBin[10] INFO Generating training data...2025-04-08 14:07:10 7565e4633be5 SemiBin[10] INFO Calculating coverage for every sample.2025-04-08 14:07:10 7565e4633be5 SemiBin[10] INFO Processed: 50contig_reads.bam2025-04-08 14:07:11 7565e4633be5 SemiBin[10] INFO Start binning.2025-04-08 14:07:13 7565e4633be5 SemiBin[10] INFO Number of bins prior to reclustering: 12025-04-08 14:07:13 7565e4633be5 SemiBin[10] INFO Running naive ORF finder2025-04-08 14:07:14 7565e4633be5 SemiBin[10] INFO Number of bins after reclustering: 12025-04-08 14:07:14 7565e4633be5 SemiBin[10] INFO Binning finished
Standard Output:
If you find SemiBin useful, please cite: Pan, S.; Zhu, C.; Zhao, XM.; Coelho, LP. A deep siamese neural network improves metagenome-assembled genomes in microbiome datasets across different environments. Nat Commun 13, 2326 (2022). https://doi.org/10.1038/s41467-022-29843-y Pan, S.; Zhao, XM; Coelho, LP. SemiBin2: self-supervised contrastive learning leads to better MAGs for short- and long-read sequencing. Bioinformatics Volume 39, Issue Supplement_1, June 2023, Pages i21–i29. https://doi.org/10.1093/bioinformatics/btad209output50contig_reads.bam_0_data_cov.csvSemiBinRun.logcontig_bins.tsvdata.csvdata_split.csvoutput_binsrecluster_bins_info.tsv
Output depth matrix to /tmp/tmpbo76svqj/job_working_directory/000/17/outputs/dataset_3baebf7c-d0ea-4486-a616-e20b7ce0cefb.datMinimum percent identity for a mapped read: 0.97minMapQual: 0weightMapQual: 0Edge bases will be included up to 75 basesshredLength: 16000shredDepth: 5minContigLength: 1minContigDepth: 0jgi_summarize_bam_contig_depths 2.17 (Bioconda) 2024-12-15T06:34:17Running with 4 threads to save memory you can reduce the number of threads with the OMP_NUM_THREADS variableOutput matrix to /tmp/tmpbo76svqj/job_working_directory/000/17/outputs/dataset_3baebf7c-d0ea-4486-a616-e20b7ce0cefb.datOpening all bam files and validating headersProcessing bam files with largest_contig=0Thread 0 opening and reading the header for file: /tmp/tmpbo76svqj/files/7/a/8/dataset_7a8425b7-a209-4210-8bb2-d58c3044d20e.datThread 0 opened the file: /tmp/tmpbo76svqj/files/7/a/8/dataset_7a8425b7-a209-4210-8bb2-d58c3044d20e.datThread 0 processing bam 0: dataset_7a8425b7-a209-4210-8bb2-d58c3044d20e.datThread 0 finished reading bam 0: dataset_7a8425b7-a209-4210-8bb2-d58c3044d20e.datThread 0 finished: dataset_7a8425b7-a209-4210-8bb2-d58c3044d20e.dat with 18924 reads and 8473 readsWellMapped (44.7738%)Creating depth matrix file: /tmp/tmpbo76svqj/job_working_directory/000/17/outputs/dataset_3baebf7c-d0ea-4486-a616-e20b7ce0cefb.datClosing last bam fileFinished
WARNING:root:CONCOCT is running in single threaded mode. Please, consider adjusting the --threads parameter.Up and running. Check /tmp/tmpbo76svqj/job_working_directory/000/18/working/outdir/log.txt for progressSetting 1 OMP threadsGenerate input data0,-32769.159419,695.8618811,-23702.908064,9066.2513542,-10301.271601,13401.6364633,-9099.561931,1201.7096704,-8491.920398,607.6415335,-8491.793887,0.1265126,-8491.793444,0.0004437,-8491.793370,0.000074
Attaching package: ‘gplots’The following object is masked from ‘package:stats’: lowess
Standard Output:
MaxBin 2.2.7Input contig: /tmp/tmpbo76svqj/files/1/8/b/dataset_18b41470-af6e-41fc-8ca4-51e94f07d355.datout header: outMin contig length: 1000Max iteration: 50Probability threshold: 0.5Thread: 1Located abundance file [/tmp/tmpbo76svqj/files/3/b/a/dataset_3baebf7c-d0ea-4486-a616-e20b7ce0cefb.dat]Searching against 107 marker genes to find starting seed contigs for [/tmp/tmpbo76svqj/files/1/8/b/dataset_18b41470-af6e-41fc-8ca4-51e94f07d355.dat]...Running FragGeneScan....Running HMMER hmmsearch....Try harder to dig out marker genes from contigs.Done data collection. Running MaxBin...Command: /usr/local/opt/MaxBin-2.2.7/src/MaxBin -fasta out.contig.tmp -abund out.contig.tmp.abund1 -seed out.seed -out out -min_contig_length 1000 -max_run 50 -prob_threshold 0.5 Minimum contig length set to 1000.Reading seed list...Looking for seeds in sequences. k141_52 [11001.000000] k141_59 [9465.000000]Get 2 seeds.Start EM process.Iteration 1Iteration 2Iteration 3Iteration 4Iteration 5Iteration 6Iteration 7Iteration 8Iteration 9Iteration 10Iteration 11Iteration 12Iteration 13EM finishes successfully.Classifying sequences based on the EM result.Minimum probability for binning: 0.50Ignoring 0 bins without any sequences.Number of unclassified sequences: 0 (0.00%)Elapsed time: 0 days 00:00:00Rscript /usr/local/opt/MaxBin-2.2.7/heatmap.r out.marker out.marker.pdfnull device 1 out.001.marker.fastaout.002.marker.fastaDeleting intermediate files.========== Job finished ==========Yielded 2 bins for contig (scaffold) file /tmp/tmpbo76svqj/files/1/8/b/dataset_18b41470-af6e-41fc-8ca4-51e94f07d355.datHere are the output files for this run.Please refer to the README file for further details.Summary file: out.summaryMarker counts: out.markerMarker genes for each bin: out.marker_of_each_gene.tar.gzBin files: out.001.fasta - out.002.fastaUnbinned sequences: out.noclassMarker plot: out.marker.pdf========== Elapsed Time ==========0 hours 0 minutes and 2 seconds.
[Warning!] Negative coverage depth is not allowed for the contig k141_0, column 1: -4.30218e+08[Warning!] Negative coverage depth is not allowed for the contig k141_52, column 1: -2.7651e+08
[04/08/2025 02:56:58 PM] INFO: Running CheckM2 version 1.0.2[04/08/2025 02:56:58 PM] INFO: Custom database path provided for predict run. Checking database at /cvmfs/data.galaxyproject.org/byhand/checkm2/1.0.2/uniref100.KO.1.dmnd...[04/08/2025 02:57:04 PM] INFO: Running quality prediction workflow with 1 threads.[04/08/2025 02:57:05 PM] INFO: Calling genes in 4 bins with 1 threads:[04/08/2025 02:57:07 PM] INFO: Calculating metadata for 4 bins with 1 threads:[04/08/2025 02:57:07 PM] INFO: Annotating input genomes with DIAMOND using 1 threads[04/08/2025 02:59:54 PM] INFO: Processing DIAMOND output[04/08/2025 02:59:54 PM] INFO: Predicting completeness and contamination using ML models.[04/08/2025 02:59:59 PM] INFO: Parsing all results and constructing final output table.[04/08/2025 02:59:59 PM] INFO: CheckM2 finished successfully.
Standard Output:
Finished processing 1 of 4 (25.00%) bins. Finished processing 2 of 4 (50.00%) bins. Finished processing 3 of 4 (75.00%) bins. Finished processing 4 of 4 (100.00%) bins. Finished processing 1 of 4 (25.00%) bin metadata. Finished processing 2 of 4 (50.00%) bin metadata. Finished processing 3 of 4 (75.00%) bin metadata. Finished processing 4 of 4 (100.00%) bin metadata.
Output with path /tmp/tmpfh4xfkb2/Quast on data 12, data 11, and data 20 HTML report for combined reference genome__a81ec982-15db-4182-89f1-1c9dd6c0dfdd different than expected
Expected file size of 363000+-5000 found 372079
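The failing check above is the byte-size comparison on the combined QUAST HTML report: the test recorded 363000 ± 5000 bytes, but this run produced 372079. Rather than pinning the size that tightly, the test could use a much wider tolerance on that output. A minimal sketch of the relevant test stanza follows; the output label quast_combined_report is an assumed name (use whatever label the workflow actually exposes), and the job inputs are omitted.

- doc: Loose size check on the QUAST HTML report
  job: {}   # reuse the inputs of the existing test case (omitted in this sketch)
  outputs:
    quast_combined_report:   # assumed label for the QUAST HTML output
      asserts:
        has_size:
          value: 363000
          delta: 30000   # wide tolerance so small report drift does not fail the run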
2025-04-10 08:34:46 - MEGAHIT v1.2.92025-04-10 08:34:46 - Using megahit_core with POPCNT and BMI2 support2025-04-10 08:34:46 - Convert reads to binary library2025-04-10 08:34:46 - b'INFO sequence/io/sequence_lib.cpp : 75 - Lib 0 (/tmp/tmprt3848zx/files/2/8/2/dataset_28228683-09f8-4a16-a632-dc47b6787386.dat,/tmp/tmprt3848zx/files/3/a/1/dataset_3a16e83b-8f5d-4f72-9525-9611cef44b4e.dat): pe, 18924 reads, 150 max length'2025-04-10 08:34:46 - b'INFO utils/utils.h : 152 - Real: 0.0554\tuser: 0.0492\tsys: 0.0069\tmaxrss: 20844'2025-04-10 08:34:46 - Start assembly. Number of CPU threads 1 2025-04-10 08:34:46 - k list: 21,29,39,59,79,99,119,141 2025-04-10 08:34:46 - Memory used: 150900903932025-04-10 08:34:46 - Extract solid (k+1)-mers for k = 21 2025-04-10 08:34:47 - Build graph for k = 21 2025-04-10 08:34:47 - Assemble contigs from SdBG for k = 212025-04-10 08:34:47 - Local assembly for k = 212025-04-10 08:34:48 - Extract iterative edges from k = 21 to 29 2025-04-10 08:34:48 - Build graph for k = 29 2025-04-10 08:34:49 - Assemble contigs from SdBG for k = 292025-04-10 08:34:49 - Local assembly for k = 292025-04-10 08:34:50 - Extract iterative edges from k = 29 to 39 2025-04-10 08:34:50 - Build graph for k = 39 2025-04-10 08:34:50 - Assemble contigs from SdBG for k = 392025-04-10 08:34:51 - Local assembly for k = 392025-04-10 08:34:51 - Extract iterative edges from k = 39 to 59 2025-04-10 08:34:51 - Build graph for k = 59 2025-04-10 08:34:52 - Assemble contigs from SdBG for k = 592025-04-10 08:34:52 - Local assembly for k = 592025-04-10 08:34:52 - Extract iterative edges from k = 59 to 79 2025-04-10 08:34:52 - Build graph for k = 79 2025-04-10 08:34:53 - Assemble contigs from SdBG for k = 792025-04-10 08:34:53 - Local assembly for k = 792025-04-10 08:34:53 - Extract iterative edges from k = 79 to 99 2025-04-10 08:34:53 - Build graph for k = 99 2025-04-10 08:34:54 - Assemble contigs from SdBG for k = 992025-04-10 08:34:54 - Local assembly for k = 992025-04-10 08:34:54 - Extract iterative edges from k = 99 to 119 2025-04-10 08:34:54 - Build graph for k = 119 2025-04-10 08:34:54 - Assemble contigs from SdBG for k = 1192025-04-10 08:34:55 - Local assembly for k = 1192025-04-10 08:34:55 - Extract iterative edges from k = 119 to 141 2025-04-10 08:34:55 - Build graph for k = 141 2025-04-10 08:34:55 - Assemble contigs from SdBG for k = 1412025-04-10 08:34:55 - Merging to output final contigs 2025-04-10 08:34:55 - 63 contigs, total 266124 bp, min 354 bp, max 11001 bp, avg 4224 bp, N50 4877 bp2025-04-10 08:34:55 - ALL DONE. Time elapsed: 9.692088 seconds
echo 50contig_reads && ln -s '/tmp/tmprt3848zx/files/2/8/2/dataset_28228683-09f8-4a16-a632-dc47b6787386.dat' 'pe1-50contig_reads.fastqsanger.gz' && ln -s '/tmp/tmprt3848zx/files/3/a/1/dataset_3a16e83b-8f5d-4f72-9525-9611cef44b4e.dat' 'pe2-50contig_reads.fastqsanger.gz' && metaquast --pe1 'pe1-50contig_reads.fastqsanger.gz' --pe2 'pe2-50contig_reads.fastqsanger.gz' --labels '50contig_reads' -o 'outputdir' --max-ref-num 0 --min-identity 90.0 --min-contig 500 --min-alignment 65 --ambiguity-usage 'one' --ambiguity-score 0.99 --local-mis-size 200 --contig-thresholds '0,1000' --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500 --x-for-Nx 90 '/tmp/tmprt3848zx/files/f/5/6/dataset_f56caa53-2f68-4615-b2bf-37274711c329.dat' --threads ${GALAXY_SLOTS:-1} && if [[ -f "outputdir/report.tsv" ]]; then mkdir -p "outputdir/combined_reference/" && cp "outputdir/report.tsv" "outputdir/combined_reference/report.tsv"; fi && if [[ -f "outputdir/report.html" ]]; then mkdir -p "outputdir/combined_reference/" && cp outputdir/*.html "outputdir/combined_reference/"; fi && mkdir -p '/tmp/tmprt3848zx/job_working_directory/000/11/outputs/dataset_a81ec982-15db-4182-89f1-1c9dd6c0dfdd_files' && cp outputdir/combined_reference/*.html '/tmp/tmprt3848zx/job_working_directory/000/11/outputs/dataset_a81ec982-15db-4182-89f1-1c9dd6c0dfdd_files' && if [[ -d "outputdir/icarus_viewers" ]]; then cp -R outputdir/icarus_viewers 'outputdir/combined_reference/'; fi && if [[ -d "outputdir/combined_reference/icarus_viewers" ]]; then cp -R outputdir/combined_reference/icarus_viewers '/tmp/tmprt3848zx/job_working_directory/000/11/outputs/dataset_a81ec982-15db-4182-89f1-1c9dd6c0dfdd_files'; fi && if [[ -d "outputdir/krona_charts/" ]]; then mkdir -p 'None' && cp outputdir/krona_charts/*.html 'None'; fi
Exit Code:
0
Standard Output:
50contig_reads/usr/local/opt/quast-5.3.0/metaquast.py --pe1 pe1-50contig_reads.fastqsanger.gz --pe2 pe2-50contig_reads.fastqsanger.gz --labels 50contig_reads -o outputdir --max-ref-num 0 --min-identity 90.0 --min-contig 500 --min-alignment 65 --ambiguity-usage one --ambiguity-score 0.99 --local-mis-size 200 --contig-thresholds 0,1000 --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500 --x-for-Nx 90 /tmp/tmprt3848zx/files/f/5/6/dataset_f56caa53-2f68-4615-b2bf-37274711c329.dat --threads 1Version: 5.3.0System information: OS: Linux-6.8.0-1021-azure-x86_64-with-glibc2.36 (linux_64) Python version: 3.12.3 CPUs number: 4Started: 2025-04-10 08:35:59Logging to /tmp/tmprt3848zx/job_working_directory/000/11/working/outputdir/metaquast.logWARNING: --ambiguity-usage was set to 'all' because not default --ambiguity-score was specifiedINFO generated new fontManagerINFO generated new fontManagerContigs: Pre-processing... /tmp/tmprt3848zx/files/f/5/6/dataset_f56caa53-2f68-4615-b2bf-37274711c329.dat ==> 50contig_readsNOTICE: Maximum number of references (--max-ref-number) is set to 0, search in SILVA 16S rRNA database is disabledNOTICE: No references are provided, starting regular QUAST with MetaGeneMark gene finder/usr/local/opt/quast-5.3.0/quast.py --pe1 pe1-50contig_reads.fastqsanger.gz --pe2 pe2-50contig_reads.fastqsanger.gz --min-identity 90.0 --min-contig 500 --min-alignment 65 --ambiguity-usage one --ambiguity-score 0.99 --local-mis-size 200 --contig-thresholds 0,1000 --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500 --x-for-Nx 90 --threads 1 /tmp/tmprt3848zx/files/f/5/6/dataset_f56caa53-2f68-4615-b2bf-37274711c329.dat -o /tmp/tmprt3848zx/job_working_directory/000/11/working/outputdir --labels 50contig_readsVersion: 5.3.0System information: OS: Linux-6.8.0-1021-azure-x86_64-with-glibc2.36 (linux_64) Python version: 3.12.3 CPUs number: 4Started: 2025-04-10 08:36:00Logging to /tmp/tmprt3848zx/job_working_directory/000/11/working/outputdir/quast.logNOTICE: Output directory already exists and looks like a QUAST output dir. Existing results can be reused (e.g. previously generated alignments)!WARNING: --ambiguity-usage was set to 'all' because not default --ambiguity-score was specifiedCWD: /tmp/tmprt3848zx/job_working_directory/000/11/workingMain parameters: MODE: meta, threads: 1, min contig length: 500, min alignment length: 65, min alignment IDY: 90.0, \ ambiguity: all, min local misassembly length: 200, min extensive misassembly length: 1000Contigs: Pre-processing... /tmp/tmprt3848zx/files/f/5/6/dataset_f56caa53-2f68-4615-b2bf-37274711c329.dat ==> 50contig_reads2025-04-10 08:36:00Running Reads analyzer...NOTICE: Permission denied accessing /usr/local/lib/python3.12/site-packages/quast_libs/gridss. GRIDSS will be downloaded to home directory /tmp/tmprt3848zx/job_working_directory/000/11/home/.quastDownloading gridss (file: gridss-1.4.1.jar)... 
[gridss-1.4.1.jar download progress omitted: 0.0% to 99.0% of 38935087 bytes] gridss successfully downloaded! Logging to files /tmp/tmprt3848zx/job_working_directory/000/11/working/outputdir/reads_stats/reads_stats.log and /tmp/tmprt3848zx/job_working_directory/000/11/working/outputdir/reads_stats/reads_stats.err... Pre-processing reads... Running BWA... Done. Sorting SAM-file... Analysis is finished. Creating total report... saved to /tmp/tmprt3848zx/job_working_directory/000/11/working/outputdir/reads_stats/reads_report.txt, reads_report.tsv, and reads_report.texDone.2025-04-10 08:36:05Running Basic statistics processor... Contig files: 50contig_reads Calculating N50 and L50... 50contig_reads, N50 = 4877, L50 = 22, auN = 5187.8, Total length = 265405, GC % = 36.45, # N's per 100 kbp = 0.00 Drawing Nx plot... saved to /tmp/tmprt3848zx/job_working_directory/000/11/working/outputdir/basic_stats/Nx_plot.pdf Drawing cumulative plot... saved to /tmp/tmprt3848zx/job_working_directory/000/11/working/outputdir/basic_stats/cumulative_plot.pdf Drawing GC content plot... 
saved to /tmp/tmprt3848zx/job_working_directory/000/11/working/outputdir/basic_stats/GC_content_plot.pdf Drawing 50contig_reads GC content plot... saved to /tmp/tmprt3848zx/job_working_directory/000/11/working/outputdir/basic_stats/50contig_reads_GC_content_plot.pdfDone.NOTICE: Genes are not predicted by default. Use --gene-finding or --glimmer option to enable it.2025-04-10 08:36:06Creating large visual summaries...This may take a while: press Ctrl-C to skip this step.. 1 of 2: Creating PDF with all tables and plots... 2 of 2: Creating Icarus viewers...Done2025-04-10 08:36:07RESULTS: Text versions of total report are saved to /tmp/tmprt3848zx/job_working_directory/000/11/working/outputdir/report.txt, report.tsv, and report.tex Text versions of transposed total report are saved to /tmp/tmprt3848zx/job_working_directory/000/11/working/outputdir/transposed_report.txt, transposed_report.tsv, and transposed_report.tex HTML version (interactive tables and plots) is saved to /tmp/tmprt3848zx/job_working_directory/000/11/working/outputdir/report.html PDF version (tables and plots) is saved to /tmp/tmprt3848zx/job_working_directory/000/11/working/outputdir/report.pdf Icarus (contig browser) is saved to /tmp/tmprt3848zx/job_working_directory/000/11/working/outputdir/icarus.html Log is saved to /tmp/tmprt3848zx/job_working_directory/000/11/working/outputdir/quast.logFinished: 2025-04-10 08:36:07Elapsed time: 0:00:06.967949NOTICEs: 3; WARNINGs: 1; non-fatal ERRORs: 0Thank you for using QUAST!
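The report content itself looks stable: the two runs pasted here build identical Bowtie2 indexes (same fchr values, total length 266124), and this QUAST pass reports N50 = 4877, L50 = 22, total length 265405. So instead of (or in addition to) a size check on the HTML file, the test could assert on the tab-separated report, which is insensitive to HTML rendering changes. A sketch of the outputs stanza only; quast_report_tsv is an assumed label, and the two-column expectation assumes a single assembly per report:

  outputs:
    quast_report_tsv:   # assumed label for QUAST's report.tsv output
      asserts:
        has_text:
          text: "N50"   # metric row that is always present in report.tsv
        has_n_columns:
          n: 2          # metric name + one column for the single assembly
          sep: "\t"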
Building a SMALL indexRenaming genome.3.bt2.tmp to genome.3.bt2Renaming genome.4.bt2.tmp to genome.4.bt2Renaming genome.1.bt2.tmp to genome.1.bt2Renaming genome.2.bt2.tmp to genome.2.bt2Renaming genome.rev.1.bt2.tmp to genome.rev.1.bt2Renaming genome.rev.2.bt2.tmp to genome.rev.2.bt29462 reads; of these: 9462 (100.00%) were paired; of these: 90 (0.95%) aligned concordantly 0 times 9300 (98.29%) aligned concordantly exactly 1 time 72 (0.76%) aligned concordantly >1 times ---- 90 pairs aligned concordantly 0 times; of these: 8 (8.89%) aligned discordantly 1 time ---- 82 pairs aligned 0 times concordantly or discordantly; of these: 164 mates make up the pairs; of these: 93 (56.71%) aligned 0 times 70 (42.68%) aligned exactly 1 time 1 (0.61%) aligned >1 times99.51% overall alignment rate
Standard Output:
Settings: Output files: "genome.*.bt2" Line rate: 6 (line is 64 bytes) Lines per side: 1 (side is 64 bytes) Offset rate: 4 (one in 16) FTable chars: 10 Strings: unpacked Max bucket size: default Max bucket size, sqrt multiplier: default Max bucket size, len divisor: 4 Difference-cover sample period: 1024 Endianness: little Actual local endianness: little Sanity checking: disabled Assertions: disabled Random seed: 0 Sizeofs: void*:8, int:4, long:8, size_t:8Input files DNA, FASTA: /tmp/tmprt3848zx/files/f/5/6/dataset_f56caa53-2f68-4615-b2bf-37274711c329.datReading reference sizes Time reading reference sizes: 00:00:00Calculating joined lengthWriting headerReserving space for joined stringJoining reference sequences Time to join reference sequences: 00:00:00bmax according to bmaxDivN setting: 66531Using parameters --bmax 49899 --dcv 1024 Doing ahead-of-time memory usage test Passed! Constructing with these parameters: --bmax 49899 --dcv 1024Constructing suffix-array element generatorBuilding DifferenceCoverSample Building sPrime Building sPrimeOrder V-Sorting samples V-Sorting samples time: 00:00:00 Allocating rank array Ranking v-sort output Ranking v-sort output time: 00:00:00 Invoking Larsson-Sadakane on ranks Invoking Larsson-Sadakane on ranks time: 00:00:00 Sanity-checking and returningBuilding samplesReserving space for 12 sample suffixesGenerating random suffixesQSorting 12 sample offsets, eliminating duplicatesQSorting sample offsets, eliminating duplicates time: 00:00:00Multikey QSorting 12 samples (Using difference cover) Multikey QSorting samples time: 00:00:00Calculating bucket sizesSplitting and merging Splitting and merging time: 00:00:00Split 1, merged 6; iterating...Splitting and merging Splitting and merging time: 00:00:00Avg bucket size: 38016.9 (target: 49898)Converting suffix-array elements to index imageAllocating ftab, absorbFtabEntering Ebwt loopGetting block 1 of 7 Reserving size (49899) for bucket 1 Calculating Z arrays for bucket 1 Entering block accumulator loop for bucket 1: bucket 1: 10% bucket 1: 20% bucket 1: 30% bucket 1: 40% bucket 1: 50% bucket 1: 60% bucket 1: 70% bucket 1: 80% bucket 1: 90% bucket 1: 100% Sorting block of length 18323 for bucket 1 (Using difference cover) Sorting block time: 00:00:00Returning block of 18324 for bucket 1Getting block 2 of 7 Reserving size (49899) for bucket 2 Calculating Z arrays for bucket 2 Entering block accumulator loop for bucket 2: bucket 2: 10% bucket 2: 20% bucket 2: 30% bucket 2: 40% bucket 2: 50% bucket 2: 60% bucket 2: 70% bucket 2: 80% bucket 2: 90% bucket 2: 100% Sorting block of length 49606 for bucket 2 (Using difference cover) Sorting block time: 00:00:00Returning block of 49607 for bucket 2Getting block 3 of 7 Reserving size (49899) for bucket 3 Calculating Z arrays for bucket 3 Entering block accumulator loop for bucket 3: bucket 3: 10% bucket 3: 20% bucket 3: 30% bucket 3: 40% bucket 3: 50% bucket 3: 60% bucket 3: 70% bucket 3: 80% bucket 3: 90% bucket 3: 100% Sorting block of length 45151 for bucket 3 (Using difference cover) Sorting block time: 00:00:00Returning block of 45152 for bucket 3Getting block 4 of 7 Reserving size (49899) for bucket 4 Calculating Z arrays for bucket 4 Entering block accumulator loop for bucket 4: bucket 4: 10% bucket 4: 20% bucket 4: 30% bucket 4: 40% bucket 4: 50% bucket 4: 60% bucket 4: 70% bucket 4: 80% bucket 4: 90% bucket 4: 100% Sorting block of length 49787 for bucket 4 (Using difference cover) Sorting block time: 00:00:00Returning block of 49788 for bucket 4Getting 
block 5 of 7 Reserving size (49899) for bucket 5 Calculating Z arrays for bucket 5 Entering block accumulator loop for bucket 5: bucket 5: 10% bucket 5: 20% bucket 5: 30% bucket 5: 40% bucket 5: 50% bucket 5: 60% bucket 5: 70% bucket 5: 80% bucket 5: 90% bucket 5: 100% Sorting block of length 28638 for bucket 5 (Using difference cover) Sorting block time: 00:00:00Returning block of 28639 for bucket 5Getting block 6 of 7 Reserving size (49899) for bucket 6 Calculating Z arrays for bucket 6 Entering block accumulator loop for bucket 6: bucket 6: 10% bucket 6: 20% bucket 6: 30% bucket 6: 40% bucket 6: 50% bucket 6: 60% bucket 6: 70% bucket 6: 80% bucket 6: 90% bucket 6: 100% Sorting block of length 43194 for bucket 6 (Using difference cover) Sorting block time: 00:00:00Returning block of 43195 for bucket 6Getting block 7 of 7 Reserving size (49899) for bucket 7 Calculating Z arrays for bucket 7 Entering block accumulator loop for bucket 7: bucket 7: 10% bucket 7: 20% bucket 7: 30% bucket 7: 40% bucket 7: 50% bucket 7: 60% bucket 7: 70% bucket 7: 80% bucket 7: 90% bucket 7: 100% Sorting block of length 31419 for bucket 7 (Using difference cover) Sorting block time: 00:00:00Returning block of 31420 for bucket 7Exited Ebwt loopfchr[A]: 0fchr[C]: 84325fchr[G]: 133305fchr[T]: 181307fchr[$]: 266124Exiting Ebwt::buildToDisk()Returning from initFromVectorWrote 4286544 bytes to primary EBWT file: genome.1.bt2.tmpWrote 66536 bytes to secondary EBWT file: genome.2.bt2.tmpRe-opening _in1 and _in2 as input streamsReturning from Ebwt constructorHeaders: len: 266124 bwtLen: 266125 sz: 66531 bwtSz: 66532 lineRate: 6 offRate: 4 offMask: 0xfffffff0 ftabChars: 10 eftabLen: 20 eftabSz: 80 ftabLen: 1048577 ftabSz: 4194308 offsLen: 16633 offsSz: 66532 lineSz: 64 sideSz: 64 sideBwtSz: 48 sideBwtLen: 192 numSides: 1387 numLines: 1387 ebwtTotLen: 88768 ebwtTotSz: 88768 color: 0 reverse: 0Total time for call to driver() for forward index: 00:00:00Reading reference sizes Time reading reference sizes: 00:00:00Calculating joined lengthWriting headerReserving space for joined stringJoining reference sequences Time to join reference sequences: 00:00:00 Time to reverse reference sequence: 00:00:00bmax according to bmaxDivN setting: 66531Using parameters --bmax 49899 --dcv 1024 Doing ahead-of-time memory usage test Passed! 
Constructing with these parameters: --bmax 49899 --dcv 1024Constructing suffix-array element generatorBuilding DifferenceCoverSample Building sPrime Building sPrimeOrder V-Sorting samples V-Sorting samples time: 00:00:00 Allocating rank array Ranking v-sort output Ranking v-sort output time: 00:00:00 Invoking Larsson-Sadakane on ranks Invoking Larsson-Sadakane on ranks time: 00:00:00 Sanity-checking and returningBuilding samplesReserving space for 12 sample suffixesGenerating random suffixesQSorting 12 sample offsets, eliminating duplicatesQSorting sample offsets, eliminating duplicates time: 00:00:00Multikey QSorting 12 samples (Using difference cover) Multikey QSorting samples time: 00:00:00Calculating bucket sizesSplitting and merging Splitting and merging time: 00:00:00Split 1, merged 7; iterating...Splitting and merging Splitting and merging time: 00:00:00Avg bucket size: 44353.2 (target: 49898)Converting suffix-array elements to index imageAllocating ftab, absorbFtabEntering Ebwt loopGetting block 1 of 6 Reserving size (49899) for bucket 1 Calculating Z arrays for bucket 1 Entering block accumulator loop for bucket 1: bucket 1: 10% bucket 1: 20% bucket 1: 30% bucket 1: 40% bucket 1: 50% bucket 1: 60% bucket 1: 70% bucket 1: 80% bucket 1: 90% bucket 1: 100% Sorting block of length 47687 for bucket 1 (Using difference cover) Sorting block time: 00:00:00Returning block of 47688 for bucket 1Getting block 2 of 6 Reserving size (49899) for bucket 2 Calculating Z arrays for bucket 2 Entering block accumulator loop for bucket 2: bucket 2: 10% bucket 2: 20% bucket 2: 30% bucket 2: 40% bucket 2: 50% bucket 2: 60% bucket 2: 70% bucket 2: 80% bucket 2: 90% bucket 2: 100% Sorting block of length 36636 for bucket 2 (Using difference cover) Sorting block time: 00:00:00Returning block of 36637 for bucket 2Getting block 3 of 6 Reserving size (49899) for bucket 3 Calculating Z arrays for bucket 3 Entering block accumulator loop for bucket 3: bucket 3: 10% bucket 3: 20% bucket 3: 30% bucket 3: 40% bucket 3: 50% bucket 3: 60% bucket 3: 70% bucket 3: 80% bucket 3: 90% bucket 3: 100% Sorting block of length 49027 for bucket 3 (Using difference cover) Sorting block time: 00:00:00Returning block of 49028 for bucket 3Getting block 4 of 6 Reserving size (49899) for bucket 4 Calculating Z arrays for bucket 4 Entering block accumulator loop for bucket 4: bucket 4: 10% bucket 4: 20% bucket 4: 30% bucket 4: 40% bucket 4: 50% bucket 4: 60% bucket 4: 70% bucket 4: 80% bucket 4: 90% bucket 4: 100% Sorting block of length 37449 for bucket 4 (Using difference cover) Sorting block time: 00:00:00Returning block of 37450 for bucket 4Getting block 5 of 6 Reserving size (49899) for bucket 5 Calculating Z arrays for bucket 5 Entering block accumulator loop for bucket 5: bucket 5: 10% bucket 5: 20% bucket 5: 30% bucket 5: 40% bucket 5: 50% bucket 5: 60% bucket 5: 70% bucket 5: 80% bucket 5: 90% bucket 5: 100% Sorting block of length 47142 for bucket 5 (Using difference cover) Sorting block time: 00:00:00Returning block of 47143 for bucket 5Getting block 6 of 6 Reserving size (49899) for bucket 6 Calculating Z arrays for bucket 6 Entering block accumulator loop for bucket 6: bucket 6: 10% bucket 6: 20% bucket 6: 30% bucket 6: 40% bucket 6: 50% bucket 6: 60% bucket 6: 70% bucket 6: 80% bucket 6: 90% bucket 6: 100% Sorting block of length 48178 for bucket 6 (Using difference cover) Sorting block time: 00:00:00Returning block of 48179 for bucket 6Exited Ebwt loopfchr[A]: 0fchr[C]: 84325fchr[G]: 133305fchr[T]: 181307fchr[$]: 
266124Exiting Ebwt::buildToDisk()Returning from initFromVectorWrote 4286544 bytes to primary EBWT file: genome.rev.1.bt2.tmpWrote 66536 bytes to secondary EBWT file: genome.rev.2.bt2.tmpRe-opening _in1 and _in2 as input streamsReturning from Ebwt constructorHeaders: len: 266124 bwtLen: 266125 sz: 66531 bwtSz: 66532 lineRate: 6 offRate: 4 offMask: 0xfffffff0 ftabChars: 10 eftabLen: 20 eftabSz: 80 ftabLen: 1048577 ftabSz: 4194308 offsLen: 16633 offsSz: 66532 lineSz: 64 sideSz: 64 sideBwtSz: 48 sideBwtLen: 192 numSides: 1387 numLines: 1387 ebwtTotLen: 88768 ebwtTotSz: 88768 color: 0 reverse: 1Total time for backward call to driver() for mirror index: 00:00:00
2025-04-10 08:36:39 dd5582054a32 SemiBin[9] INFO Binning for short_read2025-04-10 08:36:44 dd5582054a32 SemiBin[9] INFO Did not detect GPU, using CPU.2025-04-10 08:36:44 dd5582054a32 SemiBin[9] INFO Generating training data...2025-04-10 08:36:45 dd5582054a32 SemiBin[9] INFO Calculating coverage for every sample.2025-04-10 08:36:45 dd5582054a32 SemiBin[9] INFO Processed: 50contig_reads.bam2025-04-10 08:36:45 dd5582054a32 SemiBin[9] INFO Start binning.2025-04-10 08:36:47 dd5582054a32 SemiBin[9] INFO Number of bins prior to reclustering: 12025-04-10 08:36:47 dd5582054a32 SemiBin[9] INFO Running naive ORF finder2025-04-10 08:36:48 dd5582054a32 SemiBin[9] INFO Number of bins after reclustering: 12025-04-10 08:36:48 dd5582054a32 SemiBin[9] INFO Binning finished
Standard Output:
If you find SemiBin useful, please cite: Pan, S.; Zhu, C.; Zhao, XM.; Coelho, LP. A deep siamese neural network improves metagenome-assembled genomes in microbiome datasets across different environments. Nat Commun 13, 2326 (2022). https://doi.org/10.1038/s41467-022-29843-y Pan, S.; Zhao, XM; Coelho, LP. SemiBin2: self-supervised contrastive learning leads to better MAGs for short- and long-read sequencing. Bioinformatics Volume 39, Issue Supplement_1, June 2023, Pages i21–i29. https://doi.org/10.1093/bioinformatics/btad209output50contig_reads.bam_0_data_cov.csvSemiBinRun.logcontig_bins.tsvdata.csvdata_split.csvoutput_binsrecluster_bins_info.tsv
Output depth matrix to /tmp/tmprt3848zx/job_working_directory/000/17/outputs/dataset_3198f2d6-2daa-413f-bf6a-3a1554beb9b3.datMinimum percent identity for a mapped read: 0.97minMapQual: 0weightMapQual: 0Edge bases will be included up to 75 basesshredLength: 16000shredDepth: 5minContigLength: 1minContigDepth: 0jgi_summarize_bam_contig_depths 2.17 (Bioconda) 2024-12-15T06:34:17Running with 4 threads to save memory you can reduce the number of threads with the OMP_NUM_THREADS variableOutput matrix to /tmp/tmprt3848zx/job_working_directory/000/17/outputs/dataset_3198f2d6-2daa-413f-bf6a-3a1554beb9b3.datOpening all bam files and validating headersProcessing bam files with largest_contig=0Thread 0 opening and reading the header for file: /tmp/tmprt3848zx/files/1/f/a/dataset_1fae1fad-6c83-48b9-aa58-8d63340b3f94.datThread 0 opened the file: /tmp/tmprt3848zx/files/1/f/a/dataset_1fae1fad-6c83-48b9-aa58-8d63340b3f94.datThread 0 processing bam 0: dataset_1fae1fad-6c83-48b9-aa58-8d63340b3f94.datThread 0 finished reading bam 0: dataset_1fae1fad-6c83-48b9-aa58-8d63340b3f94.datThread 0 finished: dataset_1fae1fad-6c83-48b9-aa58-8d63340b3f94.dat with 18924 reads and 8473 readsWellMapped (44.7738%)Creating depth matrix file: /tmp/tmprt3848zx/job_working_directory/000/17/outputs/dataset_3198f2d6-2daa-413f-bf6a-3a1554beb9b3.datClosing last bam fileFinished
WARNING:root:CONCOCT is running in single threaded mode. Please, consider adjusting the --threads parameter.Up and running. Check /tmp/tmprt3848zx/job_working_directory/000/18/working/outdir/log.txt for progressSetting 1 OMP threadsGenerate input data0,-32769.159419,695.8618811,-23702.908064,9066.2513542,-10301.271601,13401.6364633,-9099.561931,1201.7096704,-8491.920398,607.6415335,-8491.793887,0.1265126,-8491.793444,0.0004437,-8491.793370,0.000074
Attaching package: ‘gplots’The following object is masked from ‘package:stats’: lowess
Standard Output:
MaxBin 2.2.7Input contig: /tmp/tmprt3848zx/files/f/5/6/dataset_f56caa53-2f68-4615-b2bf-37274711c329.datout header: outMin contig length: 1000Max iteration: 50Probability threshold: 0.5Thread: 1Located abundance file [/tmp/tmprt3848zx/files/3/1/9/dataset_3198f2d6-2daa-413f-bf6a-3a1554beb9b3.dat]Searching against 107 marker genes to find starting seed contigs for [/tmp/tmprt3848zx/files/f/5/6/dataset_f56caa53-2f68-4615-b2bf-37274711c329.dat]...Running FragGeneScan....Running HMMER hmmsearch....Try harder to dig out marker genes from contigs.Done data collection. Running MaxBin...Command: /usr/local/opt/MaxBin-2.2.7/src/MaxBin -fasta out.contig.tmp -abund out.contig.tmp.abund1 -seed out.seed -out out -min_contig_length 1000 -max_run 50 -prob_threshold 0.5 Minimum contig length set to 1000.Reading seed list...Looking for seeds in sequences. k141_52 [11001.000000] k141_59 [9465.000000]Get 2 seeds.Start EM process.Iteration 1Iteration 2Iteration 3Iteration 4Iteration 5Iteration 6Iteration 7Iteration 8Iteration 9Iteration 10Iteration 11Iteration 12Iteration 13EM finishes successfully.Classifying sequences based on the EM result.Minimum probability for binning: 0.50Ignoring 0 bins without any sequences.Number of unclassified sequences: 0 (0.00%)Elapsed time: 0 days 00:00:00Rscript /usr/local/opt/MaxBin-2.2.7/heatmap.r out.marker out.marker.pdfnull device 1 out.001.marker.fastaout.002.marker.fastaDeleting intermediate files.========== Job finished ==========Yielded 2 bins for contig (scaffold) file /tmp/tmprt3848zx/files/f/5/6/dataset_f56caa53-2f68-4615-b2bf-37274711c329.datHere are the output files for this run.Please refer to the README file for further details.Summary file: out.summaryMarker counts: out.markerMarker genes for each bin: out.marker_of_each_gene.tar.gzBin files: out.001.fasta - out.002.fastaUnbinned sequences: out.noclassMarker plot: out.marker.pdf========== Elapsed Time ==========0 hours 0 minutes and 2 seconds.
[Warning!] Negative coverage depth is not allowed for the contig k141_0, column 1: -4.29775e+08[Warning!] Negative coverage depth is not allowed for the contig k141_52, column 1: -2.76243e+08
[04/10/2025 09:08:48 AM] INFO: Running CheckM2 version 1.0.2[04/10/2025 09:08:48 AM] INFO: Custom database path provided for predict run. Checking database at /cvmfs/data.galaxyproject.org/byhand/checkm2/1.0.2/uniref100.KO.1.dmnd...[04/10/2025 09:08:51 AM] INFO: Running quality prediction workflow with 1 threads.[04/10/2025 09:08:52 AM] INFO: Calling genes in 4 bins with 1 threads:[04/10/2025 09:08:53 AM] INFO: Calculating metadata for 4 bins with 1 threads:[04/10/2025 09:08:54 AM] INFO: Annotating input genomes with DIAMOND using 1 threads[04/10/2025 09:11:41 AM] INFO: Processing DIAMOND output[04/10/2025 09:11:41 AM] INFO: Predicting completeness and contamination using ML models.[04/10/2025 09:11:46 AM] INFO: Parsing all results and constructing final output table.[04/10/2025 09:11:46 AM] INFO: CheckM2 finished successfully.
Standard Output:
Finished processing 1 of 4 (25.00%) bins. Finished processing 2 of 4 (50.00%) bins. Finished processing 3 of 4 (75.00%) bins. Finished processing 4 of 4 (100.00%) bins. Finished processing 1 of 4 (25.00%) bin metadata. Finished processing 2 of 4 (50.00%) bin metadata. Finished processing 3 of 4 (75.00%) bin metadata. Finished processing 4 of 4 (100.00%) bin metadata.
[04/10/2025 09:13:19 AM] INFO: Running CheckM2 version 1.0.2[04/10/2025 09:13:19 AM] INFO: Custom database path provided for predict run. Checking database at /cvmfs/data.galaxyproject.org/byhand/checkm2/1.0.2/uniref100.KO.1.dmnd...[04/10/2025 09:13:25 AM] INFO: Running quality prediction workflow with 1 threads.[04/10/2025 09:13:26 AM] INFO: Calling genes in 4 bins with 1 threads:[04/10/2025 09:13:28 AM] INFO: Calculating metadata for 4 bins with 1 threads:[04/10/2025 09:13:28 AM] INFO: Annotating input genomes with DIAMOND using 1 threads[04/10/2025 09:16:14 AM] INFO: Processing DIAMOND output[04/10/2025 09:16:14 AM] INFO: Predicting completeness and contamination using ML models.[04/10/2025 09:16:19 AM] INFO: Parsing all results and constructing final output table.[04/10/2025 09:16:19 AM] INFO: CheckM2 finished successfully.
Standard Output:
Finished processing 1 of 4 (25.00%) bins. Finished processing 2 of 4 (50.00%) bins. Finished processing 3 of 4 (75.00%) bins. Finished processing 4 of 4 (100.00%) bins. Finished processing 1 of 4 (25.00%) bin metadata. Finished processing 2 of 4 (50.00%) bin metadata. Finished processing 3 of 4 (75.00%) bin metadata. Finished processing 4 of 4 (100.00%) bin metadata.
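Since the per-bin outputs further down carry arbitrary numeric suffixes (bin_1, bin_11, bin_55, bin_6), assertions that key on summary tables rather than on individual bin files stay stable across runs. The CheckM2 run above writes one row per bin, so a line count with some slack gives a loose "number of bins" check. A sketch only; checkm2_quality_report is an assumed output label, and the expected count is taken from this run (4 bins):

  outputs:
    checkm2_quality_report:   # assumed label for CheckM2's quality_report.tsv
      asserts:
        has_n_lines:
          n: 5       # header line + the 4 bins processed in this run
          delta: 2   # tolerate a slightly different bin count between runs
        has_text:
          text: "Completeness"   # column header expected in the CheckM2 report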
[2025-04-10T09:13:31Z INFO bird_tool_utils::clap_utils] CoverM version 0.7.0[2025-04-10T09:13:31Z INFO coverm] Writing output to file: /tmp/tmprt3848zx/job_working_directory/000/37/outputs/dataset_4b61af85-2fd5-4880-b3da-c60367d51d8c.dat[2025-04-10T09:13:31Z INFO coverm] Using min-covered-fraction 10%
[2025-04-10T09:13:31Z INFO coverm] Using min-read-percent-identity 0%
[2025-04-10T09:13:31Z INFO coverm] Using min-read-aligned-percent 0%
[2025-04-10T09:13:31Z INFO bird_tool_utils::external_command_checker] Found minimap2 version 2.28-r1209 [2025-04-10T09:13:31Z INFO bird_tool_utils::external_command_checker] Found samtools version 1.21 [2025-04-10T09:13:31Z INFO coverm] Profiling 4 genomes[2025-04-10T09:13:31Z INFO coverm] Generating concatenated reference FASTA file of 4 genomes ..[2025-04-10T09:13:31Z INFO coverm] Not pre-generating minimap2 index[2025-04-10T09:13:31Z INFO coverm] Using min-read-percent-identity 0%
[2025-04-10T09:13:31Z INFO coverm] Using min-read-aligned-percent 0%
[2025-04-10T09:13:32Z INFO coverm::genome] In sample '50contig_reads', found 18791 reads mapped out of 18924 total (99.30%)
echo 50contig_reads_bin_1_fasta && metaquast --labels '50contig_reads_bin_1_fasta' -o 'outputdir' --max-ref-num 0 --min-identity 90.0 --min-contig 500 --min-alignment 65 --ambiguity-usage 'one' --ambiguity-score 0.99 --local-mis-size 200 --contig-thresholds '0,1000' --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500 --x-for-Nx 90 '/tmp/tmprt3848zx/files/5/b/7/dataset_5b725d9e-0214-472e-ac90-3302b11af51c.dat' --threads ${GALAXY_SLOTS:-1} && if [[ -f "outputdir/report.tsv" ]]; then mkdir -p "outputdir/combined_reference/" && cp "outputdir/report.tsv" "outputdir/combined_reference/report.tsv"; fi && if [[ -f "outputdir/report.html" ]]; then mkdir -p "outputdir/combined_reference/" && cp outputdir/*.html "outputdir/combined_reference/"; fi && mkdir -p '/tmp/tmprt3848zx/job_working_directory/000/38/outputs/dataset_57758adf-d8de-476a-88f0-16563ce1b032_files' && cp outputdir/combined_reference/*.html '/tmp/tmprt3848zx/job_working_directory/000/38/outputs/dataset_57758adf-d8de-476a-88f0-16563ce1b032_files' && if [[ -d "outputdir/icarus_viewers" ]]; then cp -R outputdir/icarus_viewers 'outputdir/combined_reference/'; fi && if [[ -d "outputdir/combined_reference/icarus_viewers" ]]; then cp -R outputdir/combined_reference/icarus_viewers '/tmp/tmprt3848zx/job_working_directory/000/38/outputs/dataset_57758adf-d8de-476a-88f0-16563ce1b032_files'; fi && if [[ -d "outputdir/krona_charts/" ]]; then mkdir -p 'None' && cp outputdir/krona_charts/*.html 'None'; fi
Exit Code:
0
Standard Output:
50contig_reads_bin_1_fasta/usr/local/opt/quast-5.3.0/metaquast.py --labels 50contig_reads_bin_1_fasta -o outputdir --max-ref-num 0 --min-identity 90.0 --min-contig 500 --min-alignment 65 --ambiguity-usage one --ambiguity-score 0.99 --local-mis-size 200 --contig-thresholds 0,1000 --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500 --x-for-Nx 90 /tmp/tmprt3848zx/files/5/b/7/dataset_5b725d9e-0214-472e-ac90-3302b11af51c.dat --threads 1Version: 5.3.0System information: OS: Linux-6.8.0-1021-azure-x86_64-with-glibc2.36 (linux_64) Python version: 3.12.3 CPUs number: 4Started: 2025-04-10 09:13:04Logging to /tmp/tmprt3848zx/job_working_directory/000/38/working/outputdir/metaquast.logWARNING: --ambiguity-usage was set to 'all' because not default --ambiguity-score was specifiedINFO generated new fontManagerINFO generated new fontManagerContigs: Pre-processing... /tmp/tmprt3848zx/files/5/b/7/dataset_5b725d9e-0214-472e-ac90-3302b11af51c.dat ==> 50contig_reads_bin_1_fastaNOTICE: Maximum number of references (--max-ref-number) is set to 0, search in SILVA 16S rRNA database is disabledNOTICE: No references are provided, starting regular QUAST with MetaGeneMark gene finder/usr/local/opt/quast-5.3.0/quast.py --min-identity 90.0 --min-contig 500 --min-alignment 65 --ambiguity-usage one --ambiguity-score 0.99 --local-mis-size 200 --contig-thresholds 0,1000 --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500 --x-for-Nx 90 --threads 1 /tmp/tmprt3848zx/files/5/b/7/dataset_5b725d9e-0214-472e-ac90-3302b11af51c.dat -o /tmp/tmprt3848zx/job_working_directory/000/38/working/outputdir --labels 50contig_reads_bin_1_fastaVersion: 5.3.0System information: OS: Linux-6.8.0-1021-azure-x86_64-with-glibc2.36 (linux_64) Python version: 3.12.3 CPUs number: 4Started: 2025-04-10 09:13:05Logging to /tmp/tmprt3848zx/job_working_directory/000/38/working/outputdir/quast.logNOTICE: Output directory already exists and looks like a QUAST output dir. Existing results can be reused (e.g. previously generated alignments)!WARNING: --ambiguity-usage was set to 'all' because not default --ambiguity-score was specifiedCWD: /tmp/tmprt3848zx/job_working_directory/000/38/workingMain parameters: MODE: meta, threads: 1, min contig length: 500, min alignment length: 65, min alignment IDY: 90.0, \ ambiguity: all, min local misassembly length: 200, min extensive misassembly length: 1000Contigs: Pre-processing... /tmp/tmprt3848zx/files/5/b/7/dataset_5b725d9e-0214-472e-ac90-3302b11af51c.dat ==> 50contig_reads_bin_1_fasta2025-04-10 09:13:05Running Basic statistics processor... Contig files: 50contig_reads_bin_1_fasta Calculating N50 and L50... 50contig_reads_bin_1_fasta, N50 = 1357, L50 = 1, auN = 1357.0, Total length = 1357, GC % = 38.76, # N's per 100 kbp = 0.00 Drawing Nx plot... saved to /tmp/tmprt3848zx/job_working_directory/000/38/working/outputdir/basic_stats/Nx_plot.pdf Drawing cumulative plot... saved to /tmp/tmprt3848zx/job_working_directory/000/38/working/outputdir/basic_stats/cumulative_plot.pdf Drawing GC content plot... saved to /tmp/tmprt3848zx/job_working_directory/000/38/working/outputdir/basic_stats/GC_content_plot.pdf Drawing 50contig_reads_bin_1_fasta GC content plot... saved to /tmp/tmprt3848zx/job_working_directory/000/38/working/outputdir/basic_stats/50contig_reads_bin_1_fasta_GC_content_plot.pdfDone.NOTICE: Genes are not predicted by default. 
Use --gene-finding or --glimmer option to enable it.2025-04-10 09:13:06Creating large visual summaries...This may take a while: press Ctrl-C to skip this step.. 1 of 2: Creating PDF with all tables and plots... 2 of 2: Creating Icarus viewers...Done2025-04-10 09:13:06RESULTS: Text versions of total report are saved to /tmp/tmprt3848zx/job_working_directory/000/38/working/outputdir/report.txt, report.tsv, and report.tex Text versions of transposed total report are saved to /tmp/tmprt3848zx/job_working_directory/000/38/working/outputdir/transposed_report.txt, transposed_report.tsv, and transposed_report.tex HTML version (interactive tables and plots) is saved to /tmp/tmprt3848zx/job_working_directory/000/38/working/outputdir/report.html PDF version (tables and plots) is saved to /tmp/tmprt3848zx/job_working_directory/000/38/working/outputdir/report.pdf Icarus (contig browser) is saved to /tmp/tmprt3848zx/job_working_directory/000/38/working/outputdir/icarus.html Log is saved to /tmp/tmprt3848zx/job_working_directory/000/38/working/outputdir/quast.logFinished: 2025-04-10 09:13:06Elapsed time: 0:00:01.437248NOTICEs: 2; WARNINGs: 1; non-fatal ERRORs: 0Thank you for using QUAST!
echo 50contig_reads_bin_11_fasta && metaquast --labels '50contig_reads_bin_11_fasta' -o 'outputdir' --max-ref-num 0 --min-identity 90.0 --min-contig 500 --min-alignment 65 --ambiguity-usage 'one' --ambiguity-score 0.99 --local-mis-size 200 --contig-thresholds '0,1000' --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500 --x-for-Nx 90 '/tmp/tmprt3848zx/files/8/0/9/dataset_809ece5c-f028-4d72-913f-60ccd0f9b71a.dat' --threads ${GALAXY_SLOTS:-1} && if [[ -f "outputdir/report.tsv" ]]; then mkdir -p "outputdir/combined_reference/" && cp "outputdir/report.tsv" "outputdir/combined_reference/report.tsv"; fi && if [[ -f "outputdir/report.html" ]]; then mkdir -p "outputdir/combined_reference/" && cp outputdir/*.html "outputdir/combined_reference/"; fi && mkdir -p '/tmp/tmprt3848zx/job_working_directory/000/39/outputs/dataset_f92d3a11-d39b-4e6e-8753-d594a0290718_files' && cp outputdir/combined_reference/*.html '/tmp/tmprt3848zx/job_working_directory/000/39/outputs/dataset_f92d3a11-d39b-4e6e-8753-d594a0290718_files' && if [[ -d "outputdir/icarus_viewers" ]]; then cp -R outputdir/icarus_viewers 'outputdir/combined_reference/'; fi && if [[ -d "outputdir/combined_reference/icarus_viewers" ]]; then cp -R outputdir/combined_reference/icarus_viewers '/tmp/tmprt3848zx/job_working_directory/000/39/outputs/dataset_f92d3a11-d39b-4e6e-8753-d594a0290718_files'; fi && if [[ -d "outputdir/krona_charts/" ]]; then mkdir -p 'None' && cp outputdir/krona_charts/*.html 'None'; fi
Exit Code:
0
Standard Output:
50contig_reads_bin_11_fasta/usr/local/opt/quast-5.3.0/metaquast.py --labels 50contig_reads_bin_11_fasta -o outputdir --max-ref-num 0 --min-identity 90.0 --min-contig 500 --min-alignment 65 --ambiguity-usage one --ambiguity-score 0.99 --local-mis-size 200 --contig-thresholds 0,1000 --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500 --x-for-Nx 90 /tmp/tmprt3848zx/files/8/0/9/dataset_809ece5c-f028-4d72-913f-60ccd0f9b71a.dat --threads 1Version: 5.3.0System information: OS: Linux-6.8.0-1021-azure-x86_64-with-glibc2.36 (linux_64) Python version: 3.12.3 CPUs number: 4Started: 2025-04-10 09:13:04Logging to /tmp/tmprt3848zx/job_working_directory/000/39/working/outputdir/metaquast.logWARNING: --ambiguity-usage was set to 'all' because not default --ambiguity-score was specifiedINFO generated new fontManagerINFO generated new fontManagerContigs: Pre-processing... /tmp/tmprt3848zx/files/8/0/9/dataset_809ece5c-f028-4d72-913f-60ccd0f9b71a.dat ==> 50contig_reads_bin_11_fastaNOTICE: Maximum number of references (--max-ref-number) is set to 0, search in SILVA 16S rRNA database is disabledNOTICE: No references are provided, starting regular QUAST with MetaGeneMark gene finder/usr/local/opt/quast-5.3.0/quast.py --min-identity 90.0 --min-contig 500 --min-alignment 65 --ambiguity-usage one --ambiguity-score 0.99 --local-mis-size 200 --contig-thresholds 0,1000 --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500 --x-for-Nx 90 --threads 1 /tmp/tmprt3848zx/files/8/0/9/dataset_809ece5c-f028-4d72-913f-60ccd0f9b71a.dat -o /tmp/tmprt3848zx/job_working_directory/000/39/working/outputdir --labels 50contig_reads_bin_11_fastaVersion: 5.3.0System information: OS: Linux-6.8.0-1021-azure-x86_64-with-glibc2.36 (linux_64) Python version: 3.12.3 CPUs number: 4Started: 2025-04-10 09:13:05Logging to /tmp/tmprt3848zx/job_working_directory/000/39/working/outputdir/quast.logNOTICE: Output directory already exists and looks like a QUAST output dir. Existing results can be reused (e.g. previously generated alignments)!WARNING: --ambiguity-usage was set to 'all' because not default --ambiguity-score was specifiedCWD: /tmp/tmprt3848zx/job_working_directory/000/39/workingMain parameters: MODE: meta, threads: 1, min contig length: 500, min alignment length: 65, min alignment IDY: 90.0, \ ambiguity: all, min local misassembly length: 200, min extensive misassembly length: 1000Contigs: Pre-processing... /tmp/tmprt3848zx/files/8/0/9/dataset_809ece5c-f028-4d72-913f-60ccd0f9b71a.dat ==> 50contig_reads_bin_11_fasta2025-04-10 09:13:05Running Basic statistics processor... Contig files: 50contig_reads_bin_11_fasta Calculating N50 and L50... 50contig_reads_bin_11_fasta, N50 = 2275, L50 = 1, auN = 2275.0, Total length = 2275, GC % = 38.42, # N's per 100 kbp = 0.00 Drawing Nx plot... saved to /tmp/tmprt3848zx/job_working_directory/000/39/working/outputdir/basic_stats/Nx_plot.pdf Drawing cumulative plot... saved to /tmp/tmprt3848zx/job_working_directory/000/39/working/outputdir/basic_stats/cumulative_plot.pdf Drawing GC content plot... saved to /tmp/tmprt3848zx/job_working_directory/000/39/working/outputdir/basic_stats/GC_content_plot.pdf Drawing 50contig_reads_bin_11_fasta GC content plot... saved to /tmp/tmprt3848zx/job_working_directory/000/39/working/outputdir/basic_stats/50contig_reads_bin_11_fasta_GC_content_plot.pdfDone.NOTICE: Genes are not predicted by default. 
Use --gene-finding or --glimmer option to enable it.2025-04-10 09:13:06Creating large visual summaries...This may take a while: press Ctrl-C to skip this step.. 1 of 2: Creating PDF with all tables and plots... 2 of 2: Creating Icarus viewers...Done2025-04-10 09:13:06RESULTS: Text versions of total report are saved to /tmp/tmprt3848zx/job_working_directory/000/39/working/outputdir/report.txt, report.tsv, and report.tex Text versions of transposed total report are saved to /tmp/tmprt3848zx/job_working_directory/000/39/working/outputdir/transposed_report.txt, transposed_report.tsv, and transposed_report.tex HTML version (interactive tables and plots) is saved to /tmp/tmprt3848zx/job_working_directory/000/39/working/outputdir/report.html PDF version (tables and plots) is saved to /tmp/tmprt3848zx/job_working_directory/000/39/working/outputdir/report.pdf Icarus (contig browser) is saved to /tmp/tmprt3848zx/job_working_directory/000/39/working/outputdir/icarus.html Log is saved to /tmp/tmprt3848zx/job_working_directory/000/39/working/outputdir/quast.logFinished: 2025-04-10 09:13:06Elapsed time: 0:00:01.454717NOTICEs: 2; WARNINGs: 1; non-fatal ERRORs: 0Thank you for using QUAST!
echo 50contig_reads_bin_55_fasta && metaquast --labels '50contig_reads_bin_55_fasta' -o 'outputdir' --max-ref-num 0 --min-identity 90.0 --min-contig 500 --min-alignment 65 --ambiguity-usage 'one' --ambiguity-score 0.99 --local-mis-size 200 --contig-thresholds '0,1000' --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500 --x-for-Nx 90 '/tmp/tmprt3848zx/files/f/0/c/dataset_f0c33f60-365a-44bf-b5ad-f1584230167d.dat' --threads ${GALAXY_SLOTS:-1} && if [[ -f "outputdir/report.tsv" ]]; then mkdir -p "outputdir/combined_reference/" && cp "outputdir/report.tsv" "outputdir/combined_reference/report.tsv"; fi && if [[ -f "outputdir/report.html" ]]; then mkdir -p "outputdir/combined_reference/" && cp outputdir/*.html "outputdir/combined_reference/"; fi && mkdir -p '/tmp/tmprt3848zx/job_working_directory/000/40/outputs/dataset_ed59b0db-0841-4df9-8461-61a1bad95807_files' && cp outputdir/combined_reference/*.html '/tmp/tmprt3848zx/job_working_directory/000/40/outputs/dataset_ed59b0db-0841-4df9-8461-61a1bad95807_files' && if [[ -d "outputdir/icarus_viewers" ]]; then cp -R outputdir/icarus_viewers 'outputdir/combined_reference/'; fi && if [[ -d "outputdir/combined_reference/icarus_viewers" ]]; then cp -R outputdir/combined_reference/icarus_viewers '/tmp/tmprt3848zx/job_working_directory/000/40/outputs/dataset_ed59b0db-0841-4df9-8461-61a1bad95807_files'; fi && if [[ -d "outputdir/krona_charts/" ]]; then mkdir -p 'None' && cp outputdir/krona_charts/*.html 'None'; fi
Exit Code:
0
Standard Output:
50contig_reads_bin_55_fasta
/usr/local/opt/quast-5.3.0/metaquast.py --labels 50contig_reads_bin_55_fasta -o outputdir --max-ref-num 0 --min-identity 90.0 --min-contig 500 --min-alignment 65 --ambiguity-usage one --ambiguity-score 0.99 --local-mis-size 200 --contig-thresholds 0,1000 --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500 --x-for-Nx 90 /tmp/tmprt3848zx/files/f/0/c/dataset_f0c33f60-365a-44bf-b5ad-f1584230167d.dat --threads 1

Version: 5.3.0
System information:
  OS: Linux-6.8.0-1021-azure-x86_64-with-glibc2.36 (linux_64)
  Python version: 3.12.3
  CPUs number: 4

Started: 2025-04-10 09:13:04
Logging to /tmp/tmprt3848zx/job_working_directory/000/40/working/outputdir/metaquast.log
WARNING: --ambiguity-usage was set to 'all' because not default --ambiguity-score was specified
INFO generated new fontManager
INFO generated new fontManager

Contigs:
  Pre-processing...
  /tmp/tmprt3848zx/files/f/0/c/dataset_f0c33f60-365a-44bf-b5ad-f1584230167d.dat ==> 50contig_reads_bin_55_fasta

NOTICE: Maximum number of references (--max-ref-number) is set to 0, search in SILVA 16S rRNA database is disabled
NOTICE: No references are provided, starting regular QUAST with MetaGeneMark gene finder

/usr/local/opt/quast-5.3.0/quast.py --min-identity 90.0 --min-contig 500 --min-alignment 65 --ambiguity-usage one --ambiguity-score 0.99 --local-mis-size 200 --contig-thresholds 0,1000 --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500 --x-for-Nx 90 --threads 1 /tmp/tmprt3848zx/files/f/0/c/dataset_f0c33f60-365a-44bf-b5ad-f1584230167d.dat -o /tmp/tmprt3848zx/job_working_directory/000/40/working/outputdir --labels 50contig_reads_bin_55_fasta

Version: 5.3.0
System information:
  OS: Linux-6.8.0-1021-azure-x86_64-with-glibc2.36 (linux_64)
  Python version: 3.12.3
  CPUs number: 4

Started: 2025-04-10 09:13:05
Logging to /tmp/tmprt3848zx/job_working_directory/000/40/working/outputdir/quast.log
NOTICE: Output directory already exists and looks like a QUAST output dir. Existing results can be reused (e.g. previously generated alignments)!
WARNING: --ambiguity-usage was set to 'all' because not default --ambiguity-score was specified

CWD: /tmp/tmprt3848zx/job_working_directory/000/40/working
Main parameters:
  MODE: meta, threads: 1, min contig length: 500, min alignment length: 65, min alignment IDY: 90.0, \
  ambiguity: all, min local misassembly length: 200, min extensive misassembly length: 1000

Contigs:
  Pre-processing...
  /tmp/tmprt3848zx/files/f/0/c/dataset_f0c33f60-365a-44bf-b5ad-f1584230167d.dat ==> 50contig_reads_bin_55_fasta

2025-04-10 09:13:05
Running Basic statistics processor...
  Contig files: 50contig_reads_bin_55_fasta
  Calculating N50 and L50...
    50contig_reads_bin_55_fasta, N50 = 5014, L50 = 21, auN = 5279.4, Total length = 258860, GC % = 36.41, # N's per 100 kbp = 0.00
  Drawing Nx plot...
    saved to /tmp/tmprt3848zx/job_working_directory/000/40/working/outputdir/basic_stats/Nx_plot.pdf
  Drawing cumulative plot...
    saved to /tmp/tmprt3848zx/job_working_directory/000/40/working/outputdir/basic_stats/cumulative_plot.pdf
  Drawing GC content plot...
    saved to /tmp/tmprt3848zx/job_working_directory/000/40/working/outputdir/basic_stats/GC_content_plot.pdf
  Drawing 50contig_reads_bin_55_fasta GC content plot...
    saved to /tmp/tmprt3848zx/job_working_directory/000/40/working/outputdir/basic_stats/50contig_reads_bin_55_fasta_GC_content_plot.pdf
Done.

NOTICE: Genes are not predicted by default. Use --gene-finding or --glimmer option to enable it.

2025-04-10 09:13:06
Creating large visual summaries...
This may take a while: press Ctrl-C to skip this step..
  1 of 2: Creating PDF with all tables and plots...
  2 of 2: Creating Icarus viewers...
Done

2025-04-10 09:13:06
RESULTS:
  Text versions of total report are saved to /tmp/tmprt3848zx/job_working_directory/000/40/working/outputdir/report.txt, report.tsv, and report.tex
  Text versions of transposed total report are saved to /tmp/tmprt3848zx/job_working_directory/000/40/working/outputdir/transposed_report.txt, transposed_report.tsv, and transposed_report.tex
  HTML version (interactive tables and plots) is saved to /tmp/tmprt3848zx/job_working_directory/000/40/working/outputdir/report.html
  PDF version (tables and plots) is saved to /tmp/tmprt3848zx/job_working_directory/000/40/working/outputdir/report.pdf
  Icarus (contig browser) is saved to /tmp/tmprt3848zx/job_working_directory/000/40/working/outputdir/icarus.html
  Log is saved to /tmp/tmprt3848zx/job_working_directory/000/40/working/outputdir/quast.log

Finished: 2025-04-10 09:13:06
Elapsed time: 0:00:01.462968
NOTICEs: 2; WARNINGs: 1; non-fatal ERRORs: 0
Thank you for using QUAST!
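Because the bin labels (and therefore the dataset names) differ on every run, any check against these per-bin QUAST jobs has to look at the report contents rather than the file names. A minimal sketch, assuming the usual QUAST report.tsv layout (metric name in the first column, value in the second, tab-separated); the path is a placeholder, not a path from this test run:

```bash
#!/usr/bin/env bash
# Pull a few headline metrics out of one per-bin QUAST report, independent of
# the randomly assigned bin label. "outputdir/report.tsv" is a placeholder.
report="outputdir/report.tsv"
for metric in "N50" "Total length" "GC (%)"; do
    awk -F'\t' -v m="$metric" '$1 == m { print m ": " $2 }' "$report"
done
```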
echo 50contig_reads_bin_6_fasta && metaquast --labels '50contig_reads_bin_6_fasta' -o 'outputdir' --max-ref-num 0 --min-identity 90.0 --min-contig 500 --min-alignment 65 --ambiguity-usage 'one' --ambiguity-score 0.99 --local-mis-size 200 --contig-thresholds '0,1000' --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500 --x-for-Nx 90 '/tmp/tmprt3848zx/files/f/b/e/dataset_fbe5c0d6-af8a-4e6d-b0ca-46c3391bda82.dat' --threads ${GALAXY_SLOTS:-1} && if [[ -f "outputdir/report.tsv" ]]; then mkdir -p "outputdir/combined_reference/" && cp "outputdir/report.tsv" "outputdir/combined_reference/report.tsv"; fi && if [[ -f "outputdir/report.html" ]]; then mkdir -p "outputdir/combined_reference/" && cp outputdir/*.html "outputdir/combined_reference/"; fi && mkdir -p '/tmp/tmprt3848zx/job_working_directory/000/41/outputs/dataset_5e8042cf-3f2b-42bf-b28e-786730d1814f_files' && cp outputdir/combined_reference/*.html '/tmp/tmprt3848zx/job_working_directory/000/41/outputs/dataset_5e8042cf-3f2b-42bf-b28e-786730d1814f_files' && if [[ -d "outputdir/icarus_viewers" ]]; then cp -R outputdir/icarus_viewers 'outputdir/combined_reference/'; fi && if [[ -d "outputdir/combined_reference/icarus_viewers" ]]; then cp -R outputdir/combined_reference/icarus_viewers '/tmp/tmprt3848zx/job_working_directory/000/41/outputs/dataset_5e8042cf-3f2b-42bf-b28e-786730d1814f_files'; fi && if [[ -d "outputdir/krona_charts/" ]]; then mkdir -p 'None' && cp outputdir/krona_charts/*.html 'None'; fi
Exit Code:
0
Standard Output:
50contig_reads_bin_6_fasta
/usr/local/opt/quast-5.3.0/metaquast.py --labels 50contig_reads_bin_6_fasta -o outputdir --max-ref-num 0 --min-identity 90.0 --min-contig 500 --min-alignment 65 --ambiguity-usage one --ambiguity-score 0.99 --local-mis-size 200 --contig-thresholds 0,1000 --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500 --x-for-Nx 90 /tmp/tmprt3848zx/files/f/b/e/dataset_fbe5c0d6-af8a-4e6d-b0ca-46c3391bda82.dat --threads 1

Version: 5.3.0
System information:
  OS: Linux-6.8.0-1021-azure-x86_64-with-glibc2.36 (linux_64)
  Python version: 3.12.3
  CPUs number: 4

Started: 2025-04-10 09:13:04
Logging to /tmp/tmprt3848zx/job_working_directory/000/41/working/outputdir/metaquast.log
WARNING: --ambiguity-usage was set to 'all' because not default --ambiguity-score was specified
INFO generated new fontManager
INFO generated new fontManager

Contigs:
  Pre-processing...
  /tmp/tmprt3848zx/files/f/b/e/dataset_fbe5c0d6-af8a-4e6d-b0ca-46c3391bda82.dat ==> 50contig_reads_bin_6_fasta

NOTICE: Maximum number of references (--max-ref-number) is set to 0, search in SILVA 16S rRNA database is disabled
NOTICE: No references are provided, starting regular QUAST with MetaGeneMark gene finder

/usr/local/opt/quast-5.3.0/quast.py --min-identity 90.0 --min-contig 500 --min-alignment 65 --ambiguity-usage one --ambiguity-score 0.99 --local-mis-size 200 --contig-thresholds 0,1000 --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500 --x-for-Nx 90 --threads 1 /tmp/tmprt3848zx/files/f/b/e/dataset_fbe5c0d6-af8a-4e6d-b0ca-46c3391bda82.dat -o /tmp/tmprt3848zx/job_working_directory/000/41/working/outputdir --labels 50contig_reads_bin_6_fasta

Version: 5.3.0
System information:
  OS: Linux-6.8.0-1021-azure-x86_64-with-glibc2.36 (linux_64)
  Python version: 3.12.3
  CPUs number: 4

Started: 2025-04-10 09:13:05
Logging to /tmp/tmprt3848zx/job_working_directory/000/41/working/outputdir/quast.log
NOTICE: Output directory already exists and looks like a QUAST output dir. Existing results can be reused (e.g. previously generated alignments)!
WARNING: --ambiguity-usage was set to 'all' because not default --ambiguity-score was specified

CWD: /tmp/tmprt3848zx/job_working_directory/000/41/working
Main parameters:
  MODE: meta, threads: 1, min contig length: 500, min alignment length: 65, min alignment IDY: 90.0, \
  ambiguity: all, min local misassembly length: 200, min extensive misassembly length: 1000

Contigs:
  Pre-processing...
  /tmp/tmprt3848zx/files/f/b/e/dataset_fbe5c0d6-af8a-4e6d-b0ca-46c3391bda82.dat ==> 50contig_reads_bin_6_fasta

2025-04-10 09:13:05
Running Basic statistics processor...
  Contig files: 50contig_reads_bin_6_fasta
  Calculating N50 and L50...
    50contig_reads_bin_6_fasta, N50 = 1469, L50 = 1, auN = 1469.0, Total length = 1469, GC % = 38.67, # N's per 100 kbp = 0.00
  Drawing Nx plot...
    saved to /tmp/tmprt3848zx/job_working_directory/000/41/working/outputdir/basic_stats/Nx_plot.pdf
  Drawing cumulative plot...
    saved to /tmp/tmprt3848zx/job_working_directory/000/41/working/outputdir/basic_stats/cumulative_plot.pdf
  Drawing GC content plot...
    saved to /tmp/tmprt3848zx/job_working_directory/000/41/working/outputdir/basic_stats/GC_content_plot.pdf
  Drawing 50contig_reads_bin_6_fasta GC content plot...
    saved to /tmp/tmprt3848zx/job_working_directory/000/41/working/outputdir/basic_stats/50contig_reads_bin_6_fasta_GC_content_plot.pdf
Done.

NOTICE: Genes are not predicted by default. Use --gene-finding or --glimmer option to enable it.

2025-04-10 09:13:06
Creating large visual summaries...
This may take a while: press Ctrl-C to skip this step..
  1 of 2: Creating PDF with all tables and plots...
  2 of 2: Creating Icarus viewers...
Done

2025-04-10 09:13:06
RESULTS:
  Text versions of total report are saved to /tmp/tmprt3848zx/job_working_directory/000/41/working/outputdir/report.txt, report.tsv, and report.tex
  Text versions of transposed total report are saved to /tmp/tmprt3848zx/job_working_directory/000/41/working/outputdir/transposed_report.txt, transposed_report.tsv, and transposed_report.tex
  HTML version (interactive tables and plots) is saved to /tmp/tmprt3848zx/job_working_directory/000/41/working/outputdir/report.html
  PDF version (tables and plots) is saved to /tmp/tmprt3848zx/job_working_directory/000/41/working/outputdir/report.pdf
  Icarus (contig browser) is saved to /tmp/tmprt3848zx/job_working_directory/000/41/working/outputdir/icarus.html
  Log is saved to /tmp/tmprt3848zx/job_working_directory/000/41/working/outputdir/quast.log

Finished: 2025-04-10 09:13:06
Elapsed time: 0:00:01.445620
NOTICEs: 2; WARNINGs: 1; non-fatal ERRORs: 0
Thank you for using QUAST!
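Since every bin gets its own metaquast job and output directory, comparing bins means collecting the individual report.tsv files afterwards. A small sketch that merges them into one summary table; the glob pattern is an assumption about where the per-bin copies end up, not the workflow's actual layout:

```bash
#!/usr/bin/env bash
# Summarise per-bin QUAST reports in one TSV. "quast_out/*/report.tsv" is a
# hypothetical layout -- point the glob at wherever the per-bin outputs live.
printf 'bin\tN50\ttotal_length\n'
for report in quast_out/*/report.tsv; do
    bin=$(awk -F'\t' '$1 == "Assembly"     { print $2 }' "$report")
    n50=$(awk -F'\t' '$1 == "N50"          { print $2 }' "$report")
    len=$(awk -F'\t' '$1 == "Total length" { print $2 }' "$report")
    printf '%s\t%s\t%s\n' "$bin" "$n50" "$len"
done
```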
Unexpected HTTP status code: 400: {"err_msg":"Workflow cannot be run because input step '41' (CheckM2 Database) is not optional and no input provided.","err_code":0}
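This failure is a test-setup issue rather than a tool error: workflow input step 41 ("CheckM2 Database") is required, so the test job definition has to supply a dataset for it (or the input has to be made optional in the workflow). A rough sketch of what the missing entry in a planemo-style test job file could look like; the YAML key must match the workflow input label, the location URL and the workflow file name are placeholders, and treating the database as a plain dataset input is an assumption about how the workflow is wired:

```bash
# Sketch only: the entry would go under the test's job: mapping in the
# <workflow>-tests.yml file, not at the end of the file.
cat <<'EOF'
CheckM2 Database:
  class: File
  location: https://example.org/checkm2_database_placeholder.dmnd
EOF

# With the input provided, the workflow tests can be re-run locally;
# planemo discovers the <workflow>-tests.yml next to the .ga file.
planemo test binning.ga   # workflow file name is a placeholder
```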
What needs to be done to finish? The "Check workflow success" check still shows "Expected - Waiting for status to be reported". Sorry for the rush; it would be great to have this merged in time for project reporting.