MAGs building workflow #769


Merged
merged 20 commits into from
Apr 29, 2025

Conversation

paulzierep (Contributor) commented Mar 26, 2025

FOR CONTRIBUTOR:

  • I have read the Adding workflows guidelines
  • License permits unrestricted use (educational + commercial)
  • Please also take note of the reviewer guidelines below to facilitate a smooth review process.

I have difficulties writing tests for this workflow. Ideally I would like to write a test that checks the created bins, but bin IDs (i.e. the names of the FASTA files in dereplicated_genomes) are assigned randomly, so the names differ on every run of the workflow. Is it possible to check e.g. the number of elements in the collection, or some other loose assertion? I could not find any details in the test docs.
If not, is it OK to check only the MultiQC report? If any of the workflow steps fails, the report will change significantly and the test will fail.
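For reference, planemo workflow tests (`*-tests.yml`) can assert on the *contents* of an output rather than comparing against an exact file, which sidesteps the random bin IDs. A loose check on the MultiQC report might look like the sketch below; the output label `multiqc_report` and the asserted text are assumptions for illustration, not taken from this workflow:

```yaml
- doc: MAGs workflow smoke test using loose content assertions
  job:
    # inputs elided; same job definition as the existing test
  outputs:
    # hypothetical output label for the workflow's MultiQC report
    multiqc_report:
      asserts:
        # pass as long as the report mentions the expected section,
        # independent of randomly assigned bin names
        has_text:
          text: "General Statistics"
```

The `asserts` block mirrors the assertion names available in Galaxy tool tests (`has_text`, `has_n_lines`, etc.); whether a direct element-count check on a collection output is supported is worth confirming in the gxformat2 test docs.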


Test Results (powered by Planemo)

Test Summary

Test State | Count
Total      | 1
Passed     | 0
Error      | 1
Failure    | 0
Skipped    | 0
Errored Tests
  • ❌ Metagenome-Assembled-Genomes-(MAGs)-generation.ga_0

    Execution Problem:

    • Final state of invocation 6ddd0e18cc4f60be is [failed]. Failed to run workflow, at least one job is in [error] state.
      

    Workflow invocation details

    • Invocation Messages

      • Invocation scheduling failed because step 26 requires a dataset collection created by step 23, but dataset collection entered a failed state.
    • Steps
      • Step 1: Choose Assembler:

        • step_state: scheduled
      • Step 2: Minimum length of contigs to output:

        • step_state: scheduled
      • Step 11: ANI threshold for dereplication:

        • step_state: scheduled
      • Step 12: Run Bakta on MAGs:

        • step_state: scheduled
      • Step 13: toolshed.g2.bx.psu.edu/repos/iuc/map_param_value/map_param_value/0.2.0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • cd ../; python _evaluate_expression_.py

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "6e9b82e40a4211f08a57002248aaba0a"
              chromInfo "/tmp/tmpunau54_g/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              input_param_type {"__current_case__": 0, "input_param": "MEGAHIT", "mappings": [{"__index__": 0, "from": "MEGAHIT", "to": "True"}], "type": "text"}
              output_param_type "boolean"
              unmapped {"__current_case__": 2, "default_value": "False", "on_unmapped": "default"}
      • Step 14: toolshed.g2.bx.psu.edu/repos/iuc/map_param_value/map_param_value/0.2.0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • cd ../; python _evaluate_expression_.py

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "6e9b82e40a4211f08a57002248aaba0a"
              chromInfo "/tmp/tmpunau54_g/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              input_param_type {"__current_case__": 0, "input_param": "MEGAHIT", "mappings": [{"__index__": 0, "from": "metaSPAdes", "to": "True"}], "type": "text"}
              output_param_type "boolean"
              unmapped {"__current_case__": 2, "default_value": "False", "on_unmapped": "default"}
      • Step 15: __UNZIP_COLLECTION__:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __workflow_invocation_uuid__ "6e9b82e40a4211f08a57002248aaba0a"
              input {"values": [{"id": 1, "src": "dce"}]}
      • Step 16: toolshed.g2.bx.psu.edu/repos/iuc/megahit/megahit/1.2.9+galaxy2:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • if [[ -n "$GALAXY_MEMORY_MB" ]]; then MEMORY="-m $((GALAXY_MEMORY_MB * 1024))"; fi;  megahit --num-cpu-threads ${GALAXY_SLOTS:-4}  -1 '/tmp/tmpunau54_g/files/6/1/8/dataset_618900d2-82f4-478e-b4de-59e697c3bf37.dat' -2 '/tmp/tmpunau54_g/files/7/3/9/dataset_739b969b-6d3d-4163-8939-56482ad4c124.dat' --min-count '2' --k-list '21,29,39,59,79,99,119,141'  --bubble-level '2' --merge-level '20,0.95' --prune-level '2' --prune-depth '2' --disconnect-ratio '0.1' --low-local-ratio '0.2' --cleaning-rounds '5'   --min-contig-len '200' $MEMORY

            Exit Code:

            • 0

            Standard Error:

            • 2025-03-26 13:02:35 - MEGAHIT v1.2.9
              2025-03-26 13:02:35 - Using megahit_core with POPCNT and BMI2 support
              2025-03-26 13:02:35 - Convert reads to binary library
              2025-03-26 13:02:35 - b'INFO  sequence/io/sequence_lib.cpp  :   75 - Lib 0 (/tmp/tmpunau54_g/files/6/1/8/dataset_618900d2-82f4-478e-b4de-59e697c3bf37.dat,/tmp/tmpunau54_g/files/7/3/9/dataset_739b969b-6d3d-4163-8939-56482ad4c124.dat): pe, 18924 reads, 150 max length'
              2025-03-26 13:02:35 - b'INFO  utils/utils.h                 :  152 - Real: 0.0419\tuser: 0.0412\tsys: 0.0020\tmaxrss: 20616'
              2025-03-26 13:02:35 - Start assembly. Number of CPU threads 1 
              2025-03-26 13:02:35 - k list: 21,29,39,59,79,99,119,141 
              2025-03-26 13:02:35 - Memory used: 15090090393
              2025-03-26 13:02:35 - Extract solid (k+1)-mers for k = 21 
              2025-03-26 13:02:36 - Build graph for k = 21 
              2025-03-26 13:02:36 - Assemble contigs from SdBG for k = 21
              2025-03-26 13:02:37 - Local assembly for k = 21
              2025-03-26 13:02:38 - Extract iterative edges from k = 21 to 29 
              2025-03-26 13:02:38 - Build graph for k = 29 
              2025-03-26 13:02:38 - Assemble contigs from SdBG for k = 29
              2025-03-26 13:02:38 - Local assembly for k = 29
              2025-03-26 13:02:39 - Extract iterative edges from k = 29 to 39 
              2025-03-26 13:02:39 - Build graph for k = 39 
              2025-03-26 13:02:39 - Assemble contigs from SdBG for k = 39
              2025-03-26 13:02:40 - Local assembly for k = 39
              2025-03-26 13:02:41 - Extract iterative edges from k = 39 to 59 
              2025-03-26 13:02:41 - Build graph for k = 59 
              2025-03-26 13:02:41 - Assemble contigs from SdBG for k = 59
              2025-03-26 13:02:41 - Local assembly for k = 59
              2025-03-26 13:02:42 - Extract iterative edges from k = 59 to 79 
              2025-03-26 13:02:42 - Build graph for k = 79 
              2025-03-26 13:02:42 - Assemble contigs from SdBG for k = 79
              2025-03-26 13:02:42 - Local assembly for k = 79
              2025-03-26 13:02:43 - Extract iterative edges from k = 79 to 99 
              2025-03-26 13:02:43 - Build graph for k = 99 
              2025-03-26 13:02:43 - Assemble contigs from SdBG for k = 99
              2025-03-26 13:02:43 - Local assembly for k = 99
              2025-03-26 13:02:43 - Extract iterative edges from k = 99 to 119 
              2025-03-26 13:02:43 - Build graph for k = 119 
              2025-03-26 13:02:44 - Assemble contigs from SdBG for k = 119
              2025-03-26 13:02:44 - Local assembly for k = 119
              2025-03-26 13:02:44 - Extract iterative edges from k = 119 to 141 
              2025-03-26 13:02:44 - Build graph for k = 141 
              2025-03-26 13:02:44 - Assemble contigs from SdBG for k = 141
              2025-03-26 13:02:45 - Merging to output final contigs 
              2025-03-26 13:02:45 - 63 contigs, total 266124 bp, min 354 bp, max 11001 bp, avg 4224 bp, N50 4877 bp
              2025-03-26 13:02:45 - ALL DONE. Time elapsed: 9.625468 seconds 
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "fastqsanger.gz"
              __workflow_invocation_uuid__ "6e9b82e40a4211f08a57002248aaba0a"
              advanced_section {"bubble_level": "2", "cleaning_rounds": "5", "disconnect_ratio": "0.1", "kmin1pass": false, "low_local_ratio": "0.2", "merge_level": "20,0.95", "nolocal": false, "nomercy": false, "prune_depth": "2", "prune_level": "2"}
              basic_section {"k_mer": {"__current_case__": 0, "k_list": "21,29,39,59,79,99,119,141", "k_mer_method": "klist_method"}, "min_count": "2"}
              chromInfo "/tmp/tmpunau54_g/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              input_option {"__current_case__": 3, "batchmode": {"__current_case__": 0, "pair_input": {"values": [{"id": 1, "src": "dce"}]}, "processmode": "individual"}, "choice": "paired_collection"}
              output_section {"log_file": false, "min_contig_len": "200", "show_intermediate_contigs": false}
      • Step 17: toolshed.g2.bx.psu.edu/repos/nml/metaspades/metaspades/4.1.0+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is skipped

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "6e9b82e40a4211f08a57002248aaba0a"
              additional_reads {"__current_case__": 1, "selector": "false"}
              arf {"nanopore": null, "pacbio": null}
              chromInfo "/tmp/tmpunau54_g/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              kmer_cond {"__current_case__": 0, "kmer_sel": "auto"}
              library_number "true"
              mode_sel ["--iontorrent"]
              optional_output ["ag", "ags", "cn", "cs"]
              phred_offset "auto"
              singlePaired {"__current_case__": 1, "input": {"values": [{"id": 1, "src": "hdca"}]}, "orientation": "fr", "sPaired": "paired_collection", "type_paired": "pe"}
      • Step 18: toolshed.g2.bx.psu.edu/repos/iuc/pick_value/pick_value/0.2.0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • cd ../; python _evaluate_expression_.py

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "6e9b82e40a4211f08a57002248aaba0a"
              chromInfo "/tmp/tmpunau54_g/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              style_cond {"__current_case__": 0, "pick_style": "first", "type_cond": {"__current_case__": 4, "param_type": "data", "pick_from": [{"__index__": 0, "value": {"values": [{"id": 12, "src": "hda"}]}}, {"__index__": 1, "value": {"values": [{"id": 9, "src": "dce"}]}}]}}
      • Step 19: toolshed.g2.bx.psu.edu/repos/iuc/quast/quast/5.3.0+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • echo 50contig_reads &&   ln -s '/tmp/tmpunau54_g/files/6/1/8/dataset_618900d2-82f4-478e-b4de-59e697c3bf37.dat' 'pe1-50contig_reads.fastqsanger.gz' && ln -s '/tmp/tmpunau54_g/files/7/3/9/dataset_739b969b-6d3d-4163-8939-56482ad4c124.dat' 'pe2-50contig_reads.fastqsanger.gz' &&  metaquast  --pe1 'pe1-50contig_reads.fastqsanger.gz' --pe2 'pe2-50contig_reads.fastqsanger.gz' --labels '50contig_reads' -o 'outputdir'  --max-ref-num 0  --min-identity 90.0 --min-contig 500        --min-alignment 65 --ambiguity-usage 'one' --ambiguity-score 0.99   --local-mis-size 200   --contig-thresholds '0,1000'  --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500   --x-for-Nx 90  '/tmp/tmpunau54_g/files/a/f/8/dataset_af8ab189-efe4-4cd3-a676-5e305dffe743.dat' --threads ${GALAXY_SLOTS:-1}  && if [[ -f "outputdir/report.tsv" ]]; then mkdir -p "outputdir/combined_reference/" && cp "outputdir/report.tsv" "outputdir/combined_reference/report.tsv"; fi && if [[ -f "outputdir/report.html" ]]; then mkdir -p "outputdir/combined_reference/" && cp outputdir/*.html "outputdir/combined_reference/"; fi && mkdir -p '/tmp/tmpunau54_g/job_working_directory/000/11/outputs/dataset_9988ba27-427c-42d6-b301-ca3a0bf3f8e1_files' && cp outputdir/combined_reference/*.html '/tmp/tmpunau54_g/job_working_directory/000/11/outputs/dataset_9988ba27-427c-42d6-b301-ca3a0bf3f8e1_files' && if [[ -d "outputdir/icarus_viewers" ]]; then cp -R outputdir/icarus_viewers 'outputdir/combined_reference/'; fi && if [[ -d "outputdir/combined_reference/icarus_viewers" ]]; then cp -R outputdir/combined_reference/icarus_viewers '/tmp/tmpunau54_g/job_working_directory/000/11/outputs/dataset_9988ba27-427c-42d6-b301-ca3a0bf3f8e1_files'; fi && if [[ -d "outputdir/krona_charts/" ]]; then mkdir -p 'None' && cp outputdir/krona_charts/*.html 'None'; fi

            Exit Code:

            • 0

            Standard Output:

            • 50contig_reads
              /usr/local/opt/quast-5.3.0/metaquast.py --pe1 pe1-50contig_reads.fastqsanger.gz --pe2 pe2-50contig_reads.fastqsanger.gz --labels 50contig_reads -o outputdir --max-ref-num 0 --min-identity 90.0 --min-contig 500 --min-alignment 65 --ambiguity-usage one --ambiguity-score 0.99 --local-mis-size 200 --contig-thresholds 0,1000 --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500 --x-for-Nx 90 /tmp/tmpunau54_g/files/a/f/8/dataset_af8ab189-efe4-4cd3-a676-5e305dffe743.dat --threads 1
              
              Version: 5.3.0
              
              System information:
                OS: Linux-6.8.0-1021-azure-x86_64-with-glibc2.36 (linux_64)
                Python version: 3.12.3
                CPUs number: 4
              
              Started: 2025-03-26 13:03:39
              
              Logging to /tmp/tmpunau54_g/job_working_directory/000/11/working/outputdir/metaquast.log
              WARNING: --ambiguity-usage was set to 'all' because not default --ambiguity-score was specified
              INFO	generated new fontManager
              INFO	generated new fontManager
              
              Contigs:
                Pre-processing...
                /tmp/tmpunau54_g/files/a/f/8/dataset_af8ab189-efe4-4cd3-a676-5e305dffe743.dat ==> 50contig_reads
              
              NOTICE: Maximum number of references (--max-ref-number) is set to 0, search in SILVA 16S rRNA database is disabled
              
              NOTICE: No references are provided, starting regular QUAST with MetaGeneMark gene finder
              /usr/local/opt/quast-5.3.0/quast.py --pe1 pe1-50contig_reads.fastqsanger.gz --pe2 pe2-50contig_reads.fastqsanger.gz --min-identity 90.0 --min-contig 500 --min-alignment 65 --ambiguity-usage one --ambiguity-score 0.99 --local-mis-size 200 --contig-thresholds 0,1000 --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500 --x-for-Nx 90 --threads 1 /tmp/tmpunau54_g/files/a/f/8/dataset_af8ab189-efe4-4cd3-a676-5e305dffe743.dat -o /tmp/tmpunau54_g/job_working_directory/000/11/working/outputdir --labels 50contig_reads
              
              Version: 5.3.0
              
              System information:
                OS: Linux-6.8.0-1021-azure-x86_64-with-glibc2.36 (linux_64)
                Python version: 3.12.3
                CPUs number: 4
              
              Started: 2025-03-26 13:03:40
              
              Logging to /tmp/tmpunau54_g/job_working_directory/000/11/working/outputdir/quast.log
              NOTICE: Output directory already exists and looks like a QUAST output dir. Existing results can be reused (e.g. previously generated alignments)!
              WARNING: --ambiguity-usage was set to 'all' because not default --ambiguity-score was specified
              
              CWD: /tmp/tmpunau54_g/job_working_directory/000/11/working
              Main parameters: 
                MODE: meta, threads: 1, min contig length: 500, min alignment length: 65, min alignment IDY: 90.0, \
                ambiguity: all, min local misassembly length: 200, min extensive misassembly length: 1000
              
              Contigs:
                Pre-processing...
                /tmp/tmpunau54_g/files/a/f/8/dataset_af8ab189-efe4-4cd3-a676-5e305dffe743.dat ==> 50contig_reads
              
              2025-03-26 13:03:40
              Running Reads analyzer...
              NOTICE: Permission denied accessing /usr/local/lib/python3.12/site-packages/quast_libs/gridss. GRIDSS will be downloaded to home directory /tmp/tmpunau54_g/job_working_directory/000/11/home/.quast
              Downloading gridss (file: gridss-1.4.1.jar)...
               0.0% – 99.0% of 38935087 bytes (per-percent download progress lines elided)
              gridss successfully downloaded!
                Logging to files /tmp/tmpunau54_g/job_working_directory/000/11/working/outputdir/reads_stats/reads_stats.log and /tmp/tmpunau54_g/job_working_directory/000/11/working/outputdir/reads_stats/reads_stats.err...
                Pre-processing reads...
                Running BWA...
                Done.
                Sorting SAM-file...
                Analysis is finished.
                Creating total report...
                  saved to /tmp/tmpunau54_g/job_working_directory/000/11/working/outputdir/reads_stats/reads_report.txt, reads_report.tsv, and reads_report.tex
              Done.
              
              2025-03-26 13:03:45
              Running Basic statistics processor...
                Contig files: 
                  50contig_reads
                Calculating N50 and L50...
                  50contig_reads, N50 = 4877, L50 = 22, auN = 5187.8, Total length = 265405, GC % = 36.45, # N's per 100 kbp =  0.00
                Drawing Nx plot...
                  saved to /tmp/tmpunau54_g/job_working_directory/000/11/working/outputdir/basic_stats/Nx_plot.pdf
                Drawing cumulative plot...
                  saved to /tmp/tmpunau54_g/job_working_directory/000/11/working/outputdir/basic_stats/cumulative_plot.pdf
                Drawing GC content plot...
                  saved to /tmp/tmpunau54_g/job_working_directory/000/11/working/outputdir/basic_stats/GC_content_plot.pdf
                Drawing 50contig_reads GC content plot...
                  saved to /tmp/tmpunau54_g/job_working_directory/000/11/working/outputdir/basic_stats/50contig_reads_GC_content_plot.pdf
              Done.
              
              NOTICE: Genes are not predicted by default. Use --gene-finding or --glimmer option to enable it.
              
              2025-03-26 13:03:46
              Creating large visual summaries...
              This may take a while: press Ctrl-C to skip this step..
                1 of 2: Creating PDF with all tables and plots...
                2 of 2: Creating Icarus viewers...
              Done
              
              2025-03-26 13:03:47
              RESULTS:
                Text versions of total report are saved to /tmp/tmpunau54_g/job_working_directory/000/11/working/outputdir/report.txt, report.tsv, and report.tex
                Text versions of transposed total report are saved to /tmp/tmpunau54_g/job_working_directory/000/11/working/outputdir/transposed_report.txt, transposed_report.tsv, and transposed_report.tex
                HTML version (interactive tables and plots) is saved to /tmp/tmpunau54_g/job_working_directory/000/11/working/outputdir/report.html
                PDF version (tables and plots) is saved to /tmp/tmpunau54_g/job_working_directory/000/11/working/outputdir/report.pdf
                Icarus (contig browser) is saved to /tmp/tmpunau54_g/job_working_directory/000/11/working/outputdir/icarus.html
                Log is saved to /tmp/tmpunau54_g/job_working_directory/000/11/working/outputdir/quast.log
              
              Finished: 2025-03-26 13:03:47
              Elapsed time: 0:00:06.934427
              NOTICEs: 3; WARNINGs: 1; non-fatal ERRORs: 0
              
              Thank you for using QUAST!
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "6e9b82e40a4211f08a57002248aaba0a"
              advanced {"contig_thresholds": "0,1000", "extensive_mis_size": "1000", "fragmented_max_indent": null, "report_all_metrics": false, "scaffold_gap_max_size": "1000", "skip_unaligned_mis_contigs": true, "strict_NA": false, "unaligned_part_size": "500", "x_for_Nx": "90"}
              alignments {"ambiguity_score": "0.99", "ambiguity_usage": "one", "fragmented": false, "local_mis_size": "200", "min_alignment": "65", "upper_bound_assembly": false, "upper_bound_min_con": null, "use_all_alignments": false}
              assembly {"__current_case__": 1, "min_identity": "90.0", "ref": {"__current_case__": 0, "max_ref_num": "0", "origin": "silva"}, "reuse_combined_alignments": false, "type": "metagenome"}
              chromInfo "/tmp/tmpunau54_g/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              genes {"conserved_genes_finding": false, "gene_finding": {"__current_case__": 0, "tool": "none"}, "rna_finding": false}
              large false
              min_contig "500"
              mode {"__current_case__": 0, "in": {"__current_case__": 1, "custom": "false", "inputs": {"values": [{"id": 10, "src": "dce"}]}}, "mode": "individual", "reads": {"__current_case__": 2, "input_1": {"values": [{"id": 7, "src": "dce"}]}, "input_2": {"values": [{"id": 8, "src": "dce"}]}, "reads_option": "paired"}}
              output_files ["html", "pdf", "tabular", "log", "summary", "krona"]
              split_scaffolds false
      • Step 20: toolshed.g2.bx.psu.edu/repos/devteam/bowtie2/bowtie2/2.5.3+galaxy1:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • set -o | grep -q pipefail && set -o pipefail; bowtie2-build --threads ${GALAXY_SLOTS:-4} '/tmp/tmpunau54_g/files/a/f/8/dataset_af8ab189-efe4-4cd3-a676-5e305dffe743.dat' genome && ln -s -f '/tmp/tmpunau54_g/files/a/f/8/dataset_af8ab189-efe4-4cd3-a676-5e305dffe743.dat' genome.fa &&   ln -f -s '/tmp/tmpunau54_g/files/6/1/8/dataset_618900d2-82f4-478e-b4de-59e697c3bf37.dat' input_f.fastq.gz &&  ln -f -s '/tmp/tmpunau54_g/files/7/3/9/dataset_739b969b-6d3d-4163-8939-56482ad4c124.dat' input_r.fastq.gz &&    THREADS=${GALAXY_SLOTS:-4} && if [ "$THREADS" -gt 1 ]; then (( THREADS-- )); fi &&   bowtie2  -p "$THREADS"  -x 'genome'   -1 'input_f.fastq.gz' -2 'input_r.fastq.gz'                2> >(tee '/tmp/tmpunau54_g/job_working_directory/000/12/outputs/dataset_fcc635a9-6816-4975-a389-52a4c4009289.dat' >&2)  | samtools sort -l 0 -T "${TMPDIR:-.}" -O bam | samtools view --no-PG -O bam -@ ${GALAXY_SLOTS:-1} -o '/tmp/tmpunau54_g/job_working_directory/000/12/outputs/dataset_0a5b7195-0d10-4a74-ba58-5e7908a309b0.dat'

            Exit Code:

            • 0

            Standard Error:

            • Building a SMALL index
              Renaming genome.3.bt2.tmp to genome.3.bt2
              Renaming genome.4.bt2.tmp to genome.4.bt2
              Renaming genome.1.bt2.tmp to genome.1.bt2
              Renaming genome.2.bt2.tmp to genome.2.bt2
              Renaming genome.rev.1.bt2.tmp to genome.rev.1.bt2
              Renaming genome.rev.2.bt2.tmp to genome.rev.2.bt2
              9462 reads; of these:
                9462 (100.00%) were paired; of these:
                  90 (0.95%) aligned concordantly 0 times
                  9300 (98.29%) aligned concordantly exactly 1 time
                  72 (0.76%) aligned concordantly >1 times
                  ----
                  90 pairs aligned concordantly 0 times; of these:
                    8 (8.89%) aligned discordantly 1 time
                  ----
                  82 pairs aligned 0 times concordantly or discordantly; of these:
                    164 mates make up the pairs; of these:
                      93 (56.71%) aligned 0 times
                      70 (42.68%) aligned exactly 1 time
                      1 (0.61%) aligned >1 times
              99.51% overall alignment rate
              

            Standard Output:

            • Settings:
                Output files: "genome.*.bt2"
                Line rate: 6 (line is 64 bytes)
                Lines per side: 1 (side is 64 bytes)
                Offset rate: 4 (one in 16)
                FTable chars: 10
                Strings: unpacked
                Max bucket size: default
                Max bucket size, sqrt multiplier: default
                Max bucket size, len divisor: 4
                Difference-cover sample period: 1024
                Endianness: little
                Actual local endianness: little
                Sanity checking: disabled
                Assertions: disabled
                Random seed: 0
                Sizeofs: void*:8, int:4, long:8, size_t:8
              Input files DNA, FASTA:
                /tmp/tmpunau54_g/files/a/f/8/dataset_af8ab189-efe4-4cd3-a676-5e305dffe743.dat
              Reading reference sizes
                Time reading reference sizes: 00:00:00
              Calculating joined length
              Writing header
              Reserving space for joined string
              Joining reference sequences
                Time to join reference sequences: 00:00:00
              bmax according to bmaxDivN setting: 66531
              Using parameters --bmax 49899 --dcv 1024
                Doing ahead-of-time memory usage test
                Passed!  Constructing with these parameters: --bmax 49899 --dcv 1024
              Constructing suffix-array element generator
              Building DifferenceCoverSample
                Building sPrime
                Building sPrimeOrder
                V-Sorting samples
                V-Sorting samples time: 00:00:00
                Allocating rank array
                Ranking v-sort output
                Ranking v-sort output time: 00:00:00
                Invoking Larsson-Sadakane on ranks
                Invoking Larsson-Sadakane on ranks time: 00:00:00
                Sanity-checking and returning
              Building samples
              Reserving space for 12 sample suffixes
              Generating random suffixes
              QSorting 12 sample offsets, eliminating duplicates
              QSorting sample offsets, eliminating duplicates time: 00:00:00
              Multikey QSorting 12 samples
                (Using difference cover)
                Multikey QSorting samples time: 00:00:00
              Calculating bucket sizes
              Splitting and merging
                Splitting and merging time: 00:00:00
              Split 1, merged 6; iterating...
              Splitting and merging
                Splitting and merging time: 00:00:00
              Avg bucket size: 38016.9 (target: 49898)
              Converting suffix-array elements to index image
              Allocating ftab, absorbFtab
              Entering Ebwt loop
              Getting block 1 of 7
                Reserving size (49899) for bucket 1
                Calculating Z arrays for bucket 1
                Entering block accumulator loop for bucket 1:
                bucket 1: 10%
                bucket 1: 20%
                bucket 1: 30%
                bucket 1: 40%
                bucket 1: 50%
                bucket 1: 60%
                bucket 1: 70%
                bucket 1: 80%
                bucket 1: 90%
                bucket 1: 100%
                Sorting block of length 18323 for bucket 1
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 18324 for bucket 1
              Getting block 2 of 7
                Reserving size (49899) for bucket 2
                Calculating Z arrays for bucket 2
                Entering block accumulator loop for bucket 2:
                bucket 2: 10%
                bucket 2: 20%
                bucket 2: 30%
                bucket 2: 40%
                bucket 2: 50%
                bucket 2: 60%
                bucket 2: 70%
                bucket 2: 80%
                bucket 2: 90%
                bucket 2: 100%
                Sorting block of length 49606 for bucket 2
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 49607 for bucket 2
              Getting block 3 of 7
                Reserving size (49899) for bucket 3
                Calculating Z arrays for bucket 3
                Entering block accumulator loop for bucket 3:
                bucket 3: 10%
                bucket 3: 20%
                bucket 3: 30%
                bucket 3: 40%
                bucket 3: 50%
                bucket 3: 60%
                bucket 3: 70%
                bucket 3: 80%
                bucket 3: 90%
                bucket 3: 100%
                Sorting block of length 45151 for bucket 3
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 45152 for bucket 3
              Getting block 4 of 7
                Reserving size (49899) for bucket 4
                Calculating Z arrays for bucket 4
                Entering block accumulator loop for bucket 4:
                bucket 4: 10%
                bucket 4: 20%
                bucket 4: 30%
                bucket 4: 40%
                bucket 4: 50%
                bucket 4: 60%
                bucket 4: 70%
                bucket 4: 80%
                bucket 4: 90%
                bucket 4: 100%
                Sorting block of length 49787 for bucket 4
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 49788 for bucket 4
              Getting block 5 of 7
                Reserving size (49899) for bucket 5
                Calculating Z arrays for bucket 5
                Entering block accumulator loop for bucket 5:
                bucket 5: 10%
                bucket 5: 20%
                bucket 5: 30%
                bucket 5: 40%
                bucket 5: 50%
                bucket 5: 60%
                bucket 5: 70%
                bucket 5: 80%
                bucket 5: 90%
                bucket 5: 100%
                Sorting block of length 28638 for bucket 5
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 28639 for bucket 5
              Getting block 6 of 7
                Reserving size (49899) for bucket 6
                Calculating Z arrays for bucket 6
                Entering block accumulator loop for bucket 6:
                bucket 6: 10%
                bucket 6: 20%
                bucket 6: 30%
                bucket 6: 40%
                bucket 6: 50%
                bucket 6: 60%
                bucket 6: 70%
                bucket 6: 80%
                bucket 6: 90%
                bucket 6: 100%
                Sorting block of length 43194 for bucket 6
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 43195 for bucket 6
              Getting block 7 of 7
                Reserving size (49899) for bucket 7
                Calculating Z arrays for bucket 7
                Entering block accumulator loop for bucket 7:
                bucket 7: 10%
                bucket 7: 20%
                bucket 7: 30%
                bucket 7: 40%
                bucket 7: 50%
                bucket 7: 60%
                bucket 7: 70%
                bucket 7: 80%
                bucket 7: 90%
                bucket 7: 100%
                Sorting block of length 31419 for bucket 7
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 31420 for bucket 7
              Exited Ebwt loop
              fchr[A]: 0
              fchr[C]: 84325
              fchr[G]: 133305
              fchr[T]: 181307
              fchr[$]: 266124
              Exiting Ebwt::buildToDisk()
              Returning from initFromVector
              Wrote 4286544 bytes to primary EBWT file: genome.1.bt2.tmp
              Wrote 66536 bytes to secondary EBWT file: genome.2.bt2.tmp
              Re-opening _in1 and _in2 as input streams
              Returning from Ebwt constructor
              Headers:
                  len: 266124
                  bwtLen: 266125
                  sz: 66531
                  bwtSz: 66532
                  lineRate: 6
                  offRate: 4
                  offMask: 0xfffffff0
                  ftabChars: 10
                  eftabLen: 20
                  eftabSz: 80
                  ftabLen: 1048577
                  ftabSz: 4194308
                  offsLen: 16633
                  offsSz: 66532
                  lineSz: 64
                  sideSz: 64
                  sideBwtSz: 48
                  sideBwtLen: 192
                  numSides: 1387
                  numLines: 1387
                  ebwtTotLen: 88768
                  ebwtTotSz: 88768
                  color: 0
                  reverse: 0
              Total time for call to driver() for forward index: 00:00:00
              Reading reference sizes
                Time reading reference sizes: 00:00:00
              Calculating joined length
              Writing header
              Reserving space for joined string
              Joining reference sequences
                Time to join reference sequences: 00:00:00
                Time to reverse reference sequence: 00:00:00
              bmax according to bmaxDivN setting: 66531
              Using parameters --bmax 49899 --dcv 1024
                Doing ahead-of-time memory usage test
                Passed!  Constructing with these parameters: --bmax 49899 --dcv 1024
              Constructing suffix-array element generator
              Building DifferenceCoverSample
                Building sPrime
                Building sPrimeOrder
                V-Sorting samples
                V-Sorting samples time: 00:00:00
                Allocating rank array
                Ranking v-sort output
                Ranking v-sort output time: 00:00:00
                Invoking Larsson-Sadakane on ranks
                Invoking Larsson-Sadakane on ranks time: 00:00:00
                Sanity-checking and returning
              Building samples
              Reserving space for 12 sample suffixes
              Generating random suffixes
              QSorting 12 sample offsets, eliminating duplicates
              QSorting sample offsets, eliminating duplicates time: 00:00:00
              Multikey QSorting 12 samples
                (Using difference cover)
                Multikey QSorting samples time: 00:00:00
              Calculating bucket sizes
              Splitting and merging
                Splitting and merging time: 00:00:00
              Split 1, merged 7; iterating...
              Splitting and merging
                Splitting and merging time: 00:00:00
              Avg bucket size: 44353.2 (target: 49898)
              Converting suffix-array elements to index image
              Allocating ftab, absorbFtab
              Entering Ebwt loop
              Getting block 1 of 6
                Reserving size (49899) for bucket 1
                Calculating Z arrays for bucket 1
                Entering block accumulator loop for bucket 1:
                bucket 1: 10%
                bucket 1: 20%
                bucket 1: 30%
                bucket 1: 40%
                bucket 1: 50%
                bucket 1: 60%
                bucket 1: 70%
                bucket 1: 80%
                bucket 1: 90%
                bucket 1: 100%
                Sorting block of length 47687 for bucket 1
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 47688 for bucket 1
              Getting block 2 of 6
                Reserving size (49899) for bucket 2
                Calculating Z arrays for bucket 2
                Entering block accumulator loop for bucket 2:
                bucket 2: 10%
                bucket 2: 20%
                bucket 2: 30%
                bucket 2: 40%
                bucket 2: 50%
                bucket 2: 60%
                bucket 2: 70%
                bucket 2: 80%
                bucket 2: 90%
                bucket 2: 100%
                Sorting block of length 36636 for bucket 2
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 36637 for bucket 2
              Getting block 3 of 6
                Reserving size (49899) for bucket 3
                Calculating Z arrays for bucket 3
                Entering block accumulator loop for bucket 3:
                bucket 3: 10%
                bucket 3: 20%
                bucket 3: 30%
                bucket 3: 40%
                bucket 3: 50%
                bucket 3: 60%
                bucket 3: 70%
                bucket 3: 80%
                bucket 3: 90%
                bucket 3: 100%
                Sorting block of length 49027 for bucket 3
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 49028 for bucket 3
              Getting block 4 of 6
                Reserving size (49899) for bucket 4
                Calculating Z arrays for bucket 4
                Entering block accumulator loop for bucket 4:
                bucket 4: 10%
                bucket 4: 20%
                bucket 4: 30%
                bucket 4: 40%
                bucket 4: 50%
                bucket 4: 60%
                bucket 4: 70%
                bucket 4: 80%
                bucket 4: 90%
                bucket 4: 100%
                Sorting block of length 37449 for bucket 4
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 37450 for bucket 4
              Getting block 5 of 6
                Reserving size (49899) for bucket 5
                Calculating Z arrays for bucket 5
                Entering block accumulator loop for bucket 5:
                bucket 5: 10%
                bucket 5: 20%
                bucket 5: 30%
                bucket 5: 40%
                bucket 5: 50%
                bucket 5: 60%
                bucket 5: 70%
                bucket 5: 80%
                bucket 5: 90%
                bucket 5: 100%
                Sorting block of length 47142 for bucket 5
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 47143 for bucket 5
              Getting block 6 of 6
                Reserving size (49899) for bucket 6
                Calculating Z arrays for bucket 6
                Entering block accumulator loop for bucket 6:
                bucket 6: 10%
                bucket 6: 20%
                bucket 6: 30%
                bucket 6: 40%
                bucket 6: 50%
                bucket 6: 60%
                bucket 6: 70%
                bucket 6: 80%
                bucket 6: 90%
                bucket 6: 100%
                Sorting block of length 48178 for bucket 6
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 48179 for bucket 6
              Exited Ebwt loop
              fchr[A]: 0
              fchr[C]: 84325
              fchr[G]: 133305
              fchr[T]: 181307
              fchr[$]: 266124
              Exiting Ebwt::buildToDisk()
              Returning from initFromVector
              Wrote 4286544 bytes to primary EBWT file: genome.rev.1.bt2.tmp
              Wrote 66536 bytes to secondary EBWT file: genome.rev.2.bt2.tmp
              Re-opening _in1 and _in2 as input streams
              Returning from Ebwt constructor
              Headers:
                  len: 266124
                  bwtLen: 266125
                  sz: 66531
                  bwtSz: 66532
                  lineRate: 6
                  offRate: 4
                  offMask: 0xfffffff0
                  ftabChars: 10
                  eftabLen: 20
                  eftabSz: 80
                  ftabLen: 1048577
                  ftabSz: 4194308
                  offsLen: 16633
                  offsSz: 66532
                  lineSz: 64
                  sideSz: 64
                  sideBwtSz: 48
                  sideBwtLen: 192
                  numSides: 1387
                  numLines: 1387
                  ebwtTotLen: 88768
                  ebwtTotSz: 88768
                  color: 0
                  reverse: 1
              Total time for backward call to driver() for mirror index: 00:00:01
              

            Traceback:

            Job Parameters:

            • Job parameter | Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "6e9b82e40a4211f08a57002248aaba0a"
              analysis_type {"__current_case__": 0, "analysis_type_selector": "simple", "presets": "no_presets"}
              chromInfo "/tmp/tmpunau54_g/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              library {"__current_case__": 2, "aligned_file": false, "input_1": {"values": [{"id": 1, "src": "dce"}]}, "paired_options": {"__current_case__": 1, "paired_options_selector": "no"}, "type": "paired_collection", "unaligned_file": false}
              own_file __identifier__
              reference_genome {"__current_case__": 1, "own_file": {"values": [{"id": 10, "src": "dce"}]}, "source": "history"}
              rg {"__current_case__": 3, "rg_selector": "do_not_set"}
              sam_options {"__current_case__": 1, "sam_options_selector": "no"}
              save_mapping_stats true
      • Step 3: Read length (CONCOCT):

        • step_state: scheduled
      • Step 21: toolshed.g2.bx.psu.edu/repos/iuc/concoct_cut_up_fasta/concoct_cut_up_fasta/1.1.0+galaxy2:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • ln -s '/tmp/tmpunau54_g/files/a/f/8/dataset_af8ab189-efe4-4cd3-a676-5e305dffe743.dat' 'input.fa' &&  cut_up_fasta.py 'input.fa' --chunk_size 10000 --overlap_size 0 --merge_last --bedfile '/tmp/tmpunau54_g/job_working_directory/000/13/outputs/dataset_4432b494-d554-43e7-9fdc-c217e0dce33a.dat' > '/tmp/tmpunau54_g/job_working_directory/000/13/outputs/dataset_9236a430-6f5d-4ac9-abd3-7b0f543e479f.dat'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter | Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "6e9b82e40a4211f08a57002248aaba0a"
              bedfile true
              chromInfo "/tmp/tmpunau54_g/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              chunk_size "10000"
              dbkey "?"
              input_fasta __identifier__
              merge_last true
              overlap_size "0"
      • Step 22: toolshed.g2.bx.psu.edu/repos/devteam/samtools_sort/samtools_sort/2.0.5:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • addthreads=${GALAXY_SLOTS:-1} && (( addthreads-- )) &&   addmemory=${GALAXY_MEMORY_MB_PER_SLOT:-768} && ((addmemory=addmemory*75/100)) &&  samtools sort -@ $addthreads -m $addmemory"M"   -O bam -T "${TMPDIR:-.}" '/tmp/tmpunau54_g/files/0/a/5/dataset_0a5b7195-0d10-4a74-ba58-5e7908a309b0.dat' > '/tmp/tmpunau54_g/job_working_directory/000/14/outputs/dataset_c9c193af-93a8-4086-8ac0-e3d6801fbe6f.dat'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter | Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "6e9b82e40a4211f08a57002248aaba0a"
              chromInfo "/tmp/tmpunau54_g/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              input1 __identifier__
              minhash false
              prim_key_cond {"__current_case__": 0, "prim_key_select": ""}
      • Step 23: toolshed.g2.bx.psu.edu/repos/iuc/semibin/semibin/2.0.2+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is error

            Traceback:

            Job Parameters:

            • Job parameter | Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "6e9b82e40a4211f08a57002248aaba0a"
              annot {"ml_threshold": null}
              bin {"max_edges": "200", "max_node": "1.0", "minfasta_kbs": "200"}
              chromInfo "/tmp/tmpunau54_g/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              extra_output ["data", "coverage"]
              min_len {"__current_case__": 0, "method": "automatic"}
              mode {"__current_case__": 0, "environment": "global", "input_bam": {"values": [{"id": 20, "src": "dce"}]}, "input_fasta": {"values": [{"id": 10, "src": "dce"}]}, "ref": {"__current_case__": 0, "cached_db": "17102022", "select": "cached"}, "select": "single"}
              orf_finder "fast-naive"
              random_seed "0"
              training {"batch_size": "2048", "epoches": "20"}
      • Step 24: toolshed.g2.bx.psu.edu/repos/iuc/concoct_coverage_table/concoct_coverage_table/1.1.0+galaxy2:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is running

            Command Line:

            • mkdir 'mapping' && ln -s '/tmp/tmpunau54_g/files/c/9/c/dataset_c9c193af-93a8-4086-8ac0-e3d6801fbe6f.dat' 'mapping/_tmp_tmpunau54_g_files_c_9_c_dataset_c9c193af-93a8-4086-8ac0-e3d6801fbe6f.dat.sorted.bam' && samtools index 'mapping/_tmp_tmpunau54_g_files_c_9_c_dataset_c9c193af-93a8-4086-8ac0-e3d6801fbe6f.dat.sorted.bam' 'mapping/_tmp_tmpunau54_g_files_c_9_c_dataset_c9c193af-93a8-4086-8ac0-e3d6801fbe6f.dat.bam.bai' && mv 'mapping/_tmp_tmpunau54_g_files_c_9_c_dataset_c9c193af-93a8-4086-8ac0-e3d6801fbe6f.dat.sorted.bam' 'mapping/_tmp_tmpunau54_g_files_c_9_c_dataset_c9c193af-93a8-4086-8ac0-e3d6801fbe6f.dat.bam' && concoct_coverage_table.py '/tmp/tmpunau54_g/files/4/4/3/dataset_4432b494-d554-43e7-9fdc-c217e0dce33a.dat' mapping/*.bam > '/tmp/tmpunau54_g/job_working_directory/000/16/outputs/dataset_a822225a-50d9-4f0b-a221-6c5832e0acda.dat'

            Traceback:

            Job Parameters:

            • Job parameter | Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "6e9b82e40a4211f08a57002248aaba0a"
              bedfile __identifier__
              chromInfo "/tmp/tmpunau54_g/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              mode {"__current_case__": 0, "bamfile": {"values": [{"id": 20, "src": "dce"}]}, "type": "individual"}
              mode bamfile
      • Step 25: toolshed.g2.bx.psu.edu/repos/iuc/metabat2_jgi_summarize_bam_contig_depths/metabat2_jgi_summarize_bam_contig_depths/2.17+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is queued

            Command Line:

            • jgi_summarize_bam_contig_depths --outputDepth '/tmp/tmpunau54_g/job_working_directory/000/17/outputs/dataset_bea7c80b-39f4-4946-a286-07d18ee8053c.dat' --percentIdentity 97   --minMapQual 0 --weightMapQual 0.0  --maxEdgeBases 75 --shredLength 16000 --shredDepth 5 --minContigLength 1 --minContigDepth 0.0 '/tmp/tmpunau54_g/files/c/9/c/dataset_c9c193af-93a8-4086-8ac0-e3d6801fbe6f.dat'

            Traceback:

            Job Parameters:

            • Job parameter | Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "6e9b82e40a4211f08a57002248aaba0a"
              advanced {"includeEdgeBases": false, "maxEdgeBases": "75", "minMapQual": "0", "noIntraDepthVariance": false, "output_paired_contigs": false, "percentIdentity": "97", "showDepth": false, "weightMapQual": "0.0"}
              bam_indiv_input __identifier__
              chromInfo "/tmp/tmpunau54_g/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              mode {"__current_case__": 0, "bam_indiv_input": {"values": [{"id": 20, "src": "dce"}]}, "type": "individual", "use_reference_cond": {"__current_case__": 0, "use_reference": "no"}}
              shredding {"minContigDepth": "0.0", "minContigLength": "1", "shredDepth": "5", "shredLength": "16000"}
      • Step 26: Unlabelled step:

        • step_state: new
      • Step 27: toolshed.g2.bx.psu.edu/repos/iuc/concoct/concoct/1.1.0+galaxy2:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is new

            Traceback:

            Job Parameters:

            • Job parameter | Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "6e9b82e40a4211f08a57002248aaba0a"
              advanced {"clusters": "400", "iterations": "500", "kmer_length": "4", "length_threshold": "1000", "no_cov_normalization": false, "read_length": "250", "seed": "1", "total_percentage_pca": "90"}
              chromInfo "/tmp/tmpunau54_g/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              output {"converge_out": false, "log": false, "no_total_coverage": false}
      • Step 28: toolshed.g2.bx.psu.edu/repos/mbernt/maxbin2/maxbin2/2.2.7+galaxy6:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is new

            Traceback:

            Job Parameters:

            • Job parameter | Parameter value
              __input_ext "expression.json"
              __workflow_invocation_uuid__ "6e9b82e40a4211f08a57002248aaba0a"
              adv {"max_iteration": "50", "min_contig_length": "1000", "prob_threshold": "0.5"}
              assembly {"__current_case__": 0, "inputs": {"__current_case__": 1, "abund": {"values": [{"id": 27, "src": "dce"}]}, "type": "abund"}, "type": "individual"}
              chromInfo "/tmp/tmpunau54_g/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              output {"log": true, "marker": true, "markers": true, "markerset": "107", "plotmarker": true}
      • Step 29: toolshed.g2.bx.psu.edu/repos/iuc/metabat2/metabat2/2.17+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is new

            Traceback:

            Job Parameters:

            • Job parameter | Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "6e9b82e40a4211f08a57002248aaba0a"
              advanced {"base_coverage_depth_cond": {"__current_case__": 1, "abdFile": {"values": [{"id": 27, "src": "dce"}]}, "base_coverage_depth": "yes", "cvExt": null}, "maxEdges": "200", "maxP": "95", "minCV": "1.0", "minCVSum": "1.0", "minContig": "1500", "minS": "60", "noAdd": false, "pTNF": "0", "seed": "0"}
              advanced abdFile
              chromInfo "/tmp/tmpunau54_g/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              inFile __identifier__
              out {"extra_outputs": ["lowDepth", "tooShort", "unbinned", "log"], "minClsSize": "200000", "onlyLabel": false, "saveCls": false}
      • Step 30: toolshed.g2.bx.psu.edu/repos/iuc/concoct_merge_cut_up_clustering/concoct_merge_cut_up_clustering/1.1.0+galaxy2:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is new

            Traceback:

            Job Parameters:

            • Job parameter | Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "6e9b82e40a4211f08a57002248aaba0a"
              chromInfo "/tmp/tmpunau54_g/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              cutup_clustering_result __identifier__
              dbkey "?"
      • Step 4: Environment for the built-in model (SemiBin):

        • step_state: scheduled
      • Step 31: Unlabelled step:

        • step_state: new
      • Step 32: Unlabelled step:

        • step_state: new
      • Step 33: toolshed.g2.bx.psu.edu/repos/iuc/concoct_extract_fasta_bins/concoct_extract_fasta_bins/1.1.0+galaxy2:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is new

            Traceback:

            Job Parameters:

            • Job parameter | Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "6e9b82e40a4211f08a57002248aaba0a"
              chromInfo "/tmp/tmpunau54_g/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              cluster_file __identifier__
              dbkey "?"
              fasta_file __identifier__
      • Step 34: Unlabelled step:

        • step_state: new
      • Step 35: Unlabelled step:

        • step_state: new
      • Step 36: Unlabelled step:

        • step_state: new
      • Step 37: Pool Bins from all samples:

        • step_state: new
      • Step 38: Unlabelled step:

        • step_state: new
      • Step 39: Unlabelled step:

        • step_state: new
      • Step 40: Unlabelled step:

        • step_state: new
      • Step 5: Trimmed grouped paired reads:

        • step_state: scheduled
      • Step 41: Unlabelled step:

        • step_state: new
      • Step 42: Unlabelled step:

        • step_state: new
      • Step 43: Unlabelled step:

        • step_state: new
      • Step 44: Unlabelled step:

        • step_state: new
      • Step 45: Unlabelled step:

        • step_state: new
      • Step 46: Unlabelled step:

        • step_state: new
      • Step 47: Unlabelled step:

        • step_state: new
      • Step 48: Unlabelled step:

        • step_state: new
      • Step 6: Trimmed sample paired reads:

        • step_state: scheduled
      • Step 7: Contamination weight (Binette):

        • step_state: scheduled
      • Step 8: Minimum MAG completeness percentage:

        • step_state: scheduled
      • Step 9: Maximum MAG contamination percentage:

        • step_state: scheduled
      • Step 10: Minimum MAG length:

        • step_state: scheduled
    • Other invocation details
      • error_message

        • Final state of invocation 6ddd0e18cc4f60be is [failed]. Failed to run workflow, at least one job is in [error] state.
      • history_id

        • 6ddd0e18cc4f60be
      • history_state

        • error
      • invocation_id

        • 6ddd0e18cc4f60be
      • invocation_state

        • failed
      • messages

        • [{'dependent_workflow_step_id': 22, 'hdca_id': '6338019c8f0e24d7', 'reason': 'collection_failed', 'workflow_step_id': 25}]
      • workflow_id

        • 6ddd0e18cc4f60be

Test Results (powered by Planemo)

Test Summary

Test State Count
Total 1
Passed 0
Error 1
Failure 0
Skipped 0
Errored Tests
  • ❌ Metagenome-Assembled-Genomes-(MAGs)-generation.ga_0

    Execution Problem:

    • Final state of invocation cdf6c67c360f70c6 is [failed]. Failed to run workflow, at least one job is in [error] state.
      

    Workflow invocation details

    • Invocation Messages

      • Invocation scheduling failed because step 26 requires a dataset collection created by step 23, but dataset collection entered a failed state.
    • Steps
      • Step 1: Choose Assembler:

        • step_state: scheduled
      • Step 2: Minimum length of contigs to output:

        • step_state: scheduled
      • Step 11: ANI threshold for dereplication:

        • step_state: scheduled
      • Step 12: Run Bakta on MAGs:

        • step_state: scheduled
      • Step 13: toolshed.g2.bx.psu.edu/repos/iuc/map_param_value/map_param_value/0.2.0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • cd ../; python _evaluate_expression_.py

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter | Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "eff2b0140a4311f08a576045bd7d7463"
              chromInfo "/tmp/tmp4ato2skh/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              input_param_type {"__current_case__": 0, "input_param": "MEGAHIT", "mappings": [{"__index__": 0, "from": "MEGAHIT", "to": "True"}], "type": "text"}
              output_param_type "boolean"
              unmapped {"__current_case__": 2, "default_value": "False", "on_unmapped": "default"}
      • Step 14: toolshed.g2.bx.psu.edu/repos/iuc/map_param_value/map_param_value/0.2.0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • cd ../; python _evaluate_expression_.py

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter | Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "eff2b0140a4311f08a576045bd7d7463"
              chromInfo "/tmp/tmp4ato2skh/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              input_param_type {"__current_case__": 0, "input_param": "MEGAHIT", "mappings": [{"__index__": 0, "from": "metaSPAdes", "to": "True"}], "type": "text"}
              output_param_type "boolean"
              unmapped {"__current_case__": 2, "default_value": "False", "on_unmapped": "default"}
      • Step 15: __UNZIP_COLLECTION__:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Traceback:

            Job Parameters:

            • Job parameter | Parameter value
              __workflow_invocation_uuid__ "eff2b0140a4311f08a576045bd7d7463"
              input {"values": [{"id": 1, "src": "dce"}]}
      • Step 16: toolshed.g2.bx.psu.edu/repos/iuc/megahit/megahit/1.2.9+galaxy2:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • if [[ -n "$GALAXY_MEMORY_MB" ]]; then MEMORY="-m $((GALAXY_MEMORY_MB * 1024))"; fi;  megahit --num-cpu-threads ${GALAXY_SLOTS:-4}  -1 '/tmp/tmp4ato2skh/files/5/2/3/dataset_523a5bb5-97a6-4c1d-8c5a-154e49bfe9aa.dat' -2 '/tmp/tmp4ato2skh/files/d/a/4/dataset_da42f7ca-f21d-486d-8dbf-76967966324b.dat' --min-count '2' --k-list '21,29,39,59,79,99,119,141'  --bubble-level '2' --merge-level '20,0.95' --prune-level '2' --prune-depth '2' --disconnect-ratio '0.1' --low-local-ratio '0.2' --cleaning-rounds '5'   --min-contig-len '200' $MEMORY

            Exit Code:

            • 0

            Standard Error:

            • 2025-03-26 13:12:53 - MEGAHIT v1.2.9
              2025-03-26 13:12:53 - Using megahit_core with POPCNT and BMI2 support
              2025-03-26 13:12:53 - Convert reads to binary library
              2025-03-26 13:12:53 - b'INFO  sequence/io/sequence_lib.cpp  :   75 - Lib 0 (/tmp/tmp4ato2skh/files/5/2/3/dataset_523a5bb5-97a6-4c1d-8c5a-154e49bfe9aa.dat,/tmp/tmp4ato2skh/files/d/a/4/dataset_da42f7ca-f21d-486d-8dbf-76967966324b.dat): pe, 18924 reads, 150 max length'
              2025-03-26 13:12:53 - b'INFO  utils/utils.h                 :  152 - Real: 0.0488\tuser: 0.0468\tsys: 0.0029\tmaxrss: 20796'
              2025-03-26 13:12:53 - Start assembly. Number of CPU threads 1 
              2025-03-26 13:12:53 - k list: 21,29,39,59,79,99,119,141 
              2025-03-26 13:12:53 - Memory used: 15090086707
              2025-03-26 13:12:53 - Extract solid (k+1)-mers for k = 21 
              2025-03-26 13:12:53 - Build graph for k = 21 
              2025-03-26 13:12:54 - Assemble contigs from SdBG for k = 21
              2025-03-26 13:12:54 - Local assembly for k = 21
              2025-03-26 13:12:55 - Extract iterative edges from k = 21 to 29 
              2025-03-26 13:12:55 - Build graph for k = 29 
              2025-03-26 13:12:55 - Assemble contigs from SdBG for k = 29
              2025-03-26 13:12:56 - Local assembly for k = 29
              2025-03-26 13:12:57 - Extract iterative edges from k = 29 to 39 
              2025-03-26 13:12:57 - Build graph for k = 39 
              2025-03-26 13:12:57 - Assemble contigs from SdBG for k = 39
              2025-03-26 13:12:57 - Local assembly for k = 39
              2025-03-26 13:12:58 - Extract iterative edges from k = 39 to 59 
              2025-03-26 13:12:58 - Build graph for k = 59 
              2025-03-26 13:12:58 - Assemble contigs from SdBG for k = 59
              2025-03-26 13:12:59 - Local assembly for k = 59
              2025-03-26 13:12:59 - Extract iterative edges from k = 59 to 79 
              2025-03-26 13:12:59 - Build graph for k = 79 
              2025-03-26 13:12:59 - Assemble contigs from SdBG for k = 79
              2025-03-26 13:13:00 - Local assembly for k = 79
              2025-03-26 13:13:00 - Extract iterative edges from k = 79 to 99 
              2025-03-26 13:13:00 - Build graph for k = 99 
              2025-03-26 13:13:00 - Assemble contigs from SdBG for k = 99
              2025-03-26 13:13:01 - Local assembly for k = 99
              2025-03-26 13:13:01 - Extract iterative edges from k = 99 to 119 
              2025-03-26 13:13:01 - Build graph for k = 119 
              2025-03-26 13:13:01 - Assemble contigs from SdBG for k = 119
              2025-03-26 13:13:01 - Local assembly for k = 119
              2025-03-26 13:13:02 - Extract iterative edges from k = 119 to 141 
              2025-03-26 13:13:02 - Build graph for k = 141 
              2025-03-26 13:13:02 - Assemble contigs from SdBG for k = 141
              2025-03-26 13:13:02 - Merging to output final contigs 
              2025-03-26 13:13:02 - 63 contigs, total 266124 bp, min 354 bp, max 11001 bp, avg 4224 bp, N50 4877 bp
              2025-03-26 13:13:02 - ALL DONE. Time elapsed: 9.585234 seconds 
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "fastqsanger.gz"
              __workflow_invocation_uuid__ "eff2b0140a4311f08a576045bd7d7463"
              advanced_section {"bubble_level": "2", "cleaning_rounds": "5", "disconnect_ratio": "0.1", "kmin1pass": false, "low_local_ratio": "0.2", "merge_level": "20,0.95", "nolocal": false, "nomercy": false, "prune_depth": "2", "prune_level": "2"}
              basic_section {"k_mer": {"__current_case__": 0, "k_list": "21,29,39,59,79,99,119,141", "k_mer_method": "klist_method"}, "min_count": "2"}
              chromInfo "/tmp/tmp4ato2skh/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              input_option {"__current_case__": 3, "batchmode": {"__current_case__": 0, "pair_input": {"values": [{"id": 1, "src": "dce"}]}, "processmode": "individual"}, "choice": "paired_collection"}
              output_section {"log_file": false, "min_contig_len": "200", "show_intermediate_contigs": false}
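The MEGAHIT command line above uses a common Galaxy wrapper pattern: the memory flag is only emitted when the job runner exports a budget, and the thread count falls back to a default. A minimal sketch of that pattern (the `GALAXY_MEMORY_MB=4096` / `GALAXY_SLOTS=2` values here are stand-ins for illustration; in a real job the Galaxy runner sets them):

```shell
# Stand-in values; normally exported by the Galaxy job runner.
GALAXY_MEMORY_MB=4096
GALAXY_SLOTS=2

# Only pass -m when a memory budget is known; megahit is unrestricted otherwise.
MEMORY=""
if [[ -n "$GALAXY_MEMORY_MB" ]]; then
    MEMORY="-m $((GALAXY_MEMORY_MB * 1024))"
fi

# ${GALAXY_SLOTS:-4} falls back to 4 threads when no slot count is exported.
echo "megahit --num-cpu-threads ${GALAXY_SLOTS:-4} $MEMORY"
```

With 4096 MB and 2 slots this yields `--num-cpu-threads 2 -m 4194304`, matching the shape of the invocation in the log.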
      • Step 17: toolshed.g2.bx.psu.edu/repos/nml/metaspades/metaspades/4.1.0+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is skipped

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "eff2b0140a4311f08a576045bd7d7463"
              additional_reads {"__current_case__": 1, "selector": "false"}
              arf {"nanopore": null, "pacbio": null}
              chromInfo "/tmp/tmp4ato2skh/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              kmer_cond {"__current_case__": 0, "kmer_sel": "auto"}
              library_number "true"
              mode_sel ["--iontorrent"]
              optional_output ["ag", "ags", "cn", "cs"]
              phred_offset "auto"
              singlePaired {"__current_case__": 1, "input": {"values": [{"id": 1, "src": "hdca"}]}, "orientation": "fr", "sPaired": "paired_collection", "type_paired": "pe"}
      • Step 18: toolshed.g2.bx.psu.edu/repos/iuc/pick_value/pick_value/0.2.0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • cd ../; python _evaluate_expression_.py

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "eff2b0140a4311f08a576045bd7d7463"
              chromInfo "/tmp/tmp4ato2skh/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              style_cond {"__current_case__": 0, "pick_style": "first", "type_cond": {"__current_case__": 4, "param_type": "data", "pick_from": [{"__index__": 0, "value": {"values": [{"id": 12, "src": "hda"}]}}, {"__index__": 1, "value": {"values": [{"id": 9, "src": "dce"}]}}]}}
      • Step 19: toolshed.g2.bx.psu.edu/repos/iuc/quast/quast/5.3.0+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is running

            Command Line:

            • echo 50contig_reads &&   ln -s '/tmp/tmp4ato2skh/files/5/2/3/dataset_523a5bb5-97a6-4c1d-8c5a-154e49bfe9aa.dat' 'pe1-50contig_reads.fastqsanger.gz' && ln -s '/tmp/tmp4ato2skh/files/d/a/4/dataset_da42f7ca-f21d-486d-8dbf-76967966324b.dat' 'pe2-50contig_reads.fastqsanger.gz' &&  metaquast  --pe1 'pe1-50contig_reads.fastqsanger.gz' --pe2 'pe2-50contig_reads.fastqsanger.gz' --labels '50contig_reads' -o 'outputdir'  --max-ref-num 0  --min-identity 90.0 --min-contig 500        --min-alignment 65 --ambiguity-usage 'one' --ambiguity-score 0.99   --local-mis-size 200   --contig-thresholds '0,1000'  --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500   --x-for-Nx 90  '/tmp/tmp4ato2skh/files/6/f/3/dataset_6f3f0843-9fa4-4df8-882c-b0cd443710de.dat' --threads ${GALAXY_SLOTS:-1}  && if [[ -f "outputdir/report.tsv" ]]; then mkdir -p "outputdir/combined_reference/" && cp "outputdir/report.tsv" "outputdir/combined_reference/report.tsv"; fi && if [[ -f "outputdir/report.html" ]]; then mkdir -p "outputdir/combined_reference/" && cp outputdir/*.html "outputdir/combined_reference/"; fi && mkdir -p '/tmp/tmp4ato2skh/job_working_directory/000/11/outputs/dataset_bc44384c-4af8-45ca-839a-75e8a9426d98_files' && cp outputdir/combined_reference/*.html '/tmp/tmp4ato2skh/job_working_directory/000/11/outputs/dataset_bc44384c-4af8-45ca-839a-75e8a9426d98_files' && if [[ -d "outputdir/icarus_viewers" ]]; then cp -R outputdir/icarus_viewers 'outputdir/combined_reference/'; fi && if [[ -d "outputdir/combined_reference/icarus_viewers" ]]; then cp -R outputdir/combined_reference/icarus_viewers '/tmp/tmp4ato2skh/job_working_directory/000/11/outputs/dataset_bc44384c-4af8-45ca-839a-75e8a9426d98_files'; fi && if [[ -d "outputdir/krona_charts/" ]]; then mkdir -p 'None' && cp outputdir/krona_charts/*.html 'None'; fi

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "eff2b0140a4311f08a576045bd7d7463"
              advanced {"contig_thresholds": "0,1000", "extensive_mis_size": "1000", "fragmented_max_indent": null, "report_all_metrics": false, "scaffold_gap_max_size": "1000", "skip_unaligned_mis_contigs": true, "strict_NA": false, "unaligned_part_size": "500", "x_for_Nx": "90"}
              alignments {"ambiguity_score": "0.99", "ambiguity_usage": "one", "fragmented": false, "local_mis_size": "200", "min_alignment": "65", "upper_bound_assembly": false, "upper_bound_min_con": null, "use_all_alignments": false}
              assembly {"__current_case__": 1, "min_identity": "90.0", "ref": {"__current_case__": 0, "max_ref_num": "0", "origin": "silva"}, "reuse_combined_alignments": false, "type": "metagenome"}
              chromInfo "/tmp/tmp4ato2skh/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              genes {"conserved_genes_finding": false, "gene_finding": {"__current_case__": 0, "tool": "none"}, "rna_finding": false}
              large false
              min_contig "500"
              mode {"__current_case__": 0, "in": {"__current_case__": 1, "custom": "false", "inputs": {"values": [{"id": 10, "src": "dce"}]}}, "mode": "individual", "reads": {"__current_case__": 2, "input_1": {"values": [{"id": 7, "src": "dce"}]}, "input_2": {"values": [{"id": 8, "src": "dce"}]}, "reads_option": "paired"}}
              output_files ["html", "pdf", "tabular", "log", "summary", "krona"]
              split_scaffolds false
      • Step 20: toolshed.g2.bx.psu.edu/repos/devteam/bowtie2/bowtie2/2.5.3+galaxy1:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • set -o | grep -q pipefail && set -o pipefail; bowtie2-build --threads ${GALAXY_SLOTS:-4} '/tmp/tmp4ato2skh/files/6/f/3/dataset_6f3f0843-9fa4-4df8-882c-b0cd443710de.dat' genome && ln -s -f '/tmp/tmp4ato2skh/files/6/f/3/dataset_6f3f0843-9fa4-4df8-882c-b0cd443710de.dat' genome.fa &&   ln -f -s '/tmp/tmp4ato2skh/files/5/2/3/dataset_523a5bb5-97a6-4c1d-8c5a-154e49bfe9aa.dat' input_f.fastq.gz &&  ln -f -s '/tmp/tmp4ato2skh/files/d/a/4/dataset_da42f7ca-f21d-486d-8dbf-76967966324b.dat' input_r.fastq.gz &&    THREADS=${GALAXY_SLOTS:-4} && if [ "$THREADS" -gt 1 ]; then (( THREADS-- )); fi &&   bowtie2  -p "$THREADS"  -x 'genome'   -1 'input_f.fastq.gz' -2 'input_r.fastq.gz'                2> >(tee '/tmp/tmp4ato2skh/job_working_directory/000/12/outputs/dataset_4c6c40d5-98d3-447a-bcba-d4723de04b18.dat' >&2)  | samtools sort -l 0 -T "${TMPDIR:-.}" -O bam | samtools view --no-PG -O bam -@ ${GALAXY_SLOTS:-1} -o '/tmp/tmp4ato2skh/job_working_directory/000/12/outputs/dataset_f664fa69-585e-47cb-ae50-93abcd51616d.dat'

            Exit Code:

            • 0

            Standard Error:

            • Building a SMALL index
              Renaming genome.3.bt2.tmp to genome.3.bt2
              Renaming genome.4.bt2.tmp to genome.4.bt2
              Renaming genome.1.bt2.tmp to genome.1.bt2
              Renaming genome.2.bt2.tmp to genome.2.bt2
              Renaming genome.rev.1.bt2.tmp to genome.rev.1.bt2
              Renaming genome.rev.2.bt2.tmp to genome.rev.2.bt2
              9462 reads; of these:
                9462 (100.00%) were paired; of these:
                  90 (0.95%) aligned concordantly 0 times
                  9300 (98.29%) aligned concordantly exactly 1 time
                  72 (0.76%) aligned concordantly >1 times
                  ----
                  90 pairs aligned concordantly 0 times; of these:
                    8 (8.89%) aligned discordantly 1 time
                  ----
                  82 pairs aligned 0 times concordantly or discordantly; of these:
                    164 mates make up the pairs; of these:
                      93 (56.71%) aligned 0 times
                      70 (42.68%) aligned exactly 1 time
                      1 (0.61%) aligned >1 times
              99.51% overall alignment rate
              

            Standard Output:

            • Settings:
                Output files: "genome.*.bt2"
                Line rate: 6 (line is 64 bytes)
                Lines per side: 1 (side is 64 bytes)
                Offset rate: 4 (one in 16)
                FTable chars: 10
                Strings: unpacked
                Max bucket size: default
                Max bucket size, sqrt multiplier: default
                Max bucket size, len divisor: 4
                Difference-cover sample period: 1024
                Endianness: little
                Actual local endianness: little
                Sanity checking: disabled
                Assertions: disabled
                Random seed: 0
                Sizeofs: void*:8, int:4, long:8, size_t:8
              Input files DNA, FASTA:
                /tmp/tmp4ato2skh/files/6/f/3/dataset_6f3f0843-9fa4-4df8-882c-b0cd443710de.dat
              Reading reference sizes
                Time reading reference sizes: 00:00:00
              Calculating joined length
              Writing header
              Reserving space for joined string
              Joining reference sequences
                Time to join reference sequences: 00:00:00
              bmax according to bmaxDivN setting: 66531
              Using parameters --bmax 49899 --dcv 1024
                Doing ahead-of-time memory usage test
                Passed!  Constructing with these parameters: --bmax 49899 --dcv 1024
              Constructing suffix-array element generator
              Building DifferenceCoverSample
                Building sPrime
                Building sPrimeOrder
                V-Sorting samples
                V-Sorting samples time: 00:00:00
                Allocating rank array
                Ranking v-sort output
                Ranking v-sort output time: 00:00:00
                Invoking Larsson-Sadakane on ranks
                Invoking Larsson-Sadakane on ranks time: 00:00:00
                Sanity-checking and returning
              Building samples
              Reserving space for 12 sample suffixes
              Generating random suffixes
              QSorting 12 sample offsets, eliminating duplicates
              QSorting sample offsets, eliminating duplicates time: 00:00:00
              Multikey QSorting 12 samples
                (Using difference cover)
                Multikey QSorting samples time: 00:00:00
              Calculating bucket sizes
              Splitting and merging
                Splitting and merging time: 00:00:00
              Split 1, merged 6; iterating...
              Splitting and merging
                Splitting and merging time: 00:00:00
              Avg bucket size: 38016.9 (target: 49898)
              Converting suffix-array elements to index image
              Allocating ftab, absorbFtab
              Entering Ebwt loop
              Getting block 1 of 7
                Reserving size (49899) for bucket 1
                Calculating Z arrays for bucket 1
                Entering block accumulator loop for bucket 1:
                bucket 1: 10%
                bucket 1: 20%
                bucket 1: 30%
                bucket 1: 40%
                bucket 1: 50%
                bucket 1: 60%
                bucket 1: 70%
                bucket 1: 80%
                bucket 1: 90%
                bucket 1: 100%
                Sorting block of length 18323 for bucket 1
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 18324 for bucket 1
              Getting block 2 of 7
                Reserving size (49899) for bucket 2
                Calculating Z arrays for bucket 2
                Entering block accumulator loop for bucket 2:
                bucket 2: 10%
                bucket 2: 20%
                bucket 2: 30%
                bucket 2: 40%
                bucket 2: 50%
                bucket 2: 60%
                bucket 2: 70%
                bucket 2: 80%
                bucket 2: 90%
                bucket 2: 100%
                Sorting block of length 49606 for bucket 2
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 49607 for bucket 2
              Getting block 3 of 7
                Reserving size (49899) for bucket 3
                Calculating Z arrays for bucket 3
                Entering block accumulator loop for bucket 3:
                bucket 3: 10%
                bucket 3: 20%
                bucket 3: 30%
                bucket 3: 40%
                bucket 3: 50%
                bucket 3: 60%
                bucket 3: 70%
                bucket 3: 80%
                bucket 3: 90%
                bucket 3: 100%
                Sorting block of length 45151 for bucket 3
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 45152 for bucket 3
              Getting block 4 of 7
                Reserving size (49899) for bucket 4
                Calculating Z arrays for bucket 4
                Entering block accumulator loop for bucket 4:
                bucket 4: 10%
                bucket 4: 20%
                bucket 4: 30%
                bucket 4: 40%
                bucket 4: 50%
                bucket 4: 60%
                bucket 4: 70%
                bucket 4: 80%
                bucket 4: 90%
                bucket 4: 100%
                Sorting block of length 49787 for bucket 4
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 49788 for bucket 4
              Getting block 5 of 7
                Reserving size (49899) for bucket 5
                Calculating Z arrays for bucket 5
                Entering block accumulator loop for bucket 5:
                bucket 5: 10%
                bucket 5: 20%
                bucket 5: 30%
                bucket 5: 40%
                bucket 5: 50%
                bucket 5: 60%
                bucket 5: 70%
                bucket 5: 80%
                bucket 5: 90%
                bucket 5: 100%
                Sorting block of length 28638 for bucket 5
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 28639 for bucket 5
              Getting block 6 of 7
                Reserving size (49899) for bucket 6
                Calculating Z arrays for bucket 6
                Entering block accumulator loop for bucket 6:
                bucket 6: 10%
                bucket 6: 20%
                bucket 6: 30%
                bucket 6: 40%
                bucket 6: 50%
                bucket 6: 60%
                bucket 6: 70%
                bucket 6: 80%
                bucket 6: 90%
                bucket 6: 100%
                Sorting block of length 43194 for bucket 6
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 43195 for bucket 6
              Getting block 7 of 7
                Reserving size (49899) for bucket 7
                Calculating Z arrays for bucket 7
                Entering block accumulator loop for bucket 7:
                bucket 7: 10%
                bucket 7: 20%
                bucket 7: 30%
                bucket 7: 40%
                bucket 7: 50%
                bucket 7: 60%
                bucket 7: 70%
                bucket 7: 80%
                bucket 7: 90%
                bucket 7: 100%
                Sorting block of length 31419 for bucket 7
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 31420 for bucket 7
              Exited Ebwt loop
              fchr[A]: 0
              fchr[C]: 84325
              fchr[G]: 133305
              fchr[T]: 181307
              fchr[$]: 266124
              Exiting Ebwt::buildToDisk()
              Returning from initFromVector
              Wrote 4286544 bytes to primary EBWT file: genome.1.bt2.tmp
              Wrote 66536 bytes to secondary EBWT file: genome.2.bt2.tmp
              Re-opening _in1 and _in2 as input streams
              Returning from Ebwt constructor
              Headers:
                  len: 266124
                  bwtLen: 266125
                  sz: 66531
                  bwtSz: 66532
                  lineRate: 6
                  offRate: 4
                  offMask: 0xfffffff0
                  ftabChars: 10
                  eftabLen: 20
                  eftabSz: 80
                  ftabLen: 1048577
                  ftabSz: 4194308
                  offsLen: 16633
                  offsSz: 66532
                  lineSz: 64
                  sideSz: 64
                  sideBwtSz: 48
                  sideBwtLen: 192
                  numSides: 1387
                  numLines: 1387
                  ebwtTotLen: 88768
                  ebwtTotSz: 88768
                  color: 0
                  reverse: 0
              Total time for call to driver() for forward index: 00:00:00
              Reading reference sizes
                Time reading reference sizes: 00:00:00
              Calculating joined length
              Writing header
              Reserving space for joined string
              Joining reference sequences
                Time to join reference sequences: 00:00:00
                Time to reverse reference sequence: 00:00:00
              bmax according to bmaxDivN setting: 66531
              Using parameters --bmax 49899 --dcv 1024
                Doing ahead-of-time memory usage test
                Passed!  Constructing with these parameters: --bmax 49899 --dcv 1024
              Constructing suffix-array element generator
              Building DifferenceCoverSample
                Building sPrime
                Building sPrimeOrder
                V-Sorting samples
                V-Sorting samples time: 00:00:00
                Allocating rank array
                Ranking v-sort output
                Ranking v-sort output time: 00:00:00
                Invoking Larsson-Sadakane on ranks
                Invoking Larsson-Sadakane on ranks time: 00:00:00
                Sanity-checking and returning
              Building samples
              Reserving space for 12 sample suffixes
              Generating random suffixes
              QSorting 12 sample offsets, eliminating duplicates
              QSorting sample offsets, eliminating duplicates time: 00:00:00
              Multikey QSorting 12 samples
                (Using difference cover)
                Multikey QSorting samples time: 00:00:00
              Calculating bucket sizes
              Splitting and merging
                Splitting and merging time: 00:00:00
              Split 1, merged 7; iterating...
              Splitting and merging
                Splitting and merging time: 00:00:00
              Avg bucket size: 44353.2 (target: 49898)
              Converting suffix-array elements to index image
              Allocating ftab, absorbFtab
              Entering Ebwt loop
              Getting block 1 of 6
                Reserving size (49899) for bucket 1
                Calculating Z arrays for bucket 1
                Entering block accumulator loop for bucket 1:
                bucket 1: 10%
                bucket 1: 20%
                bucket 1: 30%
                bucket 1: 40%
                bucket 1: 50%
                bucket 1: 60%
                bucket 1: 70%
                bucket 1: 80%
                bucket 1: 90%
                bucket 1: 100%
                Sorting block of length 47687 for bucket 1
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 47688 for bucket 1
              Getting block 2 of 6
                Reserving size (49899) for bucket 2
                Calculating Z arrays for bucket 2
                Entering block accumulator loop for bucket 2:
                bucket 2: 10%
                bucket 2: 20%
                bucket 2: 30%
                bucket 2: 40%
                bucket 2: 50%
                bucket 2: 60%
                bucket 2: 70%
                bucket 2: 80%
                bucket 2: 90%
                bucket 2: 100%
                Sorting block of length 36636 for bucket 2
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 36637 for bucket 2
              Getting block 3 of 6
                Reserving size (49899) for bucket 3
                Calculating Z arrays for bucket 3
                Entering block accumulator loop for bucket 3:
                bucket 3: 10%
                bucket 3: 20%
                bucket 3: 30%
                bucket 3: 40%
                bucket 3: 50%
                bucket 3: 60%
                bucket 3: 70%
                bucket 3: 80%
                bucket 3: 90%
                bucket 3: 100%
                Sorting block of length 49027 for bucket 3
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 49028 for bucket 3
              Getting block 4 of 6
                Reserving size (49899) for bucket 4
                Calculating Z arrays for bucket 4
                Entering block accumulator loop for bucket 4:
                bucket 4: 10%
                bucket 4: 20%
                bucket 4: 30%
                bucket 4: 40%
                bucket 4: 50%
                bucket 4: 60%
                bucket 4: 70%
                bucket 4: 80%
                bucket 4: 90%
                bucket 4: 100%
                Sorting block of length 37449 for bucket 4
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 37450 for bucket 4
              Getting block 5 of 6
                Reserving size (49899) for bucket 5
                Calculating Z arrays for bucket 5
                Entering block accumulator loop for bucket 5:
                bucket 5: 10%
                bucket 5: 20%
                bucket 5: 30%
                bucket 5: 40%
                bucket 5: 50%
                bucket 5: 60%
                bucket 5: 70%
                bucket 5: 80%
                bucket 5: 90%
                bucket 5: 100%
                Sorting block of length 47142 for bucket 5
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 47143 for bucket 5
              Getting block 6 of 6
                Reserving size (49899) for bucket 6
                Calculating Z arrays for bucket 6
                Entering block accumulator loop for bucket 6:
                bucket 6: 10%
                bucket 6: 20%
                bucket 6: 30%
                bucket 6: 40%
                bucket 6: 50%
                bucket 6: 60%
                bucket 6: 70%
                bucket 6: 80%
                bucket 6: 90%
                bucket 6: 100%
                Sorting block of length 48178 for bucket 6
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 48179 for bucket 6
              Exited Ebwt loop
              fchr[A]: 0
              fchr[C]: 84325
              fchr[G]: 133305
              fchr[T]: 181307
              fchr[$]: 266124
              Exiting Ebwt::buildToDisk()
              Returning from initFromVector
              Wrote 4286544 bytes to primary EBWT file: genome.rev.1.bt2.tmp
              Wrote 66536 bytes to secondary EBWT file: genome.rev.2.bt2.tmp
              Re-opening _in1 and _in2 as input streams
              Returning from Ebwt constructor
              Headers:
                  len: 266124
                  bwtLen: 266125
                  sz: 66531
                  bwtSz: 66532
                  lineRate: 6
                  offRate: 4
                  offMask: 0xfffffff0
                  ftabChars: 10
                  eftabLen: 20
                  eftabSz: 80
                  ftabLen: 1048577
                  ftabSz: 4194308
                  offsLen: 16633
                  offsSz: 66532
                  lineSz: 64
                  sideSz: 64
                  sideBwtSz: 48
                  sideBwtLen: 192
                  numSides: 1387
                  numLines: 1387
                  ebwtTotLen: 88768
                  ebwtTotSz: 88768
                  color: 0
                  reverse: 1
              Total time for backward call to driver() for mirror index: 00:00:00
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "eff2b0140a4311f08a576045bd7d7463"
              analysis_type {"__current_case__": 0, "analysis_type_selector": "simple", "presets": "no_presets"}
              chromInfo "/tmp/tmp4ato2skh/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              library {"__current_case__": 2, "aligned_file": false, "input_1": {"values": [{"id": 1, "src": "dce"}]}, "paired_options": {"__current_case__": 1, "paired_options_selector": "no"}, "type": "paired_collection", "unaligned_file": false}
              own_file __identifier__
              reference_genome {"__current_case__": 1, "own_file": {"values": [{"id": 10, "src": "dce"}]}, "source": "history"}
              rg {"__current_case__": 3, "rg_selector": "do_not_set"}
              sam_options {"__current_case__": 1, "sam_options_selector": "no"}
              save_mapping_stats true
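The Bowtie2 command line above pipes the aligner's output straight into `samtools sort` and `samtools view`, so the wrapper holds one slot back from the mapper to leave a core for the downstream consumers. A minimal sketch of that thread-reservation step (the `GALAXY_SLOTS=4` value is a stand-in; the runner sets it in practice):

```shell
GALAXY_SLOTS=4   # stand-in; set by the Galaxy job runner

THREADS=${GALAXY_SLOTS:-4}
# Reserve one core for the samtools pipeline, but never drop below one mapping thread.
if [ "$THREADS" -gt 1 ]; then
    (( THREADS-- ))
fi

echo "bowtie2 -p $THREADS ... | samtools sort ... | samtools view ..."
```

With 4 slots, bowtie2 runs with `-p 3`; with a single slot it keeps that one thread rather than decrementing to zero.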
      • Step 3: Read length (CONCOCT):

        • step_state: scheduled
      • Step 21: toolshed.g2.bx.psu.edu/repos/iuc/concoct_cut_up_fasta/concoct_cut_up_fasta/1.1.0+galaxy2:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • ln -s '/tmp/tmp4ato2skh/files/6/f/3/dataset_6f3f0843-9fa4-4df8-882c-b0cd443710de.dat' 'input.fa' &&  cut_up_fasta.py 'input.fa' --chunk_size 10000 --overlap_size 0 --merge_last --bedfile '/tmp/tmp4ato2skh/job_working_directory/000/13/outputs/dataset_be63c849-f2ff-4cf9-ba6d-898e0e102b80.dat' > '/tmp/tmp4ato2skh/job_working_directory/000/13/outputs/dataset_30269942-41c7-4cfe-8c8d-a963165d0e7b.dat'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "eff2b0140a4311f08a576045bd7d7463"
              bedfile true
              chromInfo "/tmp/tmp4ato2skh/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              chunk_size "10000"
              dbkey "?"
              input_fasta __identifier__
              merge_last true
              overlap_size "0"
      • Step 22: toolshed.g2.bx.psu.edu/repos/devteam/samtools_sort/samtools_sort/2.0.5:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • addthreads=${GALAXY_SLOTS:-1} && (( addthreads-- )) &&   addmemory=${GALAXY_MEMORY_MB_PER_SLOT:-768} && ((addmemory=addmemory*75/100)) &&  samtools sort -@ $addthreads -m $addmemory"M"   -O bam -T "${TMPDIR:-.}" '/tmp/tmp4ato2skh/files/f/6/6/dataset_f664fa69-585e-47cb-ae50-93abcd51616d.dat' > '/tmp/tmp4ato2skh/job_working_directory/000/14/outputs/dataset_dd853b06-f0bc-45ba-8065-a11810c2a786.dat'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "eff2b0140a4311f08a576045bd7d7463"
              chromInfo "/tmp/tmp4ato2skh/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              input1 __identifier__
              minhash false
              prim_key_cond {"__current_case__": 0, "prim_key_select": ""}
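The `samtools sort` command line above contains two pieces of resource arithmetic worth noting: `-@` counts *additional* threads, so the slot count is decremented by one, and only 75% of the per-slot memory budget is handed to `-m`, leaving headroom for samtools' own overhead. A minimal sketch (the `GALAXY_SLOTS=4` / `GALAXY_MEMORY_MB_PER_SLOT=1024` values are stand-ins; the runner sets them in practice):

```shell
# Stand-in values; normally exported by the Galaxy job runner.
GALAXY_SLOTS=4
GALAXY_MEMORY_MB_PER_SLOT=1024

# -@ is the number of *additional* threads, hence the decrement.
addthreads=${GALAXY_SLOTS:-1} && (( addthreads-- ))
# Pass only 75% of the per-slot budget to -m to leave headroom.
addmemory=${GALAXY_MEMORY_MB_PER_SLOT:-768} && ((addmemory=addmemory*75/100))

echo "samtools sort -@ $addthreads -m ${addmemory}M -O bam ..."
```

With 4 slots and 1024 MB per slot this produces `-@ 3 -m 768M`, matching the structure of the invocation in the log.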
      • Step 23: toolshed.g2.bx.psu.edu/repos/iuc/semibin/semibin/2.0.2+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is error

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "eff2b0140a4311f08a576045bd7d7463"
              annot {"ml_threshold": null}
              bin {"max_edges": "200", "max_node": "1.0", "minfasta_kbs": "200"}
              chromInfo "/tmp/tmp4ato2skh/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              extra_output ["data", "coverage"]
              min_len {"__current_case__": 0, "method": "automatic"}
              mode {"__current_case__": 0, "environment": "global", "input_bam": {"values": [{"id": 20, "src": "dce"}]}, "input_fasta": {"values": [{"id": 10, "src": "dce"}]}, "ref": {"__current_case__": 0, "cached_db": "17102022", "select": "cached"}, "select": "single"}
              orf_finder "fast-naive"
              random_seed "0"
              training {"batch_size": "2048", "epoches": "20"}
      • Step 24: toolshed.g2.bx.psu.edu/repos/iuc/concoct_coverage_table/concoct_coverage_table/1.1.0+galaxy2:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is running

            Command Line:

            • mkdir 'mapping' && ln -s '/tmp/tmp4ato2skh/files/d/d/8/dataset_dd853b06-f0bc-45ba-8065-a11810c2a786.dat' 'mapping/_tmp_tmp4ato2skh_files_d_d_8_dataset_dd853b06-f0bc-45ba-8065-a11810c2a786.dat.sorted.bam' && samtools index 'mapping/_tmp_tmp4ato2skh_files_d_d_8_dataset_dd853b06-f0bc-45ba-8065-a11810c2a786.dat.sorted.bam' 'mapping/_tmp_tmp4ato2skh_files_d_d_8_dataset_dd853b06-f0bc-45ba-8065-a11810c2a786.dat.bam.bai' && mv 'mapping/_tmp_tmp4ato2skh_files_d_d_8_dataset_dd853b06-f0bc-45ba-8065-a11810c2a786.dat.sorted.bam' 'mapping/_tmp_tmp4ato2skh_files_d_d_8_dataset_dd853b06-f0bc-45ba-8065-a11810c2a786.dat.bam' && concoct_coverage_table.py '/tmp/tmp4ato2skh/files/b/e/6/dataset_be63c849-f2ff-4cf9-ba6d-898e0e102b80.dat' mapping/*.bam > '/tmp/tmp4ato2skh/job_working_directory/000/16/outputs/dataset_fd75bb04-5ea5-4a61-9812-be7370042a4c.dat'

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "eff2b0140a4311f08a576045bd7d7463"
              bedfile __identifier__
              chromInfo "/tmp/tmp4ato2skh/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              mode {"__current_case__": 0, "bamfile": {"values": [{"id": 20, "src": "dce"}]}, "type": "individual"}
              mode bamfile
      • Step 25: toolshed.g2.bx.psu.edu/repos/iuc/metabat2_jgi_summarize_bam_contig_depths/metabat2_jgi_summarize_bam_contig_depths/2.17+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is queued

            Command Line:

            • jgi_summarize_bam_contig_depths --outputDepth '/tmp/tmp4ato2skh/job_working_directory/000/17/outputs/dataset_7054bb38-f9a3-4358-b793-46c0eff08a53.dat' --percentIdentity 97   --minMapQual 0 --weightMapQual 0.0  --maxEdgeBases 75 --shredLength 16000 --shredDepth 5 --minContigLength 1 --minContigDepth 0.0 '/tmp/tmp4ato2skh/files/d/d/8/dataset_dd853b06-f0bc-45ba-8065-a11810c2a786.dat'

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "eff2b0140a4311f08a576045bd7d7463"
              advanced {"includeEdgeBases": false, "maxEdgeBases": "75", "minMapQual": "0", "noIntraDepthVariance": false, "output_paired_contigs": false, "percentIdentity": "97", "showDepth": false, "weightMapQual": "0.0"}
              bam_indiv_input __identifier__
              chromInfo "/tmp/tmp4ato2skh/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              mode {"__current_case__": 0, "bam_indiv_input": {"values": [{"id": 20, "src": "dce"}]}, "type": "individual", "use_reference_cond": {"__current_case__": 0, "use_reference": "no"}}
              shredding {"minContigDepth": "0.0", "minContigLength": "1", "shredDepth": "5", "shredLength": "16000"}
      • Step 26: Unlabelled step:

        • step_state: new
      • Step 27: toolshed.g2.bx.psu.edu/repos/iuc/concoct/concoct/1.1.0+galaxy2:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is new

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "eff2b0140a4311f08a576045bd7d7463"
              advanced {"clusters": "400", "iterations": "500", "kmer_length": "4", "length_threshold": "1000", "no_cov_normalization": false, "read_length": "250", "seed": "1", "total_percentage_pca": "90"}
              chromInfo "/tmp/tmp4ato2skh/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              output {"converge_out": false, "log": false, "no_total_coverage": false}
      • Step 28: toolshed.g2.bx.psu.edu/repos/mbernt/maxbin2/maxbin2/2.2.7+galaxy6:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is new

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "expression.json"
              __workflow_invocation_uuid__ "eff2b0140a4311f08a576045bd7d7463"
              adv {"max_iteration": "50", "min_contig_length": "1000", "prob_threshold": "0.5"}
              assembly {"__current_case__": 0, "inputs": {"__current_case__": 1, "abund": {"values": [{"id": 27, "src": "dce"}]}, "type": "abund"}, "type": "individual"}
              chromInfo "/tmp/tmp4ato2skh/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              output {"log": true, "marker": true, "markers": true, "markerset": "107", "plotmarker": true}
      • Step 29: toolshed.g2.bx.psu.edu/repos/iuc/metabat2/metabat2/2.17+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is new

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "eff2b0140a4311f08a576045bd7d7463"
              advanced {"base_coverage_depth_cond": {"__current_case__": 1, "abdFile": {"values": [{"id": 27, "src": "dce"}]}, "base_coverage_depth": "yes", "cvExt": null}, "maxEdges": "200", "maxP": "95", "minCV": "1.0", "minCVSum": "1.0", "minContig": "1500", "minS": "60", "noAdd": false, "pTNF": "0", "seed": "0"}
              advanced abdFile
              chromInfo "/tmp/tmp4ato2skh/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              inFile __identifier__
              out {"extra_outputs": ["lowDepth", "tooShort", "unbinned", "log"], "minClsSize": "200000", "onlyLabel": false, "saveCls": false}
      • Step 30: toolshed.g2.bx.psu.edu/repos/iuc/concoct_merge_cut_up_clustering/concoct_merge_cut_up_clustering/1.1.0+galaxy2:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is new

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "eff2b0140a4311f08a576045bd7d7463"
              chromInfo "/tmp/tmp4ato2skh/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              cutup_clustering_result __identifier__
              dbkey "?"
      • Step 4: Environment for the built-in model (SemiBin):

        • step_state: scheduled
      • Step 31: Unlabelled step:

        • step_state: new
      • Step 32: Unlabelled step:

        • step_state: new
      • Step 33: toolshed.g2.bx.psu.edu/repos/iuc/concoct_extract_fasta_bins/concoct_extract_fasta_bins/1.1.0+galaxy2:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is new

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "eff2b0140a4311f08a576045bd7d7463"
              chromInfo "/tmp/tmp4ato2skh/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              cluster_file __identifier__
              dbkey "?"
              fasta_file __identifier__
      • Step 34: Unlabelled step:

        • step_state: new
      • Step 35: Unlabelled step:

        • step_state: new
      • Step 36: Unlabelled step:

        • step_state: new
      • Step 37: Pool Bins from all samples:

        • step_state: new
      • Step 38: Unlabelled step:

        • step_state: new
      • Step 39: Unlabelled step:

        • step_state: new
      • Step 40: Unlabelled step:

        • step_state: new
      • Step 5: Trimmed grouped paired reads:

        • step_state: scheduled
      • Step 41: Unlabelled step:

        • step_state: new
      • Step 42: Unlabelled step:

        • step_state: new
      • Step 43: Unlabelled step:

        • step_state: new
      • Step 44: Unlabelled step:

        • step_state: new
      • Step 45: Unlabelled step:

        • step_state: new
      • Step 46: Unlabelled step:

        • step_state: new
      • Step 47: Unlabelled step:

        • step_state: new
      • Step 48: Unlabelled step:

        • step_state: new
      • Step 6: Trimmed sample paired reads:

        • step_state: scheduled
      • Step 7: Contamination weight (Binette):

        • step_state: scheduled
      • Step 8: Minimum MAG completeness percentage:

        • step_state: scheduled
      • Step 9: Maximum MAG contamination percentage:

        • step_state: scheduled
      • Step 10: Minimum MAG length:

        • step_state: scheduled
    • Other invocation details
      • error_message

        • Final state of invocation cdf6c67c360f70c6 is [failed]. Failed to run workflow, at least one job is in [error] state.
      • history_id

        • cdf6c67c360f70c6
      • history_state

        • error
      • invocation_id

        • cdf6c67c360f70c6
      • invocation_state

        • failed
      • messages

        • [{'dependent_workflow_step_id': 22, 'hdca_id': '5869b5b3219f70eb', 'reason': 'collection_failed', 'workflow_step_id': 25}]
      • workflow_id

        • cdf6c67c360f70c6


github-actions bot commented Apr 8, 2025

Test Results (powered by Planemo)

Test Summary

Test State Count
Total 0
Passed 0
Error 0
Failure 0
Skipped 0


github-actions bot commented Apr 8, 2025

Test Results (powered by Planemo)

Test Summary

Test State Count
Total 1
Passed 0
Error 1
Failure 0
Skipped 0
Errored Tests
  • ❌ MAGs-generation.ga_0

    Execution Problem:

    • Unexpected HTTP status code: 500: Internal Server Error
      


github-actions bot commented Apr 8, 2025

Test Results (powered by Planemo)

Test Summary

Test State Count
Total 1
Passed 0
Error 1
Failure 0
Skipped 0
Errored Tests
  • ❌ MAGs-generation.ga_0

    Execution Problem:

    • Final state of invocation 1ed2e242fb805563 is [failed]. Failed to run workflow, at least one job is in [error] state.
      

    Workflow invocation details

    • Invocation Messages

      • Invocation scheduling failed because step 55 requires a dataset collection created by step 48, but dataset collection entered a failed state.
    • Steps
      • Step 1: Choose Assembler:

        • step_state: scheduled
      • Step 2: Custom Assemblies:

        • step_state: scheduled
      • Step 11: Maximum MAG contamination percentage:

        • step_state: scheduled
      • Step 12: Minimum MAG length:

        • step_state: scheduled
      • Step 13: CheckM2 Database (dRep step):

        • step_state: scheduled
      • Step 14: ANI threshold for dereplication:

        • step_state: scheduled
      • Step 15: CheckM2 Database (Bin step):

        • step_state: scheduled
      • Step 16: GTDB-tk Database:

        • step_state: scheduled
      • Step 17: Bakta Database:

        • step_state: scheduled
      • Step 18: AMRFinderPlus Database for Bakta:

        • step_state: scheduled
      • Step 19: Run Bakta on MAGs:

        • step_state: scheduled
      • Step 20: toolshed.g2.bx.psu.edu/repos/iuc/map_param_value/map_param_value/0.2.0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • cd ../; python _evaluate_expression_.py

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              chromInfo "/tmp/tmpbo76svqj/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              input_param_type {"__current_case__": 0, "input_param": "MEGAHIT", "mappings": [{"__index__": 0, "from": "MEGAHIT", "to": "True"}], "type": "text"}
              output_param_type "boolean"
              unmapped {"__current_case__": 2, "default_value": "False", "on_unmapped": "default"}
      • Step 3: Minimum length of contigs to output:

        • step_state: scheduled
      • Step 21: toolshed.g2.bx.psu.edu/repos/iuc/map_param_value/map_param_value/0.2.0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • cd ../; python _evaluate_expression_.py

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              chromInfo "/tmp/tmpbo76svqj/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              input_param_type {"__current_case__": 0, "input_param": "MEGAHIT", "mappings": [{"__index__": 0, "from": "metaSPAdes", "to": "True"}], "type": "text"}
              output_param_type "boolean"
              unmapped {"__current_case__": 2, "default_value": "False", "on_unmapped": "default"}
      • Step 22: __UNZIP_COLLECTION__:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              input {"values": [{"id": 1, "src": "dce"}]}
      • Step 23: toolshed.g2.bx.psu.edu/repos/iuc/megahit/megahit/1.2.9+galaxy2:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • if [[ -n "$GALAXY_MEMORY_MB" ]]; then MEMORY="-m $((GALAXY_MEMORY_MB * 1024))"; fi;  megahit --num-cpu-threads ${GALAXY_SLOTS:-4}  -1 '/tmp/tmpbo76svqj/files/b/2/7/dataset_b27ae4d4-9ab0-4bc6-b3d2-325dc79aa2b6.dat' -2 '/tmp/tmpbo76svqj/files/1/3/f/dataset_13fc15e4-5959-4e47-a36d-075c5894ac88.dat' --min-count '2' --k-list '21,29,39,59,79,99,119,141'  --bubble-level '2' --merge-level '20,0.95' --prune-level '2' --prune-depth '2' --disconnect-ratio '0.1' --low-local-ratio '0.2' --cleaning-rounds '5'   --min-contig-len '200' $MEMORY

            Exit Code:

            • 0

            Standard Error:

            • 2025-04-08 14:05:03 - MEGAHIT v1.2.9
              2025-04-08 14:05:03 - Using megahit_core with POPCNT and BMI2 support
              2025-04-08 14:05:03 - Convert reads to binary library
              2025-04-08 14:05:03 - b'INFO  sequence/io/sequence_lib.cpp  :   75 - Lib 0 (/tmp/tmpbo76svqj/files/b/2/7/dataset_b27ae4d4-9ab0-4bc6-b3d2-325dc79aa2b6.dat,/tmp/tmpbo76svqj/files/1/3/f/dataset_13fc15e4-5959-4e47-a36d-075c5894ac88.dat): pe, 18924 reads, 150 max length'
              2025-04-08 14:05:03 - b'INFO  utils/utils.h                 :  152 - Real: 0.0460\tuser: 0.0401\tsys: 0.0070\tmaxrss: 20616'
              2025-04-08 14:05:03 - Start assembly. Number of CPU threads 1 
              2025-04-08 14:05:03 - k list: 21,29,39,59,79,99,119,141 
              2025-04-08 14:05:03 - Memory used: 15090086707
              2025-04-08 14:05:03 - Extract solid (k+1)-mers for k = 21 
              2025-04-08 14:05:04 - Build graph for k = 21 
              2025-04-08 14:05:04 - Assemble contigs from SdBG for k = 21
              2025-04-08 14:05:05 - Local assembly for k = 21
              2025-04-08 14:05:06 - Extract iterative edges from k = 21 to 29 
              2025-04-08 14:05:06 - Build graph for k = 29 
              2025-04-08 14:05:06 - Assemble contigs from SdBG for k = 29
              2025-04-08 14:05:06 - Local assembly for k = 29
              2025-04-08 14:05:07 - Extract iterative edges from k = 29 to 39 
              2025-04-08 14:05:07 - Build graph for k = 39 
              2025-04-08 14:05:07 - Assemble contigs from SdBG for k = 39
              2025-04-08 14:05:08 - Local assembly for k = 39
              2025-04-08 14:05:09 - Extract iterative edges from k = 39 to 59 
              2025-04-08 14:05:09 - Build graph for k = 59 
              2025-04-08 14:05:09 - Assemble contigs from SdBG for k = 59
              2025-04-08 14:05:09 - Local assembly for k = 59
              2025-04-08 14:05:10 - Extract iterative edges from k = 59 to 79 
              2025-04-08 14:05:10 - Build graph for k = 79 
              2025-04-08 14:05:10 - Assemble contigs from SdBG for k = 79
              2025-04-08 14:05:10 - Local assembly for k = 79
              2025-04-08 14:05:11 - Extract iterative edges from k = 79 to 99 
              2025-04-08 14:05:11 - Build graph for k = 99 
              2025-04-08 14:05:11 - Assemble contigs from SdBG for k = 99
              2025-04-08 14:05:11 - Local assembly for k = 99
              2025-04-08 14:05:11 - Extract iterative edges from k = 99 to 119 
              2025-04-08 14:05:11 - Build graph for k = 119 
              2025-04-08 14:05:12 - Assemble contigs from SdBG for k = 119
              2025-04-08 14:05:12 - Local assembly for k = 119
              2025-04-08 14:05:12 - Extract iterative edges from k = 119 to 141 
              2025-04-08 14:05:12 - Build graph for k = 141 
              2025-04-08 14:05:12 - Assemble contigs from SdBG for k = 141
              2025-04-08 14:05:13 - Merging to output final contigs 
              2025-04-08 14:05:13 - 63 contigs, total 266124 bp, min 354 bp, max 11001 bp, avg 4224 bp, N50 4877 bp
              2025-04-08 14:05:13 - ALL DONE. Time elapsed: 9.814435 seconds 
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "fastqsanger.gz"
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              advanced_section {"bubble_level": "2", "cleaning_rounds": "5", "disconnect_ratio": "0.1", "kmin1pass": false, "low_local_ratio": "0.2", "merge_level": "20,0.95", "nolocal": false, "nomercy": false, "prune_depth": "2", "prune_level": "2"}
              basic_section {"k_mer": {"__current_case__": 0, "k_list": "21,29,39,59,79,99,119,141", "k_mer_method": "klist_method"}, "min_count": "2"}
              chromInfo "/tmp/tmpbo76svqj/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              input_option {"__current_case__": 3, "batchmode": {"__current_case__": 0, "pair_input": {"values": [{"id": 1, "src": "dce"}]}, "processmode": "individual"}, "choice": "paired_collection"}
              output_section {"log_file": false, "min_contig_len": "200", "show_intermediate_contigs": false}
      • Step 24: toolshed.g2.bx.psu.edu/repos/nml/metaspades/metaspades/4.1.0+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is skipped

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              additional_reads {"__current_case__": 1, "selector": "false"}
              arf {"nanopore": null, "pacbio": null}
              chromInfo "/tmp/tmpbo76svqj/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              kmer_cond {"__current_case__": 0, "kmer_sel": "auto"}
              library_number "true"
              mode_sel None
              optional_output ["ag", "ags", "cn", "cs"]
              phred_offset "auto"
              singlePaired {"__current_case__": 1, "input": {"values": [{"id": 1, "src": "hdca"}]}, "orientation": "fr", "sPaired": "paired_collection", "type_paired": "pe"}
      • Step 25: toolshed.g2.bx.psu.edu/repos/iuc/pick_value/pick_value/0.2.0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • cd ../; python _evaluate_expression_.py

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              chromInfo "/tmp/tmpbo76svqj/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              style_cond {"__current_case__": 0, "pick_style": "first", "type_cond": {"__current_case__": 4, "param_type": "data", "pick_from": [{"__index__": 0, "value": {"values": [{"id": 12, "src": "hda"}]}}, {"__index__": 1, "value": {"values": [{"id": 9, "src": "dce"}]}}, {"__index__": 2, "value": null}]}}
      • Step 26: toolshed.g2.bx.psu.edu/repos/iuc/quast/quast/5.3.0+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • echo 50contig_reads &&   ln -s '/tmp/tmpbo76svqj/files/b/2/7/dataset_b27ae4d4-9ab0-4bc6-b3d2-325dc79aa2b6.dat' 'pe1-50contig_reads.fastqsanger.gz' && ln -s '/tmp/tmpbo76svqj/files/1/3/f/dataset_13fc15e4-5959-4e47-a36d-075c5894ac88.dat' 'pe2-50contig_reads.fastqsanger.gz' &&  metaquast  --pe1 'pe1-50contig_reads.fastqsanger.gz' --pe2 'pe2-50contig_reads.fastqsanger.gz' --labels '50contig_reads' -o 'outputdir'  --max-ref-num 0  --min-identity 90.0 --min-contig 500        --min-alignment 65 --ambiguity-usage 'one' --ambiguity-score 0.99   --local-mis-size 200   --contig-thresholds '0,1000'  --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500   --x-for-Nx 90  '/tmp/tmpbo76svqj/files/1/8/b/dataset_18b41470-af6e-41fc-8ca4-51e94f07d355.dat' --threads ${GALAXY_SLOTS:-1}  && if [[ -f "outputdir/report.tsv" ]]; then mkdir -p "outputdir/combined_reference/" && cp "outputdir/report.tsv" "outputdir/combined_reference/report.tsv"; fi && if [[ -f "outputdir/report.html" ]]; then mkdir -p "outputdir/combined_reference/" && cp outputdir/*.html "outputdir/combined_reference/"; fi && mkdir -p '/tmp/tmpbo76svqj/job_working_directory/000/11/outputs/dataset_ee0c24d5-377a-4588-8297-6744fc1cabb7_files' && cp outputdir/combined_reference/*.html '/tmp/tmpbo76svqj/job_working_directory/000/11/outputs/dataset_ee0c24d5-377a-4588-8297-6744fc1cabb7_files' && if [[ -d "outputdir/icarus_viewers" ]]; then cp -R outputdir/icarus_viewers 'outputdir/combined_reference/'; fi && if [[ -d "outputdir/combined_reference/icarus_viewers" ]]; then cp -R outputdir/combined_reference/icarus_viewers '/tmp/tmpbo76svqj/job_working_directory/000/11/outputs/dataset_ee0c24d5-377a-4588-8297-6744fc1cabb7_files'; fi && if [[ -d "outputdir/krona_charts/" ]]; then mkdir -p 'None' && cp outputdir/krona_charts/*.html 'None'; fi

            Exit Code:

            • 0

            Standard Output:

            • 50contig_reads
              /usr/local/opt/quast-5.3.0/metaquast.py --pe1 pe1-50contig_reads.fastqsanger.gz --pe2 pe2-50contig_reads.fastqsanger.gz --labels 50contig_reads -o outputdir --max-ref-num 0 --min-identity 90.0 --min-contig 500 --min-alignment 65 --ambiguity-usage one --ambiguity-score 0.99 --local-mis-size 200 --contig-thresholds 0,1000 --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500 --x-for-Nx 90 /tmp/tmpbo76svqj/files/1/8/b/dataset_18b41470-af6e-41fc-8ca4-51e94f07d355.dat --threads 1
              
              Version: 5.3.0
              
              System information:
                OS: Linux-6.8.0-1021-azure-x86_64-with-glibc2.36 (linux_64)
                Python version: 3.12.3
                CPUs number: 4
              
              Started: 2025-04-08 14:06:14
              
              Logging to /tmp/tmpbo76svqj/job_working_directory/000/11/working/outputdir/metaquast.log
              WARNING: --ambiguity-usage was set to 'all' because not default --ambiguity-score was specified
              INFO	generated new fontManager
              INFO	generated new fontManager
              
              Contigs:
                Pre-processing...
                /tmp/tmpbo76svqj/files/1/8/b/dataset_18b41470-af6e-41fc-8ca4-51e94f07d355.dat ==> 50contig_reads
              
              NOTICE: Maximum number of references (--max-ref-number) is set to 0, search in SILVA 16S rRNA database is disabled
              
              NOTICE: No references are provided, starting regular QUAST with MetaGeneMark gene finder
              /usr/local/opt/quast-5.3.0/quast.py --pe1 pe1-50contig_reads.fastqsanger.gz --pe2 pe2-50contig_reads.fastqsanger.gz --min-identity 90.0 --min-contig 500 --min-alignment 65 --ambiguity-usage one --ambiguity-score 0.99 --local-mis-size 200 --contig-thresholds 0,1000 --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500 --x-for-Nx 90 --threads 1 /tmp/tmpbo76svqj/files/1/8/b/dataset_18b41470-af6e-41fc-8ca4-51e94f07d355.dat -o /tmp/tmpbo76svqj/job_working_directory/000/11/working/outputdir --labels 50contig_reads
              
              Version: 5.3.0
              
              System information:
                OS: Linux-6.8.0-1021-azure-x86_64-with-glibc2.36 (linux_64)
                Python version: 3.12.3
                CPUs number: 4
              
              Started: 2025-04-08 14:06:15
              
              Logging to /tmp/tmpbo76svqj/job_working_directory/000/11/working/outputdir/quast.log
              NOTICE: Output directory already exists and looks like a QUAST output dir. Existing results can be reused (e.g. previously generated alignments)!
              WARNING: --ambiguity-usage was set to 'all' because not default --ambiguity-score was specified
              
              CWD: /tmp/tmpbo76svqj/job_working_directory/000/11/working
              Main parameters: 
                MODE: meta, threads: 1, min contig length: 500, min alignment length: 65, min alignment IDY: 90.0, \
                ambiguity: all, min local misassembly length: 200, min extensive misassembly length: 1000
              
              Contigs:
                Pre-processing...
                /tmp/tmpbo76svqj/files/1/8/b/dataset_18b41470-af6e-41fc-8ca4-51e94f07d355.dat ==> 50contig_reads
              
              2025-04-08 14:06:15
              Running Reads analyzer...
              NOTICE: Permission denied accessing /usr/local/lib/python3.12/site-packages/quast_libs/gridss. GRIDSS will be downloaded to home directory /tmp/tmpbo76svqj/job_working_directory/000/11/home/.quast
              Downloading gridss (file: gridss-1.4.1.jar)...
               0.0% … 99.0% of 38935087 bytes
              gridss successfully downloaded!
                Logging to files /tmp/tmpbo76svqj/job_working_directory/000/11/working/outputdir/reads_stats/reads_stats.log and /tmp/tmpbo76svqj/job_working_directory/000/11/working/outputdir/reads_stats/reads_stats.err...
                Pre-processing reads...
                Running BWA...
                Done.
                Sorting SAM-file...
                Analysis is finished.
                Creating total report...
                  saved to /tmp/tmpbo76svqj/job_working_directory/000/11/working/outputdir/reads_stats/reads_report.txt, reads_report.tsv, and reads_report.tex
              Done.
              
              2025-04-08 14:06:20
              Running Basic statistics processor...
                Contig files: 
                  50contig_reads
                Calculating N50 and L50...
                  50contig_reads, N50 = 4877, L50 = 22, auN = 5187.8, Total length = 265405, GC % = 36.45, # N's per 100 kbp =  0.00
                Drawing Nx plot...
                  saved to /tmp/tmpbo76svqj/job_working_directory/000/11/working/outputdir/basic_stats/Nx_plot.pdf
                Drawing cumulative plot...
                  saved to /tmp/tmpbo76svqj/job_working_directory/000/11/working/outputdir/basic_stats/cumulative_plot.pdf
                Drawing GC content plot...
                  saved to /tmp/tmpbo76svqj/job_working_directory/000/11/working/outputdir/basic_stats/GC_content_plot.pdf
                Drawing 50contig_reads GC content plot...
                  saved to /tmp/tmpbo76svqj/job_working_directory/000/11/working/outputdir/basic_stats/50contig_reads_GC_content_plot.pdf
              Done.
              
              NOTICE: Genes are not predicted by default. Use --gene-finding or --glimmer option to enable it.
              
              2025-04-08 14:06:21
              Creating large visual summaries...
              This may take a while: press Ctrl-C to skip this step..
                1 of 2: Creating PDF with all tables and plots...
                2 of 2: Creating Icarus viewers...
              Done
              
              2025-04-08 14:06:21
              RESULTS:
                Text versions of total report are saved to /tmp/tmpbo76svqj/job_working_directory/000/11/working/outputdir/report.txt, report.tsv, and report.tex
                Text versions of transposed total report are saved to /tmp/tmpbo76svqj/job_working_directory/000/11/working/outputdir/transposed_report.txt, transposed_report.tsv, and transposed_report.tex
                HTML version (interactive tables and plots) is saved to /tmp/tmpbo76svqj/job_working_directory/000/11/working/outputdir/report.html
                PDF version (tables and plots) is saved to /tmp/tmpbo76svqj/job_working_directory/000/11/working/outputdir/report.pdf
                Icarus (contig browser) is saved to /tmp/tmpbo76svqj/job_working_directory/000/11/working/outputdir/icarus.html
                Log is saved to /tmp/tmpbo76svqj/job_working_directory/000/11/working/outputdir/quast.log
              
              Finished: 2025-04-08 14:06:21
              Elapsed time: 0:00:06.436382
              NOTICEs: 3; WARNINGs: 1; non-fatal ERRORs: 0
              
              Thank you for using QUAST!
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              advanced {"contig_thresholds": "0,1000", "extensive_mis_size": "1000", "fragmented_max_indent": null, "report_all_metrics": false, "scaffold_gap_max_size": "1000", "skip_unaligned_mis_contigs": true, "strict_NA": false, "unaligned_part_size": "500", "x_for_Nx": "90"}
              alignments {"ambiguity_score": "0.99", "ambiguity_usage": "one", "fragmented": false, "local_mis_size": "200", "min_alignment": "65", "upper_bound_assembly": false, "upper_bound_min_con": null, "use_all_alignments": false}
              assembly {"__current_case__": 1, "min_identity": "90.0", "ref": {"__current_case__": 0, "max_ref_num": "0", "origin": "silva"}, "reuse_combined_alignments": false, "type": "metagenome"}
              chromInfo "/tmp/tmpbo76svqj/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              genes {"conserved_genes_finding": false, "gene_finding": {"__current_case__": 0, "tool": "none"}, "rna_finding": false}
              large false
              min_contig "500"
              mode {"__current_case__": 0, "in": {"__current_case__": 1, "custom": "false", "inputs": {"values": [{"id": 10, "src": "dce"}]}}, "mode": "individual", "reads": {"__current_case__": 2, "input_1": {"values": [{"id": 7, "src": "dce"}]}, "input_2": {"values": [{"id": 8, "src": "dce"}]}, "reads_option": "paired"}}
              output_files ["html", "pdf", "tabular", "log", "summary", "krona"]
              split_scaffolds false
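The QUAST basic statistics above (N50 = 4877, L50 = 22 for 50contig_reads) follow directly from the contig lengths. A minimal sketch of the standard N50/L50 definitions, independent of QUAST's own implementation (toy lengths below are illustrative, not the workflow's contigs):

```python
def n50_l50(lengths):
    """Return (N50, L50): the contig length at which contigs of that length
    or longer cover at least half the assembly, and how many contigs that is."""
    total = sum(lengths)
    running = 0
    for count, length in enumerate(sorted(lengths, reverse=True), start=1):
        running += length
        if running * 2 >= total:
            return length, count
    return 0, 0

print(n50_l50([100, 200, 300, 400]))  # (300, 2)
```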
      • Step 27: toolshed.g2.bx.psu.edu/repos/devteam/bowtie2/bowtie2/2.5.3+galaxy1:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • set -o | grep -q pipefail && set -o pipefail; bowtie2-build --threads ${GALAXY_SLOTS:-4} '/tmp/tmpbo76svqj/files/1/8/b/dataset_18b41470-af6e-41fc-8ca4-51e94f07d355.dat' genome && ln -s -f '/tmp/tmpbo76svqj/files/1/8/b/dataset_18b41470-af6e-41fc-8ca4-51e94f07d355.dat' genome.fa &&   ln -f -s '/tmp/tmpbo76svqj/files/b/2/7/dataset_b27ae4d4-9ab0-4bc6-b3d2-325dc79aa2b6.dat' input_f.fastq.gz &&  ln -f -s '/tmp/tmpbo76svqj/files/1/3/f/dataset_13fc15e4-5959-4e47-a36d-075c5894ac88.dat' input_r.fastq.gz &&    THREADS=${GALAXY_SLOTS:-4} && if [ "$THREADS" -gt 1 ]; then (( THREADS-- )); fi &&   bowtie2  -p "$THREADS"  -x 'genome'   -1 'input_f.fastq.gz' -2 'input_r.fastq.gz'                2> >(tee '/tmp/tmpbo76svqj/job_working_directory/000/12/outputs/dataset_363c5fba-1d6a-469b-bc82-93a15630d0a4.dat' >&2)  | samtools sort -l 0 -T "${TMPDIR:-.}" -O bam | samtools view --no-PG -O bam -@ ${GALAXY_SLOTS:-1} -o '/tmp/tmpbo76svqj/job_working_directory/000/12/outputs/dataset_9ed09d4e-2d74-46b0-88c7-c9d362d8c801.dat'

            Exit Code:

            • 0

            Standard Error:

            • Building a SMALL index
              Renaming genome.3.bt2.tmp to genome.3.bt2
              Renaming genome.4.bt2.tmp to genome.4.bt2
              Renaming genome.1.bt2.tmp to genome.1.bt2
              Renaming genome.2.bt2.tmp to genome.2.bt2
              Renaming genome.rev.1.bt2.tmp to genome.rev.1.bt2
              Renaming genome.rev.2.bt2.tmp to genome.rev.2.bt2
              9462 reads; of these:
                9462 (100.00%) were paired; of these:
                  90 (0.95%) aligned concordantly 0 times
                  9300 (98.29%) aligned concordantly exactly 1 time
                  72 (0.76%) aligned concordantly >1 times
                  ----
                  90 pairs aligned concordantly 0 times; of these:
                    8 (8.89%) aligned discordantly 1 time
                  ----
                  82 pairs aligned 0 times concordantly or discordantly; of these:
                    164 mates make up the pairs; of these:
                      93 (56.71%) aligned 0 times
                      70 (42.68%) aligned exactly 1 time
                      1 (0.61%) aligned >1 times
              99.51% overall alignment rate
              

            Standard Output:

            • Settings:
                Output files: "genome.*.bt2"
                Line rate: 6 (line is 64 bytes)
                Lines per side: 1 (side is 64 bytes)
                Offset rate: 4 (one in 16)
                FTable chars: 10
                Strings: unpacked
                Max bucket size: default
                Max bucket size, sqrt multiplier: default
                Max bucket size, len divisor: 4
                Difference-cover sample period: 1024
                Endianness: little
                Actual local endianness: little
                Sanity checking: disabled
                Assertions: disabled
                Random seed: 0
                Sizeofs: void*:8, int:4, long:8, size_t:8
              Input files DNA, FASTA:
                /tmp/tmpbo76svqj/files/1/8/b/dataset_18b41470-af6e-41fc-8ca4-51e94f07d355.dat
              Reading reference sizes
                Time reading reference sizes: 00:00:00
              Calculating joined length
              Writing header
              Reserving space for joined string
              Joining reference sequences
                Time to join reference sequences: 00:00:00
              bmax according to bmaxDivN setting: 66531
              Using parameters --bmax 49899 --dcv 1024
                Doing ahead-of-time memory usage test
                Passed!  Constructing with these parameters: --bmax 49899 --dcv 1024
              Constructing suffix-array element generator
              Building DifferenceCoverSample
                Building sPrime
                Building sPrimeOrder
                V-Sorting samples
                V-Sorting samples time: 00:00:00
                Allocating rank array
                Ranking v-sort output
                Ranking v-sort output time: 00:00:00
                Invoking Larsson-Sadakane on ranks
                Invoking Larsson-Sadakane on ranks time: 00:00:00
                Sanity-checking and returning
              Building samples
              Reserving space for 12 sample suffixes
              Generating random suffixes
              QSorting 12 sample offsets, eliminating duplicates
              QSorting sample offsets, eliminating duplicates time: 00:00:00
              Multikey QSorting 12 samples
                (Using difference cover)
                Multikey QSorting samples time: 00:00:00
              Calculating bucket sizes
              Splitting and merging
                Splitting and merging time: 00:00:00
              Split 1, merged 6; iterating...
              Splitting and merging
                Splitting and merging time: 00:00:00
              Avg bucket size: 38016.9 (target: 49898)
              Converting suffix-array elements to index image
              Allocating ftab, absorbFtab
              Entering Ebwt loop
              Getting blocks 1 of 7 through 7 of 7 (per-bucket "Entering block accumulator loop … 10% … 100%" progress condensed)
                Sorting blocks (using difference cover); returned blocks of 18324, 49607, 45152, 49788, 28639, 43195, and 31420 for buckets 1 to 7
              Exited Ebwt loop
              fchr[A]: 0
              fchr[C]: 84325
              fchr[G]: 133305
              fchr[T]: 181307
              fchr[$]: 266124
              Exiting Ebwt::buildToDisk()
              Returning from initFromVector
              Wrote 4286544 bytes to primary EBWT file: genome.1.bt2.tmp
              Wrote 66536 bytes to secondary EBWT file: genome.2.bt2.tmp
              Re-opening _in1 and _in2 as input streams
              Returning from Ebwt constructor
              Headers:
                  len: 266124
                  bwtLen: 266125
                  sz: 66531
                  bwtSz: 66532
                  lineRate: 6
                  offRate: 4
                  offMask: 0xfffffff0
                  ftabChars: 10
                  eftabLen: 20
                  eftabSz: 80
                  ftabLen: 1048577
                  ftabSz: 4194308
                  offsLen: 16633
                  offsSz: 66532
                  lineSz: 64
                  sideSz: 64
                  sideBwtSz: 48
                  sideBwtLen: 192
                  numSides: 1387
                  numLines: 1387
                  ebwtTotLen: 88768
                  ebwtTotSz: 88768
                  color: 0
                  reverse: 0
              Total time for call to driver() for forward index: 00:00:00
              Reading reference sizes
                Time reading reference sizes: 00:00:00
              Calculating joined length
              Writing header
              Reserving space for joined string
              Joining reference sequences
                Time to join reference sequences: 00:00:00
                Time to reverse reference sequence: 00:00:00
              bmax according to bmaxDivN setting: 66531
              Using parameters --bmax 49899 --dcv 1024
                Doing ahead-of-time memory usage test
                Passed!  Constructing with these parameters: --bmax 49899 --dcv 1024
              Constructing suffix-array element generator
              Building DifferenceCoverSample
                Building sPrime
                Building sPrimeOrder
                V-Sorting samples
                V-Sorting samples time: 00:00:00
                Allocating rank array
                Ranking v-sort output
                Ranking v-sort output time: 00:00:00
                Invoking Larsson-Sadakane on ranks
                Invoking Larsson-Sadakane on ranks time: 00:00:00
                Sanity-checking and returning
              Building samples
              Reserving space for 12 sample suffixes
              Generating random suffixes
              QSorting 12 sample offsets, eliminating duplicates
              QSorting sample offsets, eliminating duplicates time: 00:00:00
              Multikey QSorting 12 samples
                (Using difference cover)
                Multikey QSorting samples time: 00:00:00
              Calculating bucket sizes
              Splitting and merging
                Splitting and merging time: 00:00:00
              Split 1, merged 7; iterating...
              Splitting and merging
                Splitting and merging time: 00:00:00
              Avg bucket size: 44353.2 (target: 49898)
              Converting suffix-array elements to index image
              Allocating ftab, absorbFtab
              Entering Ebwt loop
              Getting blocks 1 of 6 through 6 of 6 (per-bucket "Entering block accumulator loop … 10% … 100%" progress condensed)
                Sorting blocks (using difference cover); returned blocks of 47688, 36637, 49028, 37450, 47143, and 48179 for buckets 1 to 6
              Exited Ebwt loop
              fchr[A]: 0
              fchr[C]: 84325
              fchr[G]: 133305
              fchr[T]: 181307
              fchr[$]: 266124
              Exiting Ebwt::buildToDisk()
              Returning from initFromVector
              Wrote 4286544 bytes to primary EBWT file: genome.rev.1.bt2.tmp
              Wrote 66536 bytes to secondary EBWT file: genome.rev.2.bt2.tmp
              Re-opening _in1 and _in2 as input streams
              Returning from Ebwt constructor
              Headers:
                  len: 266124
                  bwtLen: 266125
                  sz: 66531
                  bwtSz: 66532
                  lineRate: 6
                  offRate: 4
                  offMask: 0xfffffff0
                  ftabChars: 10
                  eftabLen: 20
                  eftabSz: 80
                  ftabLen: 1048577
                  ftabSz: 4194308
                  offsLen: 16633
                  offsSz: 66532
                  lineSz: 64
                  sideSz: 64
                  sideBwtSz: 48
                  sideBwtLen: 192
                  numSides: 1387
                  numLines: 1387
                  ebwtTotLen: 88768
                  ebwtTotSz: 88768
                  color: 0
                  reverse: 1
              Total time for backward call to driver() for mirror index: 00:00:00
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              analysis_type {"__current_case__": 0, "analysis_type_selector": "simple", "presets": "no_presets"}
              chromInfo "/tmp/tmpbo76svqj/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              library {"__current_case__": 2, "aligned_file": false, "input_1": {"values": [{"id": 1, "src": "dce"}]}, "paired_options": {"__current_case__": 1, "paired_options_selector": "no"}, "type": "paired_collection", "unaligned_file": false}
              own_file __identifier__
              reference_genome {"__current_case__": 1, "own_file": {"values": [{"id": 10, "src": "dce"}]}, "source": "history"}
              rg {"__current_case__": 3, "rg_selector": "do_not_set"}
              sam_options {"__current_case__": 1, "sam_options_selector": "no"}
              save_mapping_stats true
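With save_mapping_stats enabled, the dataset captured from bowtie2's stderr contains the headline "overall alignment rate" line shown above (99.51%). Pulling that figure out for a quick sanity check is a one-line parse; a minimal sketch (the regex is an assumption for illustration, not part of the Galaxy wrapper):

```python
import re

def overall_alignment_rate(stderr_text):
    """Extract bowtie2's 'X% overall alignment rate' summary line as a float,
    or None if the summary is absent."""
    m = re.search(r"([\d.]+)% overall alignment rate", stderr_text)
    return float(m.group(1)) if m else None

log = "9462 reads; of these:\n  ...\n99.51% overall alignment rate\n"
print(overall_alignment_rate(log))  # 99.51
```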
      • Step 28: toolshed.g2.bx.psu.edu/repos/iuc/concoct_cut_up_fasta/concoct_cut_up_fasta/1.1.0+galaxy2:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • ln -s '/tmp/tmpbo76svqj/files/1/8/b/dataset_18b41470-af6e-41fc-8ca4-51e94f07d355.dat' 'input.fa' &&  cut_up_fasta.py 'input.fa' --chunk_size 10000 --overlap_size 0 --merge_last --bedfile '/tmp/tmpbo76svqj/job_working_directory/000/13/outputs/dataset_97c6b6c4-2bd3-4a23-8b1c-94f97d599d0e.dat' > '/tmp/tmpbo76svqj/job_working_directory/000/13/outputs/dataset_979c6b30-d621-4027-b77a-e875cc7da112.dat'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              bedfile true
              chromInfo "/tmp/tmpbo76svqj/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              chunk_size "10000"
              dbkey "?"
              input_fasta __identifier__
              merge_last true
              overlap_size "0"
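With --chunk_size 10000, --overlap_size 0, and --merge_last, each contig is cut into 10 kb pieces and the final short remainder is folded into the last full chunk, so the last chunk can be up to twice the chunk size. A rough re-implementation of that splitting rule (simplified from CONCOCT's cut_up_fasta.py; the non-overlapping case only, names illustrative):

```python
def cut_up(seq, chunk_size=10000, merge_last=True):
    """Split a sequence into chunk_size pieces; with merge_last, fold the
    trailing remainder into the final chunk instead of emitting it alone."""
    if len(seq) <= chunk_size:
        return [seq]
    n_full = len(seq) // chunk_size
    chunks = [seq[i * chunk_size:(i + 1) * chunk_size] for i in range(n_full)]
    rest = seq[n_full * chunk_size:]
    if rest:
        if merge_last:
            chunks[-1] += rest
        else:
            chunks.append(rest)
    return chunks

print([len(c) for c in cut_up("A" * 25000)])  # [10000, 15000]
```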
      • Step 29: toolshed.g2.bx.psu.edu/repos/devteam/samtools_sort/samtools_sort/2.0.5:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • addthreads=${GALAXY_SLOTS:-1} && (( addthreads-- )) &&   addmemory=${GALAXY_MEMORY_MB_PER_SLOT:-768} && ((addmemory=addmemory*75/100)) &&  samtools sort -@ $addthreads -m $addmemory"M"   -O bam -T "${TMPDIR:-.}" '/tmp/tmpbo76svqj/files/9/e/d/dataset_9ed09d4e-2d74-46b0-88c7-c9d362d8c801.dat' > '/tmp/tmpbo76svqj/job_working_directory/000/14/outputs/dataset_7a8425b7-a209-4210-8bb2-d58c3044d20e.dat'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              chromInfo "/tmp/tmpbo76svqj/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              input1 __identifier__
              minhash false
              prim_key_cond {"__current_case__": 0, "prim_key_select": ""}
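The shell arithmetic in the command line above reserves one thread for the main samtools process (addthreads = slots - 1) and hands each sort thread 75% of the per-slot memory allocation. The same computation as a small helper (function and argument names are illustrative, not part of the wrapper):

```python
def sort_resources(slots=1, mem_mb_per_slot=768):
    """Mirror the wrapper's shell arithmetic: extra sort threads = slots - 1,
    per-thread memory = 75% of the per-slot allocation, in whole MB."""
    add_threads = max(slots - 1, 0)
    add_memory = mem_mb_per_slot * 75 // 100
    return ["samtools", "sort", "-@", str(add_threads), "-m", f"{add_memory}M"]

print(sort_resources(4, 768))  # ['samtools', 'sort', '-@', '3', '-m', '576M']
```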
      • Step 30: toolshed.g2.bx.psu.edu/repos/iuc/semibin/semibin/2.0.2+galaxy1:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • ln -s '/tmp/tmpbo76svqj/files/7/a/8/dataset_7a8425b7-a209-4210-8bb2-d58c3044d20e.dat' '50contig_reads.bam' &&   ln -s '/tmp/tmpbo76svqj/files/1/8/b/dataset_18b41470-af6e-41fc-8ca4-51e94f07d355.dat' 'contigs.fasta' &&  SemiBin2 single_easy_bin --environment 'global' --input-fasta 'contigs.fasta' --input-bam *.bam --output 'output' --cannot-name 'cannot'   --orf-finder 'fast-naive' --random-seed 0  --epoches 20 --batch-size 2048 --max-node 1.0 --max-edges 200 --minfasta-kbs 200 --compression none --threads ${GALAXY_SLOTS:-1} --processes ${GALAXY_SLOTS:-1} && echo "output" && ls output

            Exit Code:

            • 0

            Standard Error:

            • 2025-04-08 14:07:04 7565e4633be5 SemiBin[10] INFO Binning for short_read
              2025-04-08 14:07:09 7565e4633be5 SemiBin[10] INFO Did not detect GPU, using CPU.
              2025-04-08 14:07:09 7565e4633be5 SemiBin[10] INFO Generating training data...
              2025-04-08 14:07:10 7565e4633be5 SemiBin[10] INFO Calculating coverage for every sample.
              2025-04-08 14:07:10 7565e4633be5 SemiBin[10] INFO Processed: 50contig_reads.bam
              2025-04-08 14:07:11 7565e4633be5 SemiBin[10] INFO Start binning.
              2025-04-08 14:07:13 7565e4633be5 SemiBin[10] INFO Number of bins prior to reclustering: 1
              2025-04-08 14:07:13 7565e4633be5 SemiBin[10] INFO Running naive ORF finder
              2025-04-08 14:07:14 7565e4633be5 SemiBin[10] INFO Number of bins after reclustering: 1
              2025-04-08 14:07:14 7565e4633be5 SemiBin[10] INFO Binning finished
              

            Standard Output:

            • If you find SemiBin useful, please cite:
                      Pan, S.; Zhu, C.; Zhao, XM.; Coelho, LP. A deep siamese neural network improves metagenome-assembled genomes in microbiome datasets across different environments. Nat Commun 13, 2326 (2022). https://doi.org/10.1038/s41467-022-29843-y
              
                      Pan, S.; Zhao, XM; Coelho, LP. SemiBin2: self-supervised contrastive learning leads to better MAGs for short- and long-read sequencing. Bioinformatics Volume 39, Issue Supplement_1, June 2023, Pages i21–i29. https://doi.org/10.1093/bioinformatics/btad209
              
              
              output
              50contig_reads.bam_0_data_cov.csv
              SemiBinRun.log
              contig_bins.tsv
              data.csv
              data_split.csv
              output_bins
              recluster_bins_info.tsv
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              annot {"ml_threshold": null}
              bin {"max_edges": "200", "max_node": "1.0", "minfasta_kbs": "200"}
              chromInfo "/tmp/tmpbo76svqj/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              extra_output ["data", "coverage"]
              min_len {"__current_case__": 0, "method": "automatic"}
              mode {"__current_case__": 0, "environment": "global", "input_bam": {"values": [{"id": 20, "src": "dce"}]}, "input_fasta": {"values": [{"id": 10, "src": "dce"}]}, "ref": {"__current_case__": 2, "select": "ml"}, "select": "single"}
              orf_finder "fast-naive"
              random_seed "0"
              training {"batch_size": "2048", "epoches": "20"}
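Among the outputs SemiBin2 lists above is contig_bins.tsv, which maps each contig to a bin index; downstream steps mostly need the inverse view, contigs grouped per bin. A minimal parser for such a two-column TSV (the exact column layout is an assumption inferred from the file name; check the actual output):

```python
from collections import defaultdict

def group_bins(tsv_text):
    """Group contig names by bin id from a 'contig<TAB>bin' table."""
    bins = defaultdict(list)
    for line in tsv_text.strip().splitlines():
        contig, bin_id = line.split("\t")
        bins[bin_id].append(contig)
    return dict(bins)

table = "contig_1\t0\ncontig_2\t0\ncontig_3\t1\n"
print(group_bins(table))  # {'0': ['contig_1', 'contig_2'], '1': ['contig_3']}
```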
      • Step 4: Read length (CONCOCT):

        • step_state: scheduled
      • Step 31: toolshed.g2.bx.psu.edu/repos/iuc/concoct_coverage_table/concoct_coverage_table/1.1.0+galaxy2:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • mkdir 'mapping' && ln -s '/tmp/tmpbo76svqj/files/7/a/8/dataset_7a8425b7-a209-4210-8bb2-d58c3044d20e.dat' 'mapping/_tmp_tmpbo76svqj_files_7_a_8_dataset_7a8425b7-a209-4210-8bb2-d58c3044d20e.dat.sorted.bam' && samtools index 'mapping/_tmp_tmpbo76svqj_files_7_a_8_dataset_7a8425b7-a209-4210-8bb2-d58c3044d20e.dat.sorted.bam' 'mapping/_tmp_tmpbo76svqj_files_7_a_8_dataset_7a8425b7-a209-4210-8bb2-d58c3044d20e.dat.bam.bai' && mv 'mapping/_tmp_tmpbo76svqj_files_7_a_8_dataset_7a8425b7-a209-4210-8bb2-d58c3044d20e.dat.sorted.bam' 'mapping/_tmp_tmpbo76svqj_files_7_a_8_dataset_7a8425b7-a209-4210-8bb2-d58c3044d20e.dat.bam' && concoct_coverage_table.py '/tmp/tmpbo76svqj/files/9/7/c/dataset_97c6b6c4-2bd3-4a23-8b1c-94f97d599d0e.dat' mapping/*.bam > '/tmp/tmpbo76svqj/job_working_directory/000/16/outputs/dataset_2dd8d97a-0d51-4828-a323-c51ea729610b.dat'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              bedfile __identifier__
              chromInfo "/tmp/tmpbo76svqj/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              mode {"__current_case__": 0, "bamfile": {"values": [{"id": 20, "src": "dce"}]}, "type": "individual"}
              mode bamfile
      • Step 32: toolshed.g2.bx.psu.edu/repos/iuc/metabat2_jgi_summarize_bam_contig_depths/metabat2_jgi_summarize_bam_contig_depths/2.17+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • jgi_summarize_bam_contig_depths --outputDepth '/tmp/tmpbo76svqj/job_working_directory/000/17/outputs/dataset_3baebf7c-d0ea-4486-a616-e20b7ce0cefb.dat' --percentIdentity 97   --minMapQual 0 --weightMapQual 0.0  --maxEdgeBases 75 --shredLength 16000 --shredDepth 5 --minContigLength 1 --minContigDepth 0.0 '/tmp/tmpbo76svqj/files/7/a/8/dataset_7a8425b7-a209-4210-8bb2-d58c3044d20e.dat'

            Exit Code:

            • 0

            Standard Error:

            • Output depth matrix to /tmp/tmpbo76svqj/job_working_directory/000/17/outputs/dataset_3baebf7c-d0ea-4486-a616-e20b7ce0cefb.dat
              Minimum percent identity for a mapped read: 0.97
              minMapQual: 0
              weightMapQual: 0
              Edge bases will be included up to 75 bases
              shredLength: 16000
              shredDepth: 5
              minContigLength: 1
              minContigDepth: 0
              jgi_summarize_bam_contig_depths 2.17 (Bioconda) 2024-12-15T06:34:17
              Running with 4 threads to save memory you can reduce the number of threads with the OMP_NUM_THREADS variable
              Output matrix to /tmp/tmpbo76svqj/job_working_directory/000/17/outputs/dataset_3baebf7c-d0ea-4486-a616-e20b7ce0cefb.dat
              Opening all bam files and validating headers
              Processing bam files with largest_contig=0
              Thread 0 opening and reading the header for file: /tmp/tmpbo76svqj/files/7/a/8/dataset_7a8425b7-a209-4210-8bb2-d58c3044d20e.dat
              Thread 0 opened the file: /tmp/tmpbo76svqj/files/7/a/8/dataset_7a8425b7-a209-4210-8bb2-d58c3044d20e.dat
              Thread 0 processing bam 0: dataset_7a8425b7-a209-4210-8bb2-d58c3044d20e.dat
              Thread 0 finished reading bam 0: dataset_7a8425b7-a209-4210-8bb2-d58c3044d20e.dat
              Thread 0 finished: dataset_7a8425b7-a209-4210-8bb2-d58c3044d20e.dat with 18924 reads and 8473 readsWellMapped (44.7738%)
              Creating depth matrix file: /tmp/tmpbo76svqj/job_working_directory/000/17/outputs/dataset_3baebf7c-d0ea-4486-a616-e20b7ce0cefb.dat
              Closing last bam file
              Finished
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              advanced {"includeEdgeBases": false, "maxEdgeBases": "75", "minMapQual": "0", "noIntraDepthVariance": false, "output_paired_contigs": false, "percentIdentity": "97", "showDepth": false, "weightMapQual": "0.0"}
              bam_indiv_input __identifier__
              chromInfo "/tmp/tmpbo76svqj/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              mode {"__current_case__": 0, "bam_indiv_input": {"values": [{"id": 20, "src": "dce"}]}, "type": "individual", "use_reference_cond": {"__current_case__": 0, "use_reference": "no"}}
              shredding {"minContigDepth": "0.0", "minContigLength": "1", "shredDepth": "5", "shredLength": "16000"}
      • Step 33: toolshed.g2.bx.psu.edu/repos/iuc/fasta_to_contig2bin/Fasta_to_Contig2Bin/1.1.7+galaxy1:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • mkdir "inputs" && ln -s '/tmp/tmpbo76svqj/files/0/a/5/dataset_0a5a242d-bdb8-4341-91e1-ce4b559b8e4c.dat' 'inputs/SemiBin_0.fasta' && Fasta_to_Contig2Bin.sh --extension fasta --input_folder 'inputs' > '/tmp/tmpbo76svqj/job_working_directory/000/25/outputs/dataset_8908e207-800e-45aa-9429-b8e285786422.dat'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              chromInfo "/tmp/tmpbo76svqj/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              inputs {"values": [{"id": 21, "src": "dce"}]}
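The `Fasta_to_Contig2Bin.sh` call above turns a folder of per-bin FASTA files into the contig-to-bin table that Binette consumes. A minimal sketch of that mapping (file names and sequences here are illustrative, not taken from the run):

```shell
# Emit one "contig<TAB>bin" row per FASTA header, naming the bin after the file,
# which is the shape of table Fasta_to_Contig2Bin.sh produces.
mkdir -p inputs
printf '>k141_0\nACGT\n>k141_52\nTTGA\n' > inputs/SemiBin_0.fasta
for f in inputs/*.fasta; do
    bin=$(basename "$f" .fasta)
    grep '^>' "$f" | awk -v b="$bin" '{ sub(/^>/, ""); print $1 "\t" b }'
done
```

This only reproduces the output format; the real script additionally handles alternative extensions via `--extension`.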
      • Step 34: toolshed.g2.bx.psu.edu/repos/iuc/concoct/concoct/1.1.0+galaxy2:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • ln -s '/tmp/tmpbo76svqj/files/9/7/9/dataset_979c6b30-d621-4027-b77a-e875cc7da112.dat' 'composition_file.fa' &&  mkdir outdir && concoct --coverage_file '/tmp/tmpbo76svqj/files/2/d/d/dataset_2dd8d97a-0d51-4828-a323-c51ea729610b.dat' --composition_file 'composition_file.fa' --clusters 400 --kmer_length 4 --threads ${GALAXY_SLOTS:-4} --length_threshold 1000 --read_length 100 --total_percentage_pca 90 --basename 'outdir/' --seed 1 --iterations 500   --no_original_data

            Exit Code:

            • 0

            Standard Error:

            • WARNING:root:CONCOCT is running in single threaded mode. Please, consider adjusting the --threads parameter.
              Up and running. Check /tmp/tmpbo76svqj/job_working_directory/000/18/working/outdir/log.txt for progress
              Setting 1 OMP threads
              Generate input data
              0,-32769.159419,695.861881
              1,-23702.908064,9066.251354
              2,-10301.271601,13401.636463
              3,-9099.561931,1201.709670
              4,-8491.920398,607.641533
              5,-8491.793887,0.126512
              6,-8491.793444,0.000443
              7,-8491.793370,0.000074
              

            Standard Output:

            • 59 2 1
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              advanced {"clusters": "400", "iterations": "500", "kmer_length": "4", "length_threshold": "1000", "no_cov_normalization": false, "read_length": "100", "seed": "1", "total_percentage_pca": "90"}
              chromInfo "/tmp/tmpbo76svqj/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              output {"converge_out": false, "log": false, "no_total_coverage": false}
      • Step 35: toolshed.g2.bx.psu.edu/repos/mbernt/maxbin2/maxbin2/2.2.7+galaxy6:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • echo '/tmp/tmpbo76svqj/files/3/b/a/dataset_3baebf7c-d0ea-4486-a616-e20b7ce0cefb.dat' >> abund_list &&   run_MaxBin.pl -contig '/tmp/tmpbo76svqj/files/1/8/b/dataset_18b41470-af6e-41fc-8ca4-51e94f07d355.dat' -out out -abund_list abund_list -min_contig_length 1000 -max_iteration 50 -prob_threshold 0.5 -plotmarker -markerset 107 -thread ${GALAXY_SLOTS:-1}  && gzip -cd out.marker_of_each_bin.tar.gz | tar -xf -

            Exit Code:

            • 0

            Standard Error:

            • Attaching package: ‘gplots’
              
              The following object is masked from ‘package:stats’:
              
                  lowess
              
              

            Standard Output:

            • MaxBin 2.2.7
              Input contig: /tmp/tmpbo76svqj/files/1/8/b/dataset_18b41470-af6e-41fc-8ca4-51e94f07d355.dat
              out header: out
              Min contig length: 1000
              Max iteration: 50
              Probability threshold: 0.5
              Thread: 1
              Located abundance file [/tmp/tmpbo76svqj/files/3/b/a/dataset_3baebf7c-d0ea-4486-a616-e20b7ce0cefb.dat]
              Searching against 107 marker genes to find starting seed contigs for [/tmp/tmpbo76svqj/files/1/8/b/dataset_18b41470-af6e-41fc-8ca4-51e94f07d355.dat]...
              Running FragGeneScan....
              Running HMMER hmmsearch....
              Try harder to dig out marker genes from contigs.
              Done data collection. Running MaxBin...
              Command: /usr/local/opt/MaxBin-2.2.7/src/MaxBin -fasta out.contig.tmp  -abund out.contig.tmp.abund1 -seed out.seed -out out -min_contig_length 1000 -max_run 50 -prob_threshold 0.5 
              Minimum contig length set to 1000.
              Reading seed list...
              Looking for seeds in sequences.
              	k141_52 [11001.000000]
              	k141_59 [9465.000000]
              Get 2 seeds.
              
              Start EM process.
              Iteration 1
              Iteration 2
              Iteration 3
              Iteration 4
              Iteration 5
              Iteration 6
              Iteration 7
              Iteration 8
              Iteration 9
              Iteration 10
              Iteration 11
              Iteration 12
              Iteration 13
              
              EM finishes successfully.
              
              Classifying sequences based on the EM result.
              Minimum probability for binning: 0.50
              Ignoring 0 bins without any sequences.
              Number of unclassified sequences: 0 (0.00%)
              Elapsed time:  0 days 00:00:00
              
              Rscript /usr/local/opt/MaxBin-2.2.7/heatmap.r out.marker out.marker.pdf
              null device 
                        1 
              out.001.marker.fasta
              out.002.marker.fasta
              Deleting intermediate files.
              
              
              ========== Job finished ==========
              Yielded 2 bins for contig (scaffold) file /tmp/tmpbo76svqj/files/1/8/b/dataset_18b41470-af6e-41fc-8ca4-51e94f07d355.dat
              
              Here are the output files for this run.
              Please refer to the README file for further details.
              
              Summary file: out.summary
              Marker counts: out.marker
              Marker genes for each bin: out.marker_of_each_gene.tar.gz
              Bin files: out.001.fasta - out.002.fasta
              Unbinned sequences: out.noclass
              Marker plot: out.marker.pdf
              
              
              ========== Elapsed Time ==========
              0 hours 0 minutes and 2 seconds.
              
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "expression.json"
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              adv {"max_iteration": "50", "min_contig_length": "1000", "prob_threshold": "0.5"}
              assembly {"__current_case__": 0, "inputs": {"__current_case__": 1, "abund": {"values": [{"id": 27, "src": "dce"}]}, "type": "abund"}, "type": "individual"}
              chromInfo "/tmp/tmpbo76svqj/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              output {"log": true, "marker": true, "markers": true, "markerset": "107", "plotmarker": true}
      • Step 36: toolshed.g2.bx.psu.edu/repos/iuc/metabat2/metabat2/2.17+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • mkdir bins && metabat2 --inFile '/tmp/tmpbo76svqj/files/1/8/b/dataset_18b41470-af6e-41fc-8ca4-51e94f07d355.dat' --outFile 'bins/bin' --abdFile '/tmp/tmpbo76svqj/files/3/b/a/dataset_3baebf7c-d0ea-4486-a616-e20b7ce0cefb.dat' --minContig 1500 --maxP 95 --minS 60 --maxEdges 200 --pTNF 0  --minCV 1.0 --minCVSum 1.0 --seed 0 --minClsSize 200000 --numThreads ${GALAXY_SLOTS:-4}  --unbinned > process_log.txt && mv process_log.txt '/tmp/tmpbo76svqj/job_working_directory/000/20/outputs/dataset_c21a993b-6caf-4739-8f13-162efbf15d43.dat'

            Exit Code:

            • 0

            Standard Error:

            • [Warning!] Negative coverage depth is not allowed for the contig k141_0, column 1: -4.30218e+08
              [Warning!] Negative coverage depth is not allowed for the contig k141_52, column 1: -2.7651e+08
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              advanced {"base_coverage_depth_cond": {"__current_case__": 1, "abdFile": {"values": [{"id": 27, "src": "dce"}]}, "base_coverage_depth": "yes", "cvExt": null}, "maxEdges": "200", "maxP": "95", "minCV": "1.0", "minCVSum": "1.0", "minContig": "1500", "minS": "60", "noAdd": false, "pTNF": "0", "seed": "0"}
              advanced abdFile
              chromInfo "/tmp/tmpbo76svqj/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              inFile __identifier__
              out {"extra_outputs": ["lowDepth", "tooShort", "unbinned", "log"], "minClsSize": "200000", "onlyLabel": false, "saveCls": false}
      • Step 37: toolshed.g2.bx.psu.edu/repos/iuc/concoct_merge_cut_up_clustering/concoct_merge_cut_up_clustering/1.1.0+galaxy2:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • merge_cutup_clustering.py '/tmp/tmpbo76svqj/files/3/7/0/dataset_37014003-9e06-4c4c-bbb2-c319f632d067.dat' > '/tmp/tmpbo76svqj/job_working_directory/000/21/outputs/dataset_32a65316-5433-41d8-8529-5ad6070b9cf6.dat'

            Exit Code:

            • 0

            Standard Error:

            • /usr/local/bin/merge_cutup_clustering.py:17: SyntaxWarning: invalid escape sequence '\.'
                CONTIG_PART_EXPR = re.compile("(.*)\.concoct_part_([0-9]*)")
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              chromInfo "/tmp/tmpbo76svqj/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              cutup_clustering_result __identifier__
              dbkey "?"
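The `SyntaxWarning` above is harmless: the pattern in `merge_cutup_clustering.py` still compiles, it just should be a raw string (`r"(.*)\.concoct_part_([0-9]*)"`) in newer Pythons. A sketch of the name normalization that pattern drives, shown with example contig names rather than data from this run (the real script also merges the per-part cluster assignments, which is omitted here):

```shell
# Strip the ".concoct_part_N" suffix so cut-up pieces collapse back to
# their original contig name, then deduplicate.
printf 'k141_52.concoct_part_0\nk141_52.concoct_part_1\nk141_59.concoct_part_0\n' \
  | awk '{ sub(/\.concoct_part_[0-9]+$/, ""); print }' | sort -u
```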
      • Step 38: toolshed.g2.bx.psu.edu/repos/iuc/fasta_to_contig2bin/Fasta_to_Contig2Bin/1.1.7+galaxy1:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • mkdir "inputs" && ln -s '/tmp/tmpbo76svqj/files/2/c/b/dataset_2cb70f16-77f8-4346-94de-ff620500e606.dat' 'inputs/001.fasta' && ln -s '/tmp/tmpbo76svqj/files/6/4/1/dataset_641e8ede-7647-4ced-a70a-e79a93714e7c.dat' 'inputs/002.fasta' && Fasta_to_Contig2Bin.sh --extension fasta --input_folder 'inputs' > '/tmp/tmpbo76svqj/job_working_directory/000/26/outputs/dataset_cdf66de2-f37b-4ba4-bc38-e4ad7e7282f4.dat'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              chromInfo "/tmp/tmpbo76svqj/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              inputs {"values": [{"id": 31, "src": "dce"}]}
      • Step 39: toolshed.g2.bx.psu.edu/repos/iuc/fasta_to_contig2bin/Fasta_to_Contig2Bin/1.1.7+galaxy1:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • mkdir "inputs" && ln -s '/tmp/tmpbo76svqj/files/6/e/7/dataset_6e7933bb-f1d2-4ae2-a497-da6acfc701c2.dat' 'inputs/1.fasta' && Fasta_to_Contig2Bin.sh --extension fasta --input_folder 'inputs' > '/tmp/tmpbo76svqj/job_working_directory/000/23/outputs/dataset_89b52c48-d763-47ea-a97b-c7c864e8b314.dat'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              chromInfo "/tmp/tmpbo76svqj/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              inputs {"values": [{"id": 39, "src": "dce"}]}
      • Step 40: toolshed.g2.bx.psu.edu/repos/iuc/concoct_extract_fasta_bins/concoct_extract_fasta_bins/1.1.0+galaxy2:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • ln -s '/tmp/tmpbo76svqj/files/1/8/b/dataset_18b41470-af6e-41fc-8ca4-51e94f07d355.dat' 'contigs.fa' &&  mkdir outdir && extract_fasta_bins.py 'contigs.fa' '/tmp/tmpbo76svqj/files/3/2/a/dataset_32a65316-5433-41d8-8529-5ad6070b9cf6.dat' --output_path 'outdir'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              chromInfo "/tmp/tmpbo76svqj/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              cluster_file __identifier__
              dbkey "?"
              fasta_file __identifier__
      • Step 5: Environment for the built-in model (SemiBin):

        • step_state: scheduled
      • Step 41: toolshed.g2.bx.psu.edu/repos/iuc/fasta_to_contig2bin/Fasta_to_Contig2Bin/1.1.7+galaxy1:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • mkdir "inputs" && ln -s '/tmp/tmpbo76svqj/files/8/f/6/dataset_8f6a7fb0-d2bd-4118-ad48-6b35386b1f70.dat' 'inputs/0.fasta' && ln -s '/tmp/tmpbo76svqj/files/8/b/2/dataset_8b22b944-4cc7-4eb3-a189-dd9fa1a45b55.dat' 'inputs/1.fasta' && ln -s '/tmp/tmpbo76svqj/files/3/7/8/dataset_378a00a2-628e-4f8e-8442-caa4ce82c2c1.dat' 'inputs/10.fasta' && ln -s '/tmp/tmpbo76svqj/files/a/9/d/dataset_a9dd4c2e-fa32-4d19-99ec-3c52594cf2aa.dat' 'inputs/11.fasta' && ln -s '/tmp/tmpbo76svqj/files/e/8/2/dataset_e82c4453-829e-4325-a891-55795f5af147.dat' 'inputs/2.fasta' && ln -s '/tmp/tmpbo76svqj/files/5/9/e/dataset_59e89dc2-14d5-4040-9baa-de0cbf4cb4eb.dat' 'inputs/3.fasta' && ln -s '/tmp/tmpbo76svqj/files/5/b/b/dataset_5bb85b3a-15e5-4fbb-b799-8d67fc97afb2.dat' 'inputs/4.fasta' && ln -s '/tmp/tmpbo76svqj/files/6/a/c/dataset_6acaaa55-a18b-40d6-80b4-0eaa570d0ec5.dat' 'inputs/5.fasta' && ln -s '/tmp/tmpbo76svqj/files/1/5/3/dataset_1539f41f-3b55-4f77-9662-c513f66713cd.dat' 'inputs/6.fasta' && ln -s '/tmp/tmpbo76svqj/files/3/0/7/dataset_30717f78-30ec-4649-b604-a0e7b8b8060f.dat' 'inputs/7.fasta' && ln -s '/tmp/tmpbo76svqj/files/f/5/9/dataset_f59a1415-436a-4755-b760-2681aa006850.dat' 'inputs/8.fasta' && ln -s '/tmp/tmpbo76svqj/files/5/9/e/dataset_59ecc137-a237-45a6-bef7-5f1b5b5836fc.dat' 'inputs/9.fasta' && Fasta_to_Contig2Bin.sh --extension fasta --input_folder 'inputs' > '/tmp/tmpbo76svqj/job_working_directory/000/24/outputs/dataset_7c8742dd-7601-4127-9fc0-13da4a0cc8a1.dat'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              chromInfo "/tmp/tmpbo76svqj/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              inputs {"values": [{"id": 45, "src": "dce"}]}
      • Step 42: __BUILD_LIST__:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              datasets [{"__index__": 0, "id_cond": {"__current_case__": 0, "id_select": "idx"}, "input": {"values": [{"id": 57, "src": "hda"}]}}, {"__index__": 1, "id_cond": {"__current_case__": 0, "id_select": "idx"}, "input": {"values": [{"id": 44, "src": "hda"}]}}, {"__index__": 2, "id_cond": {"__current_case__": 0, "id_select": "idx"}, "input": {"values": [{"id": 64, "src": "hda"}]}}, {"__index__": 3, "id_cond": {"__current_case__": 0, "id_select": "idx"}, "input": {"values": [{"id": 59, "src": "hda"}]}}]
      • Step 43: toolshed.g2.bx.psu.edu/repos/iuc/binette/binette/1.0.5+galaxy1:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • mkdir -p 'input' 'output' &&  ln -s '/tmp/tmpbo76svqj/files/7/c/8/dataset_7c8742dd-7601-4127-9fc0-13da4a0cc8a1.dat' 'input/bin_table_0.tsv' && ln -s '/tmp/tmpbo76svqj/files/8/9/b/dataset_89b52c48-d763-47ea-a97b-c7c864e8b314.dat' 'input/bin_table_1.tsv' && ln -s '/tmp/tmpbo76svqj/files/c/d/f/dataset_cdf66de2-f37b-4ba4-bc38-e4ad7e7282f4.dat' 'input/bin_table_2.tsv' && ln -s '/tmp/tmpbo76svqj/files/8/9/0/dataset_8908e207-800e-45aa-9429-b8e285786422.dat' 'input/bin_table_3.tsv' &&  ln -s '/tmp/tmpbo76svqj/files/1/8/b/dataset_18b41470-af6e-41fc-8ca4-51e94f07d355.dat' 'input_contigs.fasta' &&   binette -b input/*.tsv -c 'input_contigs.fasta' -m 1 -t "${GALAXY_SLOTS:-1}" -o 'output/' -w 2 --checkm2_db '/cvmfs/data.galaxyproject.org/byhand/checkm2/1.0.2/uniref100.KO.1.dmnd'

            Exit Code:

            • 0

            Standard Error:

            • 100%|██████████| 59/59 [00:00<00:00, 4888.47it/s]
              100%|██████████| 59/59 [00:00<00:00, 409708.50contig/s]
              100%|██████████| 59/59 [00:00<00:00, 672456.35contig/s]
              100%|██████████| 16/16 [00:05<00:00,  2.85bin/s]
              100%|██████████| 17/17 [00:05<00:00,  3.36bin/s]
              (tqdm progress bars condensed to their final states above; intermediate 0% lines removed)
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              chromInfo "/tmp/tmpbo76svqj/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              contamination_weight "2"
              database_type {"__current_case__": 1, "datamanager": "1.0.2", "is_select": "cached"}
              dbkey "?"
              min_completeness "1"
              proteins None
      • Step 44: Pool Bins from all samples:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              input {"values": [{"id": 47, "src": "hdca"}]}
              join_identifier "_"
      • Step 45: toolshed.g2.bx.psu.edu/repos/iuc/checkm2/checkm2/1.0.2+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • mkdir input_dir && ln -s '/tmp/tmpbo76svqj/files/7/1/1/dataset_711ad373-2a0f-4824-b1f6-8b836d8b0ac6.dat' 'input_dir/50contig_reads_bin_1.dat' && ln -s '/tmp/tmpbo76svqj/files/7/c/0/dataset_7c09ebaa-2fde-4b3e-ad76-72b4c7650533.dat' 'input_dir/50contig_reads_bin_11.dat' && ln -s '/tmp/tmpbo76svqj/files/1/d/2/dataset_1d21d281-6b34-4bf0-9e49-9f773092df80.dat' 'input_dir/50contig_reads_bin_6.dat' && ln -s '/tmp/tmpbo76svqj/files/1/8/3/dataset_1834363a-74a8-4b8d-b14c-a03efeb1450e.dat' 'input_dir/50contig_reads_bin_60.dat' && checkm2 predict --input input_dir   -x .dat --threads "${GALAXY_SLOTS:-1}" --database_path '/cvmfs/data.galaxyproject.org/byhand/checkm2/1.0.2/uniref100.KO.1.dmnd' --output-directory output

            Exit Code:

            • 0

            Standard Error:

            • [04/08/2025 02:56:58 PM] INFO: Running CheckM2 version 1.0.2
              [04/08/2025 02:56:58 PM] INFO: Custom database path provided for predict run. Checking database at /cvmfs/data.galaxyproject.org/byhand/checkm2/1.0.2/uniref100.KO.1.dmnd...
              [04/08/2025 02:57:04 PM] INFO: Running quality prediction workflow with 1 threads.
              [04/08/2025 02:57:05 PM] INFO: Calling genes in 4 bins with 1 threads:
              [04/08/2025 02:57:07 PM] INFO: Calculating metadata for 4 bins with 1 threads:
              [04/08/2025 02:57:07 PM] INFO: Annotating input genomes with DIAMOND using 1 threads
              [04/08/2025 02:59:54 PM] INFO: Processing DIAMOND output
              [04/08/2025 02:59:54 PM] INFO: Predicting completeness and contamination using ML models.
              [04/08/2025 02:59:59 PM] INFO: Parsing all results and constructing final output table.
              [04/08/2025 02:59:59 PM] INFO: CheckM2 finished successfully.
              

            Standard Output:

            •     Finished processing 1 of 4 (25.00%) bins.
                  Finished processing 2 of 4 (50.00%) bins.
                  Finished processing 3 of 4 (75.00%) bins.
                  Finished processing 4 of 4 (100.00%) bins.
              
                  Finished processing 1 of 4 (25.00%) bin metadata.
                  Finished processing 2 of 4 (50.00%) bin metadata.
                  Finished processing 3 of 4 (75.00%) bin metadata.
                  Finished processing 4 of 4 (100.00%) bin metadata.
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              chromInfo "/tmp/tmpbo76svqj/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              database "1.0.2"
              dbkey "?"
              genes false
              model ""
              ttable None
      • Step 46: toolshed.g2.bx.psu.edu/repos/bgruening/text_processing/tp_awk_tool/9.5+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • env -i $(which awk) --sandbox -v FS='	' -v OFS='	' --re-interval -f '/tmp/tmpbo76svqj/job_working_directory/000/31/configs/tmp_4g2jo17' '/tmp/tmpbo76svqj/files/9/0/1/dataset_901f4efc-723b-4081-b4fc-f6974072f359.dat' > '/tmp/tmpbo76svqj/job_working_directory/000/31/outputs/dataset_2426e53f-7267-40ed-8e27-b7c149bb51d9.dat'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              chromInfo "/tmp/tmpbo76svqj/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              code "BEGIN {OFS=\"\\t\"; print \"genome\\tcompleteness\\tcontamination\"} \nNR > 1 {\n if ($1 !~ /\\.fasta$/) \n $1 = $1 \".fasta\"\n print $1, $2, $3\n}"
              dbkey "?"
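The awk program in the `code` parameter above reshapes CheckM2's quality report into the `genome`/`completeness`/`contamination` table that dRep's `--genomeInfo` option expects, appending `.fasta` to genome names that lack it. Reproduced standalone (the input file name and values are illustrative):

```shell
# A CheckM2-style quality report: Name, Completeness, Contamination.
printf 'Name\tCompleteness\tContamination\n50contig_reads_bin_1\t98.2\t0.5\n' > quality_report.tsv
# The exact program from the workflow step, unescaped.
awk -F '\t' '
BEGIN { OFS = "\t"; print "genome\tcompleteness\tcontamination" }
NR > 1 {
    if ($1 !~ /\.fasta$/)
        $1 = $1 ".fasta"
    print $1, $2, $3
}' quality_report.tsv
```

Note the `if` has no braces, so only the suffix assignment is conditional; the `print` runs for every data row.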
      • Step 47: toolshed.g2.bx.psu.edu/repos/iuc/drep_dereplicate/drep_dereplicate/3.5.0+galaxy1:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • ln -s '/tmp/tmpbo76svqj/files/7/1/1/dataset_711ad373-2a0f-4824-b1f6-8b836d8b0ac6.dat' '50contig_reads_bin_1.fasta' &&  ln -s '/tmp/tmpbo76svqj/files/7/c/0/dataset_7c09ebaa-2fde-4b3e-ad76-72b4c7650533.dat' '50contig_reads_bin_11.fasta' &&  ln -s '/tmp/tmpbo76svqj/files/1/d/2/dataset_1d21d281-6b34-4bf0-9e49-9f773092df80.dat' '50contig_reads_bin_6.fasta' &&  ln -s '/tmp/tmpbo76svqj/files/1/8/3/dataset_1834363a-74a8-4b8d-b14c-a03efeb1450e.dat' '50contig_reads_bin_60.fasta' &&   dRep dereplicate outdir  -g '50contig_reads_bin_1.fasta' '50contig_reads_bin_11.fasta' '50contig_reads_bin_6.fasta' '50contig_reads_bin_60.fasta'   --length 100 --completeness 1 --contamination 25   --genomeInfo '/tmp/tmpbo76svqj/files/8/1/0/dataset_8109dd45-b2d2-4dcc-8b4a-abf7a892c6fb.dat'    --MASH_sketch '1000' --P_ani 0.9  --primary_chunksize 5000   --S_algorithm 'ANImf'  --n_PRESET 'normal'   --coverage_method 'larger'  --S_ani 0.95 --cov_thresh 0.1  --clusterAlg 'average'    --completeness_weight 1.0 --contamination_weight 5.0 --strain_heterogeneity_weight 1.0 --N50_weight 0.5 --size_weight 0.0 --centrality_weight 1.0    --warn_dist 0.25 --warn_sim 0.98 --warn_aln 0.25  --debug || (rc=$?; ls -ltr `find outdir -type f`; cat outdir/data/checkM/checkM_outdir/checkm.log; cat outdir/log/logger.log; exit $rc)

            Exit Code:

            • 0

            Standard Error:

            • ***************************************************
                  ..:: dRep dereplicate Step 1. Filter ::..
              ***************************************************
                  
              Will filter the genome list
              4 genomes were input to dRep
              Calculating genome info of genomes
              100.00% of genomes passed length filtering
              100.00% of genomes passed checkM filtering
              ***************************************************
                  ..:: dRep dereplicate Step 2. Cluster ::..
              ***************************************************
                  
              Running primary clustering
              Running pair-wise MASH clustering
              4 primary clusters made
              Running secondary clustering
              Running 4 ANImf comparisons- should take ~ 0.3 min
              Step 4. Return output
              ***************************************************
                  ..:: dRep dereplicate Step 3. Choose ::..
              ***************************************************
                  
              Loading work directory
              ***************************************************
                  ..:: dRep dereplicate Step 4. Evaluate ::..
              ***************************************************
                  
              will produce Widb (winner information db)
              Winner database saved to /tmp/tmpbo76svqj/job_working_directory/000/33/working/outdirdata_tables/Widb.csv
              ***************************************************
                  ..:: dRep dereplicate Step 5. Analyze ::..
              ***************************************************
                  
              making plots 1, 2, 3, 4, 5, 6
              Plotting primary dendrogram
              Plotting secondary dendrograms
              Plotting MDS plot
              Plotting scatterplots
              Plotting bin scorring plot
              Plotting winning genomes plot...
              
              $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
              
                  ..:: dRep dereplicate finished ::..
              
              Dereplicated genomes................. /tmp/tmpbo76svqj/job_working_directory/000/33/working/outdir/dereplicated_genomes/
              Dereplicated genomes information..... /tmp/tmpbo76svqj/job_working_directory/000/33/working/outdir/data_tables/Widb.csv
              Figures.............................. /tmp/tmpbo76svqj/job_working_directory/000/33/working/outdir/figures/
              Warnings............................. /tmp/tmpbo76svqj/job_working_directory/000/33/working/outdir/log/warnings.txt
              
              $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
                  
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              chromInfo "/tmp/tmpbo76svqj/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              comp_clust {"clusterAlg": "average", "run_tertiary_clustering": false, "steps": {"MASH_sketch": "1000", "P_ani": "0.9", "S_ani": "0.95", "__current_case__": 0, "clustering": {"S_algorithm": "ANImf", "__current_case__": 1, "coverage_method": "larger", "n_PRESET": "normal"}, "cov_thresh": "0.1", "multiround_primary_clustering": false, "primary_chunksize": "5000", "select": "default"}}
              dbkey "?"
              filter {"completeness": "1", "contamination": "25", "length": "100"}
              quality {"__current_case__": 1, "genomeInfo": {"values": [{"id": 84, "src": "hda"}]}, "source": "genomeInfo"}
              scoring {"N50_weight": "0.5", "centrality_weight": "1.0", "completeness_weight": "1.0", "contamination_weight": "5.0", "extra_weight_table": null, "size_weight": "0.0", "strain_heterogeneity_weight": "1.0"}
              select_outputs ["log", "warnings", "Primary_clustering_dendrogram", "Clustering_scatterplots", "Widb", "Chdb"]
              warning {"warn_aln": "0.25", "warn_dist": "0.25", "warn_sim": "0.98"}
      • Step 48: toolshed.g2.bx.psu.edu/repos/iuc/gtdbtk_classify_wf/gtdbtk_classify_wf/2.4.0+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is error

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              advanced {"force": false, "min_af": "0.65", "min_perc_aa": "10", "output_process_log": false}
              chromInfo "/tmp/tmpbo76svqj/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              gtdbtk_db "full_database_release_220_downloaded_2024-10-28"
      • Step 49: toolshed.g2.bx.psu.edu/repos/iuc/checkm2/checkm2/1.0.2+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is running

            Command Line:

            • mkdir input_dir && ln -s '/tmp/tmpbo76svqj/files/6/3/2/dataset_63279e7d-d10f-4461-95a5-ea0dbb452657.dat' 'input_dir/50contig_reads_bin_1.fasta.dat' && ln -s '/tmp/tmpbo76svqj/files/c/7/6/dataset_c7676744-cf64-46e2-821f-54435af89a67.dat' 'input_dir/50contig_reads_bin_11.fasta.dat' && ln -s '/tmp/tmpbo76svqj/files/2/6/7/dataset_267984a9-fce0-408b-a392-19d53e11442a.dat' 'input_dir/50contig_reads_bin_6.fasta.dat' && ln -s '/tmp/tmpbo76svqj/files/d/9/c/dataset_d9c21b93-428b-4b57-9a9f-35d636ac3c4e.dat' 'input_dir/50contig_reads_bin_60.fasta.dat' && checkm2 predict --input input_dir   -x .dat --threads "${GALAXY_SLOTS:-1}" --database_path '/cvmfs/data.galaxyproject.org/byhand/checkm2/1.0.2/uniref100.KO.1.dmnd' --output-directory output

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              chromInfo "/tmp/tmpbo76svqj/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              database "1.0.2"
              dbkey "?"
              genes false
              model ""
              ttable None
      • Step 50: toolshed.g2.bx.psu.edu/repos/iuc/checkm_lineage_wf/checkm_lineage_wf/1.2.3+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is queued

            Command Line:

            • mkdir 'bins' && ln -s '/tmp/tmpbo76svqj/files/6/3/2/dataset_63279e7d-d10f-4461-95a5-ea0dbb452657.dat' 'bins/50contig_reads_bin_1.fasta.fasta' && ln -s '/tmp/tmpbo76svqj/files/c/7/6/dataset_c7676744-cf64-46e2-821f-54435af89a67.dat' 'bins/50contig_reads_bin_11.fasta.fasta' && ln -s '/tmp/tmpbo76svqj/files/2/6/7/dataset_267984a9-fce0-408b-a392-19d53e11442a.dat' 'bins/50contig_reads_bin_6.fasta.fasta' && ln -s '/tmp/tmpbo76svqj/files/d/9/c/dataset_d9c21b93-428b-4b57-9a9f-35d636ac3c4e.dat' 'bins/50contig_reads_bin_60.fasta.fasta' &&   checkm lineage_wf 'bins' 'output'     --unique '10' --multi '10'      --aai_strain 0.9  --e_value 1e-10 --length 0.7 --file '/tmp/tmpbo76svqj/job_working_directory/000/36/outputs/dataset_02370dc3-42c6-43f1-abc9-c2303db37641.dat' --tab_table --extension 'fasta' --threads ${GALAXY_SLOTS:-1} --pplacer_threads ${GALAXY_SLOTS:-1}

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              bins {"__current_case__": 0, "bins_coll": {"values": [{"id": 53, "src": "hdca"}]}, "select": "collection"}
              chromInfo "/tmp/tmpbo76svqj/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              extra_outputs None
              lineage_set {"force_domain": false, "multi": "10", "no_refinement": false, "unique": "10"}
              qa {"aai_strain": "0.9", "e_value": "1e-10", "ignore_thresholds": false, "individual_markers": false, "length": "0.7", "skip_adj_correction": false, "skip_pseudogene_correction": false}
              tree_analyze {"ali": false, "genes": false, "nt": false, "reduced_tree": false}
      • Step 6: Trimmed grouped paired reads:

        • step_state: scheduled
      • Step 51: toolshed.g2.bx.psu.edu/repos/iuc/coverm_genome/coverm_genome/0.7.0+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is queued

            Command Line:

            • mkdir 'single/' && mkdir 'fw/' && mkdir 'rv/' && mkdir 'interl/' && mkdir 'ref/' && mkdir 'bam/' &&   ln -s '/tmp/tmpbo76svqj/files/1/5/0/dataset_150653ab-6cb3-42fd-92b7-e56e727f1f0b.dat' 'fw/50contig_reads' && ln -s '/tmp/tmpbo76svqj/files/7/a/0/dataset_7a0b5be5-a56f-4607-a22a-57074fac905d.dat' 'rv/50contig_reads' &&   echo "GENOME_FOR_READS mapped.mode.genome.genomic.source=history" && echo "GENOME_FOR_READS mapped.mode.genome.genomic.genome_fasta_files=/tmp/tmpbo76svqj/files/6/3/2/dataset_63279e7d-d10f-4461-95a5-ea0dbb452657.dat,/tmp/tmpbo76svqj/files/c/7/6/dataset_c7676744-cf64-46e2-821f-54435af89a67.dat,/tmp/tmpbo76svqj/files/2/6/7/dataset_267984a9-fce0-408b-a392-19d53e11442a.dat,/tmp/tmpbo76svqj/files/d/9/c/dataset_d9c21b93-428b-4b57-9a9f-35d636ac3c4e.dat" && ln -s '/tmp/tmpbo76svqj/files/6/3/2/dataset_63279e7d-d10f-4461-95a5-ea0dbb452657.dat' '50contig_reads_bin_1.fasta' && ln -s '/tmp/tmpbo76svqj/files/c/7/6/dataset_c7676744-cf64-46e2-821f-54435af89a67.dat' '50contig_reads_bin_11.fasta' && ln -s '/tmp/tmpbo76svqj/files/2/6/7/dataset_267984a9-fce0-408b-a392-19d53e11442a.dat' '50contig_reads_bin_6.fasta' && ln -s '/tmp/tmpbo76svqj/files/d/9/c/dataset_d9c21b93-428b-4b57-9a9f-35d636ac3c4e.dat' '50contig_reads_bin_60.fasta' &&   mkdir 'representative-fasta/' && coverm genome -1 'fw/50contig_reads' -2 'rv/50contig_reads'  --mapper 'minimap2-sr' --genome-fasta-files '50contig_reads_bin_1.fasta' '50contig_reads_bin_11.fasta' '50contig_reads_bin_6.fasta' '50contig_reads_bin_60.fasta'   --min-read-aligned-length 0 --min-read-percent-identity 0.0 --min-read-aligned-percent 0.0   --methods 'relative_abundance' --min-covered-fraction 10 --contig-end-exclusion 75 --trim-min 5 --trim-max 95    --output-format 'dense' --output-file '/tmp/tmpbo76svqj/job_working_directory/000/37/outputs/dataset_48dde814-4d2f-44b6-9954-33a452dc47cd.dat'  --threads ${GALAXY_SLOTS:-1}

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              alignment {"exclude_supplementary": false, "min_read_aligned_length": "0", "min_read_aligned_percent": "0.0", "min_read_percent_identity": "0.0", "proper_pairs_only": {"__current_case__": 1, "proper_pairs_only": ""}}
              chromInfo "/tmp/tmpbo76svqj/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              cov {"contig_end_exclusion": "75", "methods": ["relative_abundance"], "min_covered_fraction": "10", "trim_max": "95", "trim_min": "5"}
              dbkey "?"
              derep {"checkm_tab_table": null, "dereplicate": {"__current_case__": 1, "dereplicate": ""}, "genome_info": null, "max_contamination": null, "min_completeness": null}
              exclude_genomes_from_deshard false
              mapped {"__current_case__": 1, "mapped": "not-mapped", "mapper": "minimap2-sr", "mode": {"__current_case__": 0, "genome": {"__current_case__": 1, "genomic": {"__current_case__": 0, "genome_fasta_files": {"values": [{"id": 53, "src": "hdca"}]}, "source": "history"}, "ref_or_genome": "genomic"}, "mode": "individual", "read_type": {"__current_case__": 2, "paired_reads": {"values": [{"id": 4, "src": "dce"}]}, "type": "paired_collection"}}}
              out {"dereplication_output_cluster_definition": false, "dereplication_output_representative_fasta_directory_copy": false, "no_zeros": false, "output_format": "dense"}
      • Step 52: toolshed.g2.bx.psu.edu/repos/iuc/quast/quast/5.3.0+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is running

            Command Line:

            • echo 50contig_reads_bin_1_fasta &&    metaquast  --labels '50contig_reads_bin_1_fasta' -o 'outputdir'  --max-ref-num 0  --min-identity 90.0 --min-contig 500        --min-alignment 65 --ambiguity-usage 'one' --ambiguity-score 0.99   --local-mis-size 200   --contig-thresholds '0,1000'  --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500   --x-for-Nx 90  '/tmp/tmpbo76svqj/files/6/3/2/dataset_63279e7d-d10f-4461-95a5-ea0dbb452657.dat' --threads ${GALAXY_SLOTS:-1}  && if [[ -f "outputdir/report.tsv" ]]; then mkdir -p "outputdir/combined_reference/" && cp "outputdir/report.tsv" "outputdir/combined_reference/report.tsv"; fi && if [[ -f "outputdir/report.html" ]]; then mkdir -p "outputdir/combined_reference/" && cp outputdir/*.html "outputdir/combined_reference/"; fi && mkdir -p '/tmp/tmpbo76svqj/job_working_directory/000/38/outputs/dataset_455f6b7c-0828-47ec-8721-e241bb99c410_files' && cp outputdir/combined_reference/*.html '/tmp/tmpbo76svqj/job_working_directory/000/38/outputs/dataset_455f6b7c-0828-47ec-8721-e241bb99c410_files' && if [[ -d "outputdir/icarus_viewers" ]]; then cp -R outputdir/icarus_viewers 'outputdir/combined_reference/'; fi && if [[ -d "outputdir/combined_reference/icarus_viewers" ]]; then cp -R outputdir/combined_reference/icarus_viewers '/tmp/tmpbo76svqj/job_working_directory/000/38/outputs/dataset_455f6b7c-0828-47ec-8721-e241bb99c410_files'; fi && if [[ -d "outputdir/krona_charts/" ]]; then mkdir -p 'None' && cp outputdir/krona_charts/*.html 'None'; fi

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              advanced {"contig_thresholds": "0,1000", "extensive_mis_size": "1000", "fragmented_max_indent": null, "report_all_metrics": false, "scaffold_gap_max_size": "1000", "skip_unaligned_mis_contigs": true, "strict_NA": false, "unaligned_part_size": "500", "x_for_Nx": "90"}
              alignments {"ambiguity_score": "0.99", "ambiguity_usage": "one", "fragmented": false, "local_mis_size": "200", "min_alignment": "65", "upper_bound_assembly": false, "upper_bound_min_con": null, "use_all_alignments": false}
              assembly {"__current_case__": 1, "min_identity": "90.0", "ref": {"__current_case__": 0, "max_ref_num": "0", "origin": "silva"}, "reuse_combined_alignments": false, "type": "metagenome"}
              chromInfo "/tmp/tmpbo76svqj/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              genes {"conserved_genes_finding": false, "gene_finding": {"__current_case__": 0, "tool": "none"}, "rna_finding": false}
              large false
              min_contig "500"
              mode {"__current_case__": 0, "in": {"__current_case__": 1, "custom": "false", "inputs": {"values": [{"id": 93, "src": "dce"}]}}, "mode": "individual", "reads": {"__current_case__": 0, "reads_option": "disabled"}}
              output_files ["html", "tabular", "summary", "krona"]
              split_scaffolds false
          • Job 2:

            • Job state is queued

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              advanced {"contig_thresholds": "0,1000", "extensive_mis_size": "1000", "fragmented_max_indent": null, "report_all_metrics": false, "scaffold_gap_max_size": "1000", "skip_unaligned_mis_contigs": true, "strict_NA": false, "unaligned_part_size": "500", "x_for_Nx": "90"}
              alignments {"ambiguity_score": "0.99", "ambiguity_usage": "one", "fragmented": false, "local_mis_size": "200", "min_alignment": "65", "upper_bound_assembly": false, "upper_bound_min_con": null, "use_all_alignments": false}
              assembly {"__current_case__": 1, "min_identity": "90.0", "ref": {"__current_case__": 0, "max_ref_num": "0", "origin": "silva"}, "reuse_combined_alignments": false, "type": "metagenome"}
              chromInfo "/tmp/tmpbo76svqj/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              genes {"conserved_genes_finding": false, "gene_finding": {"__current_case__": 0, "tool": "none"}, "rna_finding": false}
              large false
              min_contig "500"
              mode {"__current_case__": 0, "in": {"__current_case__": 1, "custom": "false", "inputs": {"values": [{"id": 94, "src": "dce"}]}}, "mode": "individual", "reads": {"__current_case__": 0, "reads_option": "disabled"}}
              output_files ["html", "tabular", "summary", "krona"]
              split_scaffolds false
          • Job 3:

            • Job state is queued

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              advanced {"contig_thresholds": "0,1000", "extensive_mis_size": "1000", "fragmented_max_indent": null, "report_all_metrics": false, "scaffold_gap_max_size": "1000", "skip_unaligned_mis_contigs": true, "strict_NA": false, "unaligned_part_size": "500", "x_for_Nx": "90"}
              alignments {"ambiguity_score": "0.99", "ambiguity_usage": "one", "fragmented": false, "local_mis_size": "200", "min_alignment": "65", "upper_bound_assembly": false, "upper_bound_min_con": null, "use_all_alignments": false}
              assembly {"__current_case__": 1, "min_identity": "90.0", "ref": {"__current_case__": 0, "max_ref_num": "0", "origin": "silva"}, "reuse_combined_alignments": false, "type": "metagenome"}
              chromInfo "/tmp/tmpbo76svqj/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              genes {"conserved_genes_finding": false, "gene_finding": {"__current_case__": 0, "tool": "none"}, "rna_finding": false}
              large false
              min_contig "500"
              mode {"__current_case__": 0, "in": {"__current_case__": 1, "custom": "false", "inputs": {"values": [{"id": 95, "src": "dce"}]}}, "mode": "individual", "reads": {"__current_case__": 0, "reads_option": "disabled"}}
              output_files ["html", "tabular", "summary", "krona"]
              split_scaffolds false
          • Job 4:

            • Job state is queued

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              advanced {"contig_thresholds": "0,1000", "extensive_mis_size": "1000", "fragmented_max_indent": null, "report_all_metrics": false, "scaffold_gap_max_size": "1000", "skip_unaligned_mis_contigs": true, "strict_NA": false, "unaligned_part_size": "500", "x_for_Nx": "90"}
              alignments {"ambiguity_score": "0.99", "ambiguity_usage": "one", "fragmented": false, "local_mis_size": "200", "min_alignment": "65", "upper_bound_assembly": false, "upper_bound_min_con": null, "use_all_alignments": false}
              assembly {"__current_case__": 1, "min_identity": "90.0", "ref": {"__current_case__": 0, "max_ref_num": "0", "origin": "silva"}, "reuse_combined_alignments": false, "type": "metagenome"}
              chromInfo "/tmp/tmpbo76svqj/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              genes {"conserved_genes_finding": false, "gene_finding": {"__current_case__": 0, "tool": "none"}, "rna_finding": false}
              large false
              min_contig "500"
              mode {"__current_case__": 0, "in": {"__current_case__": 1, "custom": "false", "inputs": {"values": [{"id": 96, "src": "dce"}]}}, "mode": "individual", "reads": {"__current_case__": 0, "reads_option": "disabled"}}
              output_files ["html", "tabular", "summary", "krona"]
              split_scaffolds false
      • Step 53: toolshed.g2.bx.psu.edu/repos/iuc/bakta/bakta/1.9.4+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is queued

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              annotation {"complete": false, "compliant": false, "keep_contig_headers": false, "meta": false, "prodigal": null, "proteins": null, "regions": null, "replicons": null, "translation_table": "11"}
              chromInfo "/tmp/tmpbo76svqj/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              input_option {"amrfinder_db_select": "amrfinderplus_V3.12_2024-05-02.2", "bakta_db_select": "V5.1_2024-01-19", "input_file": {"values": [{"id": 93, "src": "dce"}]}, "min_contig_length": null}
              organism {"genus": null, "plasmid": null, "species": null, "strain": null}
              output_files {"output_selection": ["file_tsv", "file_gff3", "file_ffn", "file_plot", "sum_txt"]}
              workflow {"skip_analysis": null}
          • Job 2:

            • Job state is queued

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              annotation {"complete": false, "compliant": false, "keep_contig_headers": false, "meta": false, "prodigal": null, "proteins": null, "regions": null, "replicons": null, "translation_table": "11"}
              chromInfo "/tmp/tmpbo76svqj/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              input_option {"amrfinder_db_select": "amrfinderplus_V3.12_2024-05-02.2", "bakta_db_select": "V5.1_2024-01-19", "input_file": {"values": [{"id": 94, "src": "dce"}]}, "min_contig_length": null}
              organism {"genus": null, "plasmid": null, "species": null, "strain": null}
              output_files {"output_selection": ["file_tsv", "file_gff3", "file_ffn", "file_plot", "sum_txt"]}
              workflow {"skip_analysis": null}
          • Job 3:

            • Job state is queued

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              annotation {"complete": false, "compliant": false, "keep_contig_headers": false, "meta": false, "prodigal": null, "proteins": null, "regions": null, "replicons": null, "translation_table": "11"}
              chromInfo "/tmp/tmpbo76svqj/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              input_option {"amrfinder_db_select": "amrfinderplus_V3.12_2024-05-02.2", "bakta_db_select": "V5.1_2024-01-19", "input_file": {"values": [{"id": 95, "src": "dce"}]}, "min_contig_length": null}
              organism {"genus": null, "plasmid": null, "species": null, "strain": null}
              output_files {"output_selection": ["file_tsv", "file_gff3", "file_ffn", "file_plot", "sum_txt"]}
              workflow {"skip_analysis": null}
          • Job 4:

            • Job state is queued

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              annotation {"complete": false, "compliant": false, "keep_contig_headers": false, "meta": false, "prodigal": null, "proteins": null, "regions": null, "replicons": null, "translation_table": "11"}
              chromInfo "/tmp/tmpbo76svqj/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              input_option {"amrfinder_db_select": "amrfinderplus_V3.12_2024-05-02.2", "bakta_db_select": "V5.1_2024-01-19", "input_file": {"values": [{"id": 96, "src": "dce"}]}, "min_contig_length": null}
              organism {"genus": null, "plasmid": null, "species": null, "strain": null}
              output_files {"output_selection": ["file_tsv", "file_gff3", "file_ffn", "file_plot", "sum_txt"]}
              workflow {"skip_analysis": null}
      • Step 54: toolshed.g2.bx.psu.edu/repos/iuc/collection_column_join/collection_column_join/0.0.3:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is new

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "tabular"
              __workflow_invocation_uuid__ "639edb8a148211f0aeac6045bda70f2c"
              chromInfo "/tmp/tmpbo76svqj/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              fill_char "."
              has_header "1"
              identifier_column "1"
              include_outputs None
              old_col_in_header true
      • Step 55: Unlabelled step:

        • step_state: new
      • Step 7: Trimmed sample paired reads:

        • step_state: scheduled
      • Step 8: Contamination weight (Binette):

        • step_state: scheduled
      • Step 9: CheckM2 Database for Binette:

        • step_state: scheduled
      • Step 10: Minimum MAG completeness percentage:

        • step_state: scheduled
    • Other invocation details
      • error_message

        • Final state of invocation 1ed2e242fb805563 is [failed]. Failed to run workflow, at least one job is in [error] state.
      • history_id

        • 1ed2e242fb805563
      • history_state

        • error
      • invocation_id

        • 1ed2e242fb805563
      • invocation_state

        • failed
      • messages

        • [{'dependent_workflow_step_id': 47, 'hdca_id': 'f2b79f2e7fefe971', 'reason': 'collection_failed', 'workflow_step_id': 54}]
      • workflow_id

        • 1ed2e242fb805563


Test Results (powered by Planemo)

Test Summary

Test State Count
Total 1
Passed 0
Error 0
Failure 1
Skipped 0
Failed Tests
  • ❌ MAGs-generation.ga_0

    Problems:

    • Output with path /tmp/tmpfh4xfkb2/Quast on data 12, data 11, and data 20 HTML report for combined reference genome__a81ec982-15db-4182-89f1-1c9dd6c0dfdd different than expected
      Expected file size of 363000+-5000 found 372079
      
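    As a side note on the size-drift failure above: the tolerance lives in the workflow test file and could simply be widened there. A minimal sketch, assuming the planemo workflow-test YAML format; the output label `quast_html_report` and the widened `delta` are placeholders, not values taken from this PR:

    ```yaml
    # Hypothetical fragment of MAGs-generation-tests.yml.
    # The HTML report size varies between runs, so a wider delta
    # (or a content-based assertion) is more robust than +-5000.
    - doc: MAGs generation workflow test
      outputs:
        quast_html_report:        # placeholder output label
          asserts:
            has_size:
              value: 363000
              delta: 20000        # widened from 5000 to absorb report drift
    ```

    An assertion on stable report content (e.g. `has_text`) would likely be even less brittle than any size check.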

    Workflow invocation details

    • Invocation Messages

    • Steps
      • Step 1: Choose Assembler:

        • step_state: scheduled
      • Step 2: Custom Assemblies:

        • step_state: scheduled
      • Step 11: Maximum MAG contamination percentage:

        • step_state: scheduled
      • Step 12: Minimum MAG length:

        • step_state: scheduled
      • Step 13: CheckM2 Database (dRep step):

        • step_state: scheduled
      • Step 14: ANI threshold for dereplication:

        • step_state: scheduled
      • Step 15: CheckM2 Database (Bin step):

        • step_state: scheduled
      • Step 16: GTDB-tk Database:

        • step_state: scheduled
      • Step 17: Run GTDB-Tk on MAGs:

        • step_state: scheduled
      • Step 18: Bakta Database:

        • step_state: scheduled
      • Step 19: AMRFinderPlus Database for Bakta:

        • step_state: scheduled
      • Step 20: Run Bakta on MAGs:

        • step_state: scheduled
      • Step 3: Minimum length of contigs to output:

        • step_state: scheduled
      • Step 21: toolshed.g2.bx.psu.edu/repos/iuc/map_param_value/map_param_value/0.2.0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • cd ../; python _evaluate_expression_.py

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              chromInfo "/tmp/tmprt3848zx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              input_param_type {"__current_case__": 0, "input_param": "MEGAHIT", "mappings": [{"__index__": 0, "from": "MEGAHIT", "to": "True"}], "type": "text"}
              output_param_type "boolean"
              unmapped {"__current_case__": 2, "default_value": "False", "on_unmapped": "default"}
      • Step 22: toolshed.g2.bx.psu.edu/repos/iuc/map_param_value/map_param_value/0.2.0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • cd ../; python _evaluate_expression_.py

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              chromInfo "/tmp/tmprt3848zx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              input_param_type {"__current_case__": 0, "input_param": "MEGAHIT", "mappings": [{"__index__": 0, "from": "metaSPAdes", "to": "True"}], "type": "text"}
              output_param_type "boolean"
              unmapped {"__current_case__": 2, "default_value": "False", "on_unmapped": "default"}
      • Step 23: __UNZIP_COLLECTION__:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              input {"values": [{"id": 1, "src": "dce"}]}
      • Step 24: toolshed.g2.bx.psu.edu/repos/iuc/megahit/megahit/1.2.9+galaxy2:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • if [[ -n "$GALAXY_MEMORY_MB" ]]; then MEMORY="-m $((GALAXY_MEMORY_MB * 1024))"; fi;  megahit --num-cpu-threads ${GALAXY_SLOTS:-4}  -1 '/tmp/tmprt3848zx/files/2/8/2/dataset_28228683-09f8-4a16-a632-dc47b6787386.dat' -2 '/tmp/tmprt3848zx/files/3/a/1/dataset_3a16e83b-8f5d-4f72-9525-9611cef44b4e.dat' --min-count '2' --k-list '21,29,39,59,79,99,119,141'  --bubble-level '2' --merge-level '20,0.95' --prune-level '2' --prune-depth '2' --disconnect-ratio '0.1' --low-local-ratio '0.2' --cleaning-rounds '5'   --min-contig-len '200' $MEMORY

            Exit Code:

            • 0

            Standard Error:

            • 2025-04-10 08:34:46 - MEGAHIT v1.2.9
              2025-04-10 08:34:46 - Using megahit_core with POPCNT and BMI2 support
              2025-04-10 08:34:46 - Convert reads to binary library
              2025-04-10 08:34:46 - b'INFO  sequence/io/sequence_lib.cpp  :   75 - Lib 0 (/tmp/tmprt3848zx/files/2/8/2/dataset_28228683-09f8-4a16-a632-dc47b6787386.dat,/tmp/tmprt3848zx/files/3/a/1/dataset_3a16e83b-8f5d-4f72-9525-9611cef44b4e.dat): pe, 18924 reads, 150 max length'
              2025-04-10 08:34:46 - b'INFO  utils/utils.h                 :  152 - Real: 0.0554\tuser: 0.0492\tsys: 0.0069\tmaxrss: 20844'
              2025-04-10 08:34:46 - Start assembly. Number of CPU threads 1 
              2025-04-10 08:34:46 - k list: 21,29,39,59,79,99,119,141 
              2025-04-10 08:34:46 - Memory used: 15090090393
              2025-04-10 08:34:46 - Extract solid (k+1)-mers for k = 21 
              2025-04-10 08:34:47 - Build graph for k = 21 
              2025-04-10 08:34:47 - Assemble contigs from SdBG for k = 21
              2025-04-10 08:34:47 - Local assembly for k = 21
              2025-04-10 08:34:48 - Extract iterative edges from k = 21 to 29 
              2025-04-10 08:34:48 - Build graph for k = 29 
              2025-04-10 08:34:49 - Assemble contigs from SdBG for k = 29
              2025-04-10 08:34:49 - Local assembly for k = 29
              2025-04-10 08:34:50 - Extract iterative edges from k = 29 to 39 
              2025-04-10 08:34:50 - Build graph for k = 39 
              2025-04-10 08:34:50 - Assemble contigs from SdBG for k = 39
              2025-04-10 08:34:51 - Local assembly for k = 39
              2025-04-10 08:34:51 - Extract iterative edges from k = 39 to 59 
              2025-04-10 08:34:51 - Build graph for k = 59 
              2025-04-10 08:34:52 - Assemble contigs from SdBG for k = 59
              2025-04-10 08:34:52 - Local assembly for k = 59
              2025-04-10 08:34:52 - Extract iterative edges from k = 59 to 79 
              2025-04-10 08:34:52 - Build graph for k = 79 
              2025-04-10 08:34:53 - Assemble contigs from SdBG for k = 79
              2025-04-10 08:34:53 - Local assembly for k = 79
              2025-04-10 08:34:53 - Extract iterative edges from k = 79 to 99 
              2025-04-10 08:34:53 - Build graph for k = 99 
              2025-04-10 08:34:54 - Assemble contigs from SdBG for k = 99
              2025-04-10 08:34:54 - Local assembly for k = 99
              2025-04-10 08:34:54 - Extract iterative edges from k = 99 to 119 
              2025-04-10 08:34:54 - Build graph for k = 119 
              2025-04-10 08:34:54 - Assemble contigs from SdBG for k = 119
              2025-04-10 08:34:55 - Local assembly for k = 119
              2025-04-10 08:34:55 - Extract iterative edges from k = 119 to 141 
              2025-04-10 08:34:55 - Build graph for k = 141 
              2025-04-10 08:34:55 - Assemble contigs from SdBG for k = 141
              2025-04-10 08:34:55 - Merging to output final contigs 
              2025-04-10 08:34:55 - 63 contigs, total 266124 bp, min 354 bp, max 11001 bp, avg 4224 bp, N50 4877 bp
              2025-04-10 08:34:55 - ALL DONE. Time elapsed: 9.692088 seconds 
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "fastqsanger.gz"
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              advanced_section {"bubble_level": "2", "cleaning_rounds": "5", "disconnect_ratio": "0.1", "kmin1pass": false, "low_local_ratio": "0.2", "merge_level": "20,0.95", "nolocal": false, "nomercy": false, "prune_depth": "2", "prune_level": "2"}
              basic_section {"k_mer": {"__current_case__": 0, "k_list": "21,29,39,59,79,99,119,141", "k_mer_method": "klist_method"}, "min_count": "2"}
              chromInfo "/tmp/tmprt3848zx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              input_option {"__current_case__": 3, "batchmode": {"__current_case__": 0, "pair_input": {"values": [{"id": 1, "src": "dce"}]}, "processmode": "individual"}, "choice": "paired_collection"}
              output_section {"log_file": false, "min_contig_len": "200", "show_intermediate_contigs": false}
      • Step 25: toolshed.g2.bx.psu.edu/repos/nml/metaspades/metaspades/4.1.0+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is skipped

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              additional_reads {"__current_case__": 1, "selector": "false"}
              arf {"nanopore": null, "pacbio": null}
              chromInfo "/tmp/tmprt3848zx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              kmer_cond {"__current_case__": 0, "kmer_sel": "auto"}
              library_number "true"
              mode_sel None
              optional_output ["ag", "ags", "cn", "cs"]
              phred_offset "auto"
              singlePaired {"__current_case__": 1, "input": {"values": [{"id": 1, "src": "hdca"}]}, "orientation": "fr", "sPaired": "paired_collection", "type_paired": "pe"}
      • Step 26: toolshed.g2.bx.psu.edu/repos/iuc/pick_value/pick_value/0.2.0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • cd ../; python _evaluate_expression_.py

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              chromInfo "/tmp/tmprt3848zx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              style_cond {"__current_case__": 0, "pick_style": "first", "type_cond": {"__current_case__": 4, "param_type": "data", "pick_from": [{"__index__": 0, "value": {"values": [{"id": 12, "src": "hda"}]}}, {"__index__": 1, "value": {"values": [{"id": 9, "src": "dce"}]}}, {"__index__": 2, "value": null}]}}
      • Step 27: toolshed.g2.bx.psu.edu/repos/iuc/quast/quast/5.3.0+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • echo 50contig_reads &&   ln -s '/tmp/tmprt3848zx/files/2/8/2/dataset_28228683-09f8-4a16-a632-dc47b6787386.dat' 'pe1-50contig_reads.fastqsanger.gz' && ln -s '/tmp/tmprt3848zx/files/3/a/1/dataset_3a16e83b-8f5d-4f72-9525-9611cef44b4e.dat' 'pe2-50contig_reads.fastqsanger.gz' &&  metaquast  --pe1 'pe1-50contig_reads.fastqsanger.gz' --pe2 'pe2-50contig_reads.fastqsanger.gz' --labels '50contig_reads' -o 'outputdir'  --max-ref-num 0  --min-identity 90.0 --min-contig 500        --min-alignment 65 --ambiguity-usage 'one' --ambiguity-score 0.99   --local-mis-size 200   --contig-thresholds '0,1000'  --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500   --x-for-Nx 90  '/tmp/tmprt3848zx/files/f/5/6/dataset_f56caa53-2f68-4615-b2bf-37274711c329.dat' --threads ${GALAXY_SLOTS:-1}  && if [[ -f "outputdir/report.tsv" ]]; then mkdir -p "outputdir/combined_reference/" && cp "outputdir/report.tsv" "outputdir/combined_reference/report.tsv"; fi && if [[ -f "outputdir/report.html" ]]; then mkdir -p "outputdir/combined_reference/" && cp outputdir/*.html "outputdir/combined_reference/"; fi && mkdir -p '/tmp/tmprt3848zx/job_working_directory/000/11/outputs/dataset_a81ec982-15db-4182-89f1-1c9dd6c0dfdd_files' && cp outputdir/combined_reference/*.html '/tmp/tmprt3848zx/job_working_directory/000/11/outputs/dataset_a81ec982-15db-4182-89f1-1c9dd6c0dfdd_files' && if [[ -d "outputdir/icarus_viewers" ]]; then cp -R outputdir/icarus_viewers 'outputdir/combined_reference/'; fi && if [[ -d "outputdir/combined_reference/icarus_viewers" ]]; then cp -R outputdir/combined_reference/icarus_viewers '/tmp/tmprt3848zx/job_working_directory/000/11/outputs/dataset_a81ec982-15db-4182-89f1-1c9dd6c0dfdd_files'; fi && if [[ -d "outputdir/krona_charts/" ]]; then mkdir -p 'None' && cp outputdir/krona_charts/*.html 'None'; fi

            Exit Code:

            • 0

            Standard Output:

            • 50contig_reads
              /usr/local/opt/quast-5.3.0/metaquast.py --pe1 pe1-50contig_reads.fastqsanger.gz --pe2 pe2-50contig_reads.fastqsanger.gz --labels 50contig_reads -o outputdir --max-ref-num 0 --min-identity 90.0 --min-contig 500 --min-alignment 65 --ambiguity-usage one --ambiguity-score 0.99 --local-mis-size 200 --contig-thresholds 0,1000 --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500 --x-for-Nx 90 /tmp/tmprt3848zx/files/f/5/6/dataset_f56caa53-2f68-4615-b2bf-37274711c329.dat --threads 1
              
              Version: 5.3.0
              
              System information:
                OS: Linux-6.8.0-1021-azure-x86_64-with-glibc2.36 (linux_64)
                Python version: 3.12.3
                CPUs number: 4
              
              Started: 2025-04-10 08:35:59
              
              Logging to /tmp/tmprt3848zx/job_working_directory/000/11/working/outputdir/metaquast.log
              WARNING: --ambiguity-usage was set to 'all' because not default --ambiguity-score was specified
              INFO	generated new fontManager
              INFO	generated new fontManager
              
              Contigs:
                Pre-processing...
                /tmp/tmprt3848zx/files/f/5/6/dataset_f56caa53-2f68-4615-b2bf-37274711c329.dat ==> 50contig_reads
              
              NOTICE: Maximum number of references (--max-ref-number) is set to 0, search in SILVA 16S rRNA database is disabled
              
              NOTICE: No references are provided, starting regular QUAST with MetaGeneMark gene finder
              /usr/local/opt/quast-5.3.0/quast.py --pe1 pe1-50contig_reads.fastqsanger.gz --pe2 pe2-50contig_reads.fastqsanger.gz --min-identity 90.0 --min-contig 500 --min-alignment 65 --ambiguity-usage one --ambiguity-score 0.99 --local-mis-size 200 --contig-thresholds 0,1000 --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500 --x-for-Nx 90 --threads 1 /tmp/tmprt3848zx/files/f/5/6/dataset_f56caa53-2f68-4615-b2bf-37274711c329.dat -o /tmp/tmprt3848zx/job_working_directory/000/11/working/outputdir --labels 50contig_reads
              
              Version: 5.3.0
              
              System information:
                OS: Linux-6.8.0-1021-azure-x86_64-with-glibc2.36 (linux_64)
                Python version: 3.12.3
                CPUs number: 4
              
              Started: 2025-04-10 08:36:00
              
              Logging to /tmp/tmprt3848zx/job_working_directory/000/11/working/outputdir/quast.log
              NOTICE: Output directory already exists and looks like a QUAST output dir. Existing results can be reused (e.g. previously generated alignments)!
              WARNING: --ambiguity-usage was set to 'all' because not default --ambiguity-score was specified
              
              CWD: /tmp/tmprt3848zx/job_working_directory/000/11/working
              Main parameters: 
                MODE: meta, threads: 1, min contig length: 500, min alignment length: 65, min alignment IDY: 90.0, \
                ambiguity: all, min local misassembly length: 200, min extensive misassembly length: 1000
              
              Contigs:
                Pre-processing...
                /tmp/tmprt3848zx/files/f/5/6/dataset_f56caa53-2f68-4615-b2bf-37274711c329.dat ==> 50contig_reads
              
              2025-04-10 08:36:00
              Running Reads analyzer...
              NOTICE: Permission denied accessing /usr/local/lib/python3.12/site-packages/quast_libs/gridss. GRIDSS will be downloaded to home directory /tmp/tmprt3848zx/job_working_directory/000/11/home/.quast
              Downloading gridss (file: gridss-1.4.1.jar)...
               0.0% of 38935087 bytes
               ...
               99.0% of 38935087 bytes
              gridss successfully downloaded!
                Logging to files /tmp/tmprt3848zx/job_working_directory/000/11/working/outputdir/reads_stats/reads_stats.log and /tmp/tmprt3848zx/job_working_directory/000/11/working/outputdir/reads_stats/reads_stats.err...
                Pre-processing reads...
                Running BWA...
                Done.
                Sorting SAM-file...
                Analysis is finished.
                Creating total report...
                  saved to /tmp/tmprt3848zx/job_working_directory/000/11/working/outputdir/reads_stats/reads_report.txt, reads_report.tsv, and reads_report.tex
              Done.
              
              2025-04-10 08:36:05
              Running Basic statistics processor...
                Contig files: 
                  50contig_reads
                Calculating N50 and L50...
                  50contig_reads, N50 = 4877, L50 = 22, auN = 5187.8, Total length = 265405, GC % = 36.45, # N's per 100 kbp =  0.00
                Drawing Nx plot...
                  saved to /tmp/tmprt3848zx/job_working_directory/000/11/working/outputdir/basic_stats/Nx_plot.pdf
                Drawing cumulative plot...
                  saved to /tmp/tmprt3848zx/job_working_directory/000/11/working/outputdir/basic_stats/cumulative_plot.pdf
                Drawing GC content plot...
                  saved to /tmp/tmprt3848zx/job_working_directory/000/11/working/outputdir/basic_stats/GC_content_plot.pdf
                Drawing 50contig_reads GC content plot...
                  saved to /tmp/tmprt3848zx/job_working_directory/000/11/working/outputdir/basic_stats/50contig_reads_GC_content_plot.pdf
              Done.
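
For readers unfamiliar with the N50/L50 metrics QUAST reports above, a minimal sketch of their definition (the contig lengths below are illustrative only, not taken from this assembly): sort contig lengths in descending order and accumulate until at least half the total assembly length is covered; N50 is the length of the contig that crosses that threshold, L50 is how many contigs it took.

```python
# Minimal sketch of the N50/L50 definition used by assembly QC tools such as
# QUAST. The example lengths are hypothetical, chosen only to illustrate.
def n50_l50(lengths):
    lengths = sorted(lengths, reverse=True)
    half_total = sum(lengths) / 2
    cumulative = 0
    for count, length in enumerate(lengths, start=1):
        cumulative += length
        if cumulative >= half_total:
            return length, count  # (N50, L50)

print(n50_l50([10, 8, 6, 4, 2]))  # -> (8, 2)
```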
              
              NOTICE: Genes are not predicted by default. Use --gene-finding or --glimmer option to enable it.
              
              2025-04-10 08:36:06
              Creating large visual summaries...
              This may take a while: press Ctrl-C to skip this step..
                1 of 2: Creating PDF with all tables and plots...
                2 of 2: Creating Icarus viewers...
              Done
              
              2025-04-10 08:36:07
              RESULTS:
                Text versions of total report are saved to /tmp/tmprt3848zx/job_working_directory/000/11/working/outputdir/report.txt, report.tsv, and report.tex
                Text versions of transposed total report are saved to /tmp/tmprt3848zx/job_working_directory/000/11/working/outputdir/transposed_report.txt, transposed_report.tsv, and transposed_report.tex
                HTML version (interactive tables and plots) is saved to /tmp/tmprt3848zx/job_working_directory/000/11/working/outputdir/report.html
                PDF version (tables and plots) is saved to /tmp/tmprt3848zx/job_working_directory/000/11/working/outputdir/report.pdf
                Icarus (contig browser) is saved to /tmp/tmprt3848zx/job_working_directory/000/11/working/outputdir/icarus.html
                Log is saved to /tmp/tmprt3848zx/job_working_directory/000/11/working/outputdir/quast.log
              
              Finished: 2025-04-10 08:36:07
              Elapsed time: 0:00:06.967949
              NOTICEs: 3; WARNINGs: 1; non-fatal ERRORs: 0
              
              Thank you for using QUAST!
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              advanced {"contig_thresholds": "0,1000", "extensive_mis_size": "1000", "fragmented_max_indent": null, "report_all_metrics": false, "scaffold_gap_max_size": "1000", "skip_unaligned_mis_contigs": true, "strict_NA": false, "unaligned_part_size": "500", "x_for_Nx": "90"}
              alignments {"ambiguity_score": "0.99", "ambiguity_usage": "one", "fragmented": false, "local_mis_size": "200", "min_alignment": "65", "upper_bound_assembly": false, "upper_bound_min_con": null, "use_all_alignments": false}
              assembly {"__current_case__": 1, "min_identity": "90.0", "ref": {"__current_case__": 0, "max_ref_num": "0", "origin": "silva"}, "reuse_combined_alignments": false, "type": "metagenome"}
              chromInfo "/tmp/tmprt3848zx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              genes {"conserved_genes_finding": false, "gene_finding": {"__current_case__": 0, "tool": "none"}, "rna_finding": false}
              large false
              min_contig "500"
              mode {"__current_case__": 0, "in": {"__current_case__": 1, "custom": "false", "inputs": {"values": [{"id": 10, "src": "dce"}]}}, "mode": "individual", "reads": {"__current_case__": 2, "input_1": {"values": [{"id": 7, "src": "dce"}]}, "input_2": {"values": [{"id": 8, "src": "dce"}]}, "reads_option": "paired"}}
              output_files ["html", "pdf", "tabular", "log", "summary", "krona"]
              split_scaffolds false
      • Step 28: toolshed.g2.bx.psu.edu/repos/devteam/bowtie2/bowtie2/2.5.3+galaxy1:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • set -o | grep -q pipefail && set -o pipefail; bowtie2-build --threads ${GALAXY_SLOTS:-4} '/tmp/tmprt3848zx/files/f/5/6/dataset_f56caa53-2f68-4615-b2bf-37274711c329.dat' genome && ln -s -f '/tmp/tmprt3848zx/files/f/5/6/dataset_f56caa53-2f68-4615-b2bf-37274711c329.dat' genome.fa &&   ln -f -s '/tmp/tmprt3848zx/files/2/8/2/dataset_28228683-09f8-4a16-a632-dc47b6787386.dat' input_f.fastq.gz &&  ln -f -s '/tmp/tmprt3848zx/files/3/a/1/dataset_3a16e83b-8f5d-4f72-9525-9611cef44b4e.dat' input_r.fastq.gz &&    THREADS=${GALAXY_SLOTS:-4} && if [ "$THREADS" -gt 1 ]; then (( THREADS-- )); fi &&   bowtie2  -p "$THREADS"  -x 'genome'   -1 'input_f.fastq.gz' -2 'input_r.fastq.gz'                2> >(tee '/tmp/tmprt3848zx/job_working_directory/000/12/outputs/dataset_5c850e40-c06a-42aa-91a7-ceb370e774f3.dat' >&2)  | samtools sort -l 0 -T "${TMPDIR:-.}" -O bam | samtools view --no-PG -O bam -@ ${GALAXY_SLOTS:-1} -o '/tmp/tmprt3848zx/job_working_directory/000/12/outputs/dataset_c93f3612-c51c-408f-84c5-2d80d96ace49.dat'

            Exit Code:

            • 0

            Standard Error:

            • Building a SMALL index
              Renaming genome.3.bt2.tmp to genome.3.bt2
              Renaming genome.4.bt2.tmp to genome.4.bt2
              Renaming genome.1.bt2.tmp to genome.1.bt2
              Renaming genome.2.bt2.tmp to genome.2.bt2
              Renaming genome.rev.1.bt2.tmp to genome.rev.1.bt2
              Renaming genome.rev.2.bt2.tmp to genome.rev.2.bt2
              9462 reads; of these:
                9462 (100.00%) were paired; of these:
                  90 (0.95%) aligned concordantly 0 times
                  9300 (98.29%) aligned concordantly exactly 1 time
                  72 (0.76%) aligned concordantly >1 times
                  ----
                  90 pairs aligned concordantly 0 times; of these:
                    8 (8.89%) aligned discordantly 1 time
                  ----
                  82 pairs aligned 0 times concordantly or discordantly; of these:
                    164 mates make up the pairs; of these:
                      93 (56.71%) aligned 0 times
                      70 (42.68%) aligned exactly 1 time
                      1 (0.61%) aligned >1 times
              99.51% overall alignment rate
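
The 99.51% overall alignment rate above can be reproduced from the other numbers in the block: bowtie2 computes it over individual mates (2 × 9462 pairs), of which 93 aligned 0 times.

```python
# Sanity check of the bowtie2 overall alignment rate (counts copied from the
# log above): rate = aligned mates / total mates.
total_mates = 2 * 9462   # 9462 read pairs, two mates each
unaligned_mates = 93
rate = 100 * (total_mates - unaligned_mates) / total_mates
print(f"{rate:.2f}%")  # -> 99.51%
```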
              

            Standard Output:

            • Settings:
                Output files: "genome.*.bt2"
                Line rate: 6 (line is 64 bytes)
                Lines per side: 1 (side is 64 bytes)
                Offset rate: 4 (one in 16)
                FTable chars: 10
                Strings: unpacked
                Max bucket size: default
                Max bucket size, sqrt multiplier: default
                Max bucket size, len divisor: 4
                Difference-cover sample period: 1024
                Endianness: little
                Actual local endianness: little
                Sanity checking: disabled
                Assertions: disabled
                Random seed: 0
                Sizeofs: void*:8, int:4, long:8, size_t:8
              Input files DNA, FASTA:
                /tmp/tmprt3848zx/files/f/5/6/dataset_f56caa53-2f68-4615-b2bf-37274711c329.dat
              Reading reference sizes
                Time reading reference sizes: 00:00:00
              Calculating joined length
              Writing header
              Reserving space for joined string
              Joining reference sequences
                Time to join reference sequences: 00:00:00
              bmax according to bmaxDivN setting: 66531
              Using parameters --bmax 49899 --dcv 1024
                Doing ahead-of-time memory usage test
                Passed!  Constructing with these parameters: --bmax 49899 --dcv 1024
              Constructing suffix-array element generator
              Building DifferenceCoverSample
                Building sPrime
                Building sPrimeOrder
                V-Sorting samples
                V-Sorting samples time: 00:00:00
                Allocating rank array
                Ranking v-sort output
                Ranking v-sort output time: 00:00:00
                Invoking Larsson-Sadakane on ranks
                Invoking Larsson-Sadakane on ranks time: 00:00:00
                Sanity-checking and returning
              Building samples
              Reserving space for 12 sample suffixes
              Generating random suffixes
              QSorting 12 sample offsets, eliminating duplicates
              QSorting sample offsets, eliminating duplicates time: 00:00:00
              Multikey QSorting 12 samples
                (Using difference cover)
                Multikey QSorting samples time: 00:00:00
              Calculating bucket sizes
              Splitting and merging
                Splitting and merging time: 00:00:00
              Split 1, merged 6; iterating...
              Splitting and merging
                Splitting and merging time: 00:00:00
              Avg bucket size: 38016.9 (target: 49898)
              Converting suffix-array elements to index image
              Allocating ftab, absorbFtab
              Entering Ebwt loop
              Getting block 1 of 7
                Reserving size (49899) for bucket 1
                Calculating Z arrays for bucket 1
                Entering block accumulator loop for bucket 1:
                bucket 1: 10% ... 100%
                Sorting block of length 18323 for bucket 1
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 18324 for bucket 1
              Getting block 2 of 7
                Reserving size (49899) for bucket 2
                Calculating Z arrays for bucket 2
                Entering block accumulator loop for bucket 2:
                bucket 2: 10% ... 100%
                Sorting block of length 49606 for bucket 2
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 49607 for bucket 2
              Getting block 3 of 7
                Reserving size (49899) for bucket 3
                Calculating Z arrays for bucket 3
                Entering block accumulator loop for bucket 3:
                bucket 3: 10% ... 100%
                Sorting block of length 45151 for bucket 3
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 45152 for bucket 3
              Getting block 4 of 7
                Reserving size (49899) for bucket 4
                Calculating Z arrays for bucket 4
                Entering block accumulator loop for bucket 4:
                bucket 4: 10% ... 100%
                Sorting block of length 49787 for bucket 4
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 49788 for bucket 4
              Getting block 5 of 7
                Reserving size (49899) for bucket 5
                Calculating Z arrays for bucket 5
                Entering block accumulator loop for bucket 5:
                bucket 5: 10% ... 100%
                Sorting block of length 28638 for bucket 5
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 28639 for bucket 5
              Getting block 6 of 7
                Reserving size (49899) for bucket 6
                Calculating Z arrays for bucket 6
                Entering block accumulator loop for bucket 6:
                bucket 6: 10% ... 100%
                Sorting block of length 43194 for bucket 6
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 43195 for bucket 6
              Getting block 7 of 7
                Reserving size (49899) for bucket 7
                Calculating Z arrays for bucket 7
                Entering block accumulator loop for bucket 7:
                bucket 7: 10% ... 100%
                Sorting block of length 31419 for bucket 7
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 31420 for bucket 7
              Exited Ebwt loop
              fchr[A]: 0
              fchr[C]: 84325
              fchr[G]: 133305
              fchr[T]: 181307
              fchr[$]: 266124
              Exiting Ebwt::buildToDisk()
              Returning from initFromVector
              Wrote 4286544 bytes to primary EBWT file: genome.1.bt2.tmp
              Wrote 66536 bytes to secondary EBWT file: genome.2.bt2.tmp
              Re-opening _in1 and _in2 as input streams
              Returning from Ebwt constructor
              Headers:
                  len: 266124
                  bwtLen: 266125
                  sz: 66531
                  bwtSz: 66532
                  lineRate: 6
                  offRate: 4
                  offMask: 0xfffffff0
                  ftabChars: 10
                  eftabLen: 20
                  eftabSz: 80
                  ftabLen: 1048577
                  ftabSz: 4194308
                  offsLen: 16633
                  offsSz: 66532
                  lineSz: 64
                  sideSz: 64
                  sideBwtSz: 48
                  sideBwtLen: 192
                  numSides: 1387
                  numLines: 1387
                  ebwtTotLen: 88768
                  ebwtTotSz: 88768
                  color: 0
                  reverse: 0
              Total time for call to driver() for forward index: 00:00:00
              Reading reference sizes
                Time reading reference sizes: 00:00:00
              Calculating joined length
              Writing header
              Reserving space for joined string
              Joining reference sequences
                Time to join reference sequences: 00:00:00
                Time to reverse reference sequence: 00:00:00
              bmax according to bmaxDivN setting: 66531
              Using parameters --bmax 49899 --dcv 1024
                Doing ahead-of-time memory usage test
                Passed!  Constructing with these parameters: --bmax 49899 --dcv 1024
              Constructing suffix-array element generator
              Building DifferenceCoverSample
                Building sPrime
                Building sPrimeOrder
                V-Sorting samples
                V-Sorting samples time: 00:00:00
                Allocating rank array
                Ranking v-sort output
                Ranking v-sort output time: 00:00:00
                Invoking Larsson-Sadakane on ranks
                Invoking Larsson-Sadakane on ranks time: 00:00:00
                Sanity-checking and returning
              Building samples
              Reserving space for 12 sample suffixes
              Generating random suffixes
              QSorting 12 sample offsets, eliminating duplicates
              QSorting sample offsets, eliminating duplicates time: 00:00:00
              Multikey QSorting 12 samples
                (Using difference cover)
                Multikey QSorting samples time: 00:00:00
              Calculating bucket sizes
              Splitting and merging
                Splitting and merging time: 00:00:00
              Split 1, merged 7; iterating...
              Splitting and merging
                Splitting and merging time: 00:00:00
              Avg bucket size: 44353.2 (target: 49898)
              Converting suffix-array elements to index image
              Allocating ftab, absorbFtab
              Entering Ebwt loop
              Getting block 1 of 6
                Reserving size (49899) for bucket 1
                Calculating Z arrays for bucket 1
                Entering block accumulator loop for bucket 1:
                bucket 1: 10% ... 100%
                Sorting block of length 47687 for bucket 1
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 47688 for bucket 1
              Getting block 2 of 6
                Reserving size (49899) for bucket 2
                Calculating Z arrays for bucket 2
                Entering block accumulator loop for bucket 2:
                bucket 2: 10% ... 100%
                Sorting block of length 36636 for bucket 2
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 36637 for bucket 2
              Getting block 3 of 6
                Reserving size (49899) for bucket 3
                Calculating Z arrays for bucket 3
                Entering block accumulator loop for bucket 3:
                bucket 3: 10% ... 100%
                Sorting block of length 49027 for bucket 3
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 49028 for bucket 3
              Getting block 4 of 6
                Reserving size (49899) for bucket 4
                Calculating Z arrays for bucket 4
                Entering block accumulator loop for bucket 4:
                bucket 4: 10% ... 100%
                Sorting block of length 37449 for bucket 4
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 37450 for bucket 4
              Getting block 5 of 6
                Reserving size (49899) for bucket 5
                Calculating Z arrays for bucket 5
                Entering block accumulator loop for bucket 5:
                bucket 5: 10% ... 100%
                Sorting block of length 47142 for bucket 5
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 47143 for bucket 5
              Getting block 6 of 6
                Reserving size (49899) for bucket 6
                Calculating Z arrays for bucket 6
                Entering block accumulator loop for bucket 6:
                bucket 6: 10%
                bucket 6: 20%
                bucket 6: 30%
                bucket 6: 40%
                bucket 6: 50%
                bucket 6: 60%
                bucket 6: 70%
                bucket 6: 80%
                bucket 6: 90%
                bucket 6: 100%
                Sorting block of length 48178 for bucket 6
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 48179 for bucket 6
              Exited Ebwt loop
              fchr[A]: 0
              fchr[C]: 84325
              fchr[G]: 133305
              fchr[T]: 181307
              fchr[$]: 266124
              Exiting Ebwt::buildToDisk()
              Returning from initFromVector
              Wrote 4286544 bytes to primary EBWT file: genome.rev.1.bt2.tmp
              Wrote 66536 bytes to secondary EBWT file: genome.rev.2.bt2.tmp
              Re-opening _in1 and _in2 as input streams
              Returning from Ebwt constructor
              Headers:
                  len: 266124
                  bwtLen: 266125
                  sz: 66531
                  bwtSz: 66532
                  lineRate: 6
                  offRate: 4
                  offMask: 0xfffffff0
                  ftabChars: 10
                  eftabLen: 20
                  eftabSz: 80
                  ftabLen: 1048577
                  ftabSz: 4194308
                  offsLen: 16633
                  offsSz: 66532
                  lineSz: 64
                  sideSz: 64
                  sideBwtSz: 48
                  sideBwtLen: 192
                  numSides: 1387
                  numLines: 1387
                  ebwtTotLen: 88768
                  ebwtTotSz: 88768
                  color: 0
                  reverse: 1
              Total time for backward call to driver() for mirror index: 00:00:00
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              analysis_type {"__current_case__": 0, "analysis_type_selector": "simple", "presets": "no_presets"}
              chromInfo "/tmp/tmprt3848zx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              library {"__current_case__": 2, "aligned_file": false, "input_1": {"values": [{"id": 1, "src": "dce"}]}, "paired_options": {"__current_case__": 1, "paired_options_selector": "no"}, "type": "paired_collection", "unaligned_file": false}
              own_file __identifier__
              reference_genome {"__current_case__": 1, "own_file": {"values": [{"id": 10, "src": "dce"}]}, "source": "history"}
              rg {"__current_case__": 3, "rg_selector": "do_not_set"}
              sam_options {"__current_case__": 1, "sam_options_selector": "no"}
              save_mapping_stats true
      • Step 29: toolshed.g2.bx.psu.edu/repos/iuc/concoct_cut_up_fasta/concoct_cut_up_fasta/1.1.0+galaxy2:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • ln -s '/tmp/tmprt3848zx/files/f/5/6/dataset_f56caa53-2f68-4615-b2bf-37274711c329.dat' 'input.fa' &&  cut_up_fasta.py 'input.fa' --chunk_size 10000 --overlap_size 0 --merge_last --bedfile '/tmp/tmprt3848zx/job_working_directory/000/13/outputs/dataset_37f8de3e-f68b-444a-9f04-f8aac7ff4955.dat' > '/tmp/tmprt3848zx/job_working_directory/000/13/outputs/dataset_427437bc-34ba-411c-8993-eeae950b0e46.dat'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              bedfile true
              chromInfo "/tmp/tmprt3848zx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              chunk_size "10000"
              dbkey "?"
              input_fasta __identifier__
              merge_last true
              overlap_size "0"
      • Step 30: toolshed.g2.bx.psu.edu/repos/devteam/samtools_sort/samtools_sort/2.0.5:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • addthreads=${GALAXY_SLOTS:-1} && (( addthreads-- )) &&   addmemory=${GALAXY_MEMORY_MB_PER_SLOT:-768} && ((addmemory=addmemory*75/100)) &&  samtools sort -@ $addthreads -m $addmemory"M"   -O bam -T "${TMPDIR:-.}" '/tmp/tmprt3848zx/files/c/9/3/dataset_c93f3612-c51c-408f-84c5-2d80d96ace49.dat' > '/tmp/tmprt3848zx/job_working_directory/000/14/outputs/dataset_1fae1fad-6c83-48b9-aa58-8d63340b3f94.dat'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              chromInfo "/tmp/tmprt3848zx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              input1 __identifier__
              minhash false
              prim_key_cond {"__current_case__": 0, "prim_key_select": ""}
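The `samtools sort` wrapper above reserves one thread for the main process and budgets 75% of the per-slot memory for sorting. That resource arithmetic, sketched in Python (1 slot / 768 MB are the wrapper's fallback defaults):

```python
import os

def sort_resources(env=os.environ):
    """Mirror the wrapper's shell math: additional sort threads are
    slots minus one, and per-thread sort memory is 75% of the
    per-slot allowance (integer division, in MB)."""
    threads = int(env.get("GALAXY_SLOTS", 1)) - 1
    mem_mb = int(env.get("GALAXY_MEMORY_MB_PER_SLOT", 768)) * 75 // 100
    return threads, f"{mem_mb}M"
```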
      • Step 4: Read length (CONCOCT):

        • step_state: scheduled
      • Step 31: toolshed.g2.bx.psu.edu/repos/iuc/semibin/semibin/2.0.2+galaxy1:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • ln -s '/tmp/tmprt3848zx/files/1/f/a/dataset_1fae1fad-6c83-48b9-aa58-8d63340b3f94.dat' '50contig_reads.bam' &&   ln -s '/tmp/tmprt3848zx/files/f/5/6/dataset_f56caa53-2f68-4615-b2bf-37274711c329.dat' 'contigs.fasta' &&  SemiBin2 single_easy_bin --environment 'global' --input-fasta 'contigs.fasta' --input-bam *.bam --output 'output' --cannot-name 'cannot'   --orf-finder 'fast-naive' --random-seed 0  --epoches 20 --batch-size 2048 --max-node 1.0 --max-edges 200 --minfasta-kbs 200 --compression none --threads ${GALAXY_SLOTS:-1} --processes ${GALAXY_SLOTS:-1} && echo "output" && ls output

            Exit Code:

            • 0

            Standard Error:

            • 2025-04-10 08:36:39 dd5582054a32 SemiBin[9] INFO Binning for short_read
              2025-04-10 08:36:44 dd5582054a32 SemiBin[9] INFO Did not detect GPU, using CPU.
              2025-04-10 08:36:44 dd5582054a32 SemiBin[9] INFO Generating training data...
              2025-04-10 08:36:45 dd5582054a32 SemiBin[9] INFO Calculating coverage for every sample.
              2025-04-10 08:36:45 dd5582054a32 SemiBin[9] INFO Processed: 50contig_reads.bam
              2025-04-10 08:36:45 dd5582054a32 SemiBin[9] INFO Start binning.
              2025-04-10 08:36:47 dd5582054a32 SemiBin[9] INFO Number of bins prior to reclustering: 1
              2025-04-10 08:36:47 dd5582054a32 SemiBin[9] INFO Running naive ORF finder
              2025-04-10 08:36:48 dd5582054a32 SemiBin[9] INFO Number of bins after reclustering: 1
              2025-04-10 08:36:48 dd5582054a32 SemiBin[9] INFO Binning finished
              

            Standard Output:

            • If you find SemiBin useful, please cite:
                      Pan, S.; Zhu, C.; Zhao, XM.; Coelho, LP. A deep siamese neural network improves metagenome-assembled genomes in microbiome datasets across different environments. Nat Commun 13, 2326 (2022). https://doi.org/10.1038/s41467-022-29843-y
              
                      Pan, S.; Zhao, XM; Coelho, LP. SemiBin2: self-supervised contrastive learning leads to better MAGs for short- and long-read sequencing. Bioinformatics Volume 39, Issue Supplement_1, June 2023, Pages i21–i29. https://doi.org/10.1093/bioinformatics/btad209
              
              
              output
              50contig_reads.bam_0_data_cov.csv
              SemiBinRun.log
              contig_bins.tsv
              data.csv
              data_split.csv
              output_bins
              recluster_bins_info.tsv
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              annot {"ml_threshold": null}
              bin {"max_edges": "200", "max_node": "1.0", "minfasta_kbs": "200"}
              chromInfo "/tmp/tmprt3848zx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              extra_output ["data", "coverage"]
              min_len {"__current_case__": 0, "method": "automatic"}
              mode {"__current_case__": 0, "environment": "global", "input_bam": {"values": [{"id": 20, "src": "dce"}]}, "input_fasta": {"values": [{"id": 10, "src": "dce"}]}, "ref": {"__current_case__": 2, "select": "ml"}, "select": "single"}
              orf_finder "fast-naive"
              random_seed "0"
              training {"batch_size": "2048", "epoches": "20"}
      • Step 32: toolshed.g2.bx.psu.edu/repos/iuc/concoct_coverage_table/concoct_coverage_table/1.1.0+galaxy2:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • mkdir 'mapping' && ln -s '/tmp/tmprt3848zx/files/1/f/a/dataset_1fae1fad-6c83-48b9-aa58-8d63340b3f94.dat' 'mapping/_tmp_tmprt3848zx_files_1_f_a_dataset_1fae1fad-6c83-48b9-aa58-8d63340b3f94.dat.sorted.bam' && samtools index 'mapping/_tmp_tmprt3848zx_files_1_f_a_dataset_1fae1fad-6c83-48b9-aa58-8d63340b3f94.dat.sorted.bam' 'mapping/_tmp_tmprt3848zx_files_1_f_a_dataset_1fae1fad-6c83-48b9-aa58-8d63340b3f94.dat.bam.bai' && mv 'mapping/_tmp_tmprt3848zx_files_1_f_a_dataset_1fae1fad-6c83-48b9-aa58-8d63340b3f94.dat.sorted.bam' 'mapping/_tmp_tmprt3848zx_files_1_f_a_dataset_1fae1fad-6c83-48b9-aa58-8d63340b3f94.dat.bam' && concoct_coverage_table.py '/tmp/tmprt3848zx/files/3/7/f/dataset_37f8de3e-f68b-444a-9f04-f8aac7ff4955.dat' mapping/*.bam > '/tmp/tmprt3848zx/job_working_directory/000/16/outputs/dataset_f877af85-c3a7-4d9c-88d4-79864a47077b.dat'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              bedfile __identifier__
              chromInfo "/tmp/tmprt3848zx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              mode {"__current_case__": 0, "bamfile": {"values": [{"id": 20, "src": "dce"}]}, "type": "individual"}
              mode bamfile
      • Step 33: toolshed.g2.bx.psu.edu/repos/iuc/metabat2_jgi_summarize_bam_contig_depths/metabat2_jgi_summarize_bam_contig_depths/2.17+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • jgi_summarize_bam_contig_depths --outputDepth '/tmp/tmprt3848zx/job_working_directory/000/17/outputs/dataset_3198f2d6-2daa-413f-bf6a-3a1554beb9b3.dat' --percentIdentity 97   --minMapQual 0 --weightMapQual 0.0  --maxEdgeBases 75 --shredLength 16000 --shredDepth 5 --minContigLength 1 --minContigDepth 0.0 '/tmp/tmprt3848zx/files/1/f/a/dataset_1fae1fad-6c83-48b9-aa58-8d63340b3f94.dat'

            Exit Code:

            • 0

            Standard Error:

            • Output depth matrix to /tmp/tmprt3848zx/job_working_directory/000/17/outputs/dataset_3198f2d6-2daa-413f-bf6a-3a1554beb9b3.dat
              Minimum percent identity for a mapped read: 0.97
              minMapQual: 0
              weightMapQual: 0
              Edge bases will be included up to 75 bases
              shredLength: 16000
              shredDepth: 5
              minContigLength: 1
              minContigDepth: 0
              jgi_summarize_bam_contig_depths 2.17 (Bioconda) 2024-12-15T06:34:17
              Running with 4 threads to save memory you can reduce the number of threads with the OMP_NUM_THREADS variable
              Output matrix to /tmp/tmprt3848zx/job_working_directory/000/17/outputs/dataset_3198f2d6-2daa-413f-bf6a-3a1554beb9b3.dat
              Opening all bam files and validating headers
              Processing bam files with largest_contig=0
              Thread 0 opening and reading the header for file: /tmp/tmprt3848zx/files/1/f/a/dataset_1fae1fad-6c83-48b9-aa58-8d63340b3f94.dat
              Thread 0 opened the file: /tmp/tmprt3848zx/files/1/f/a/dataset_1fae1fad-6c83-48b9-aa58-8d63340b3f94.dat
              Thread 0 processing bam 0: dataset_1fae1fad-6c83-48b9-aa58-8d63340b3f94.dat
              Thread 0 finished reading bam 0: dataset_1fae1fad-6c83-48b9-aa58-8d63340b3f94.dat
              Thread 0 finished: dataset_1fae1fad-6c83-48b9-aa58-8d63340b3f94.dat with 18924 reads and 8473 readsWellMapped (44.7738%)
              Creating depth matrix file: /tmp/tmprt3848zx/job_working_directory/000/17/outputs/dataset_3198f2d6-2daa-413f-bf6a-3a1554beb9b3.dat
              Closing last bam file
              Finished
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              advanced {"includeEdgeBases": false, "maxEdgeBases": "75", "minMapQual": "0", "noIntraDepthVariance": false, "output_paired_contigs": false, "percentIdentity": "97", "showDepth": false, "weightMapQual": "0.0"}
              bam_indiv_input __identifier__
              chromInfo "/tmp/tmprt3848zx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              mode {"__current_case__": 0, "bam_indiv_input": {"values": [{"id": 20, "src": "dce"}]}, "type": "individual", "use_reference_cond": {"__current_case__": 0, "use_reference": "no"}}
              shredding {"minContigDepth": "0.0", "minContigLength": "1", "shredDepth": "5", "shredLength": "16000"}
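`jgi_summarize_bam_contig_depths` reports a mean read depth per contig, discounting bases near contig edges (`--maxEdgeBases 75` above) where alignment is unreliable. A toy sketch of that averaging over a per-base coverage vector (a loose analogy, not the tool's actual C++ implementation, which works on read alignments rather than a precomputed vector):

```python
def mean_depth(coverage, max_edge_bases=75):
    """Average per-base coverage over a contig, ignoring up to
    max_edge_bases at each end (loosely mirrors --maxEdgeBases)."""
    n = len(coverage)
    trim = min(max_edge_bases, (n - 1) // 2)  # keep at least one base
    core = coverage[trim:n - trim] if n > 2 * trim else coverage
    return sum(core) / len(core)
```

The resulting depth table is what MaxBin2 and MetaBAT2 consume as their abundance input in the later steps.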
      • Step 34: toolshed.g2.bx.psu.edu/repos/iuc/fasta_to_contig2bin/Fasta_to_Contig2Bin/1.1.7+galaxy1:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • mkdir "inputs" && ln -s '/tmp/tmprt3848zx/files/2/4/a/dataset_24ae73b9-a6e2-4215-ba32-22c1d46c6a5c.dat' 'inputs/SemiBin_0.fasta' && Fasta_to_Contig2Bin.sh --extension fasta --input_folder 'inputs' > '/tmp/tmprt3848zx/job_working_directory/000/25/outputs/dataset_33341154-822d-4123-8d83-f89e1bf8fb3a.dat'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              chromInfo "/tmp/tmprt3848zx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              inputs {"values": [{"id": 21, "src": "dce"}]}
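`Fasta_to_Contig2Bin.sh` inverts a directory of bin FASTAs into the two-column contig-to-bin table that Binette consumes later. A rough Python equivalent (assuming the same layout, one `<bin>.fasta` per bin):

```python
from pathlib import Path

def fasta_to_contig2bin(folder, extension="fasta"):
    """Yield tab-separated (contig, bin) lines: each FASTA header in
    <bin>.<extension> assigns that contig to bin <bin>."""
    for fa in sorted(Path(folder).glob(f"*.{extension}")):
        bin_name = fa.stem
        for line in fa.read_text().splitlines():
            if line.startswith(">"):
                contig = line[1:].split()[0]
                yield f"{contig}\t{bin_name}"
```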
      • Step 35: toolshed.g2.bx.psu.edu/repos/iuc/concoct/concoct/1.1.0+galaxy2:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • ln -s '/tmp/tmprt3848zx/files/4/2/7/dataset_427437bc-34ba-411c-8993-eeae950b0e46.dat' 'composition_file.fa' &&  mkdir outdir && concoct --coverage_file '/tmp/tmprt3848zx/files/f/8/7/dataset_f877af85-c3a7-4d9c-88d4-79864a47077b.dat' --composition_file 'composition_file.fa' --clusters 400 --kmer_length 4 --threads ${GALAXY_SLOTS:-4} --length_threshold 1000 --read_length 100 --total_percentage_pca 90 --basename 'outdir/' --seed 1 --iterations 500   --no_original_data

            Exit Code:

            • 0

            Standard Error:

            • WARNING:root:CONCOCT is running in single threaded mode. Please, consider adjusting the --threads parameter.
              Up and running. Check /tmp/tmprt3848zx/job_working_directory/000/18/working/outdir/log.txt for progress
              Setting 1 OMP threads
              Generate input data
              0,-32769.159419,695.861881
              1,-23702.908064,9066.251354
              2,-10301.271601,13401.636463
              3,-9099.561931,1201.709670
              4,-8491.920398,607.641533
              5,-8491.793887,0.126512
              6,-8491.793444,0.000443
              7,-8491.793370,0.000074
              

            Standard Output:

            • 59 2 1
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              advanced {"clusters": "400", "iterations": "500", "kmer_length": "4", "length_threshold": "1000", "no_cov_normalization": false, "read_length": "100", "seed": "1", "total_percentage_pca": "90"}
              chromInfo "/tmp/tmprt3848zx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              output {"converge_out": false, "log": false, "no_total_coverage": false}
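CONCOCT clusters the cut-up contigs on k-mer composition plus coverage (`--kmer_length 4` above, i.e. tetranucleotide frequencies). A sketch of such a composition feature vector (not CONCOCT's actual feature code, which also merges reverse complements and applies pseudocount, log, and PCA transforms before the Gaussian mixture step logged above):

```python
from itertools import product

def kmer_profile(seq, k=4):
    """Normalized k-mer composition vector for one contig;
    4**k features (256 for tetranucleotides)."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = {km: 0 for km in kmers}
    for i in range(len(seq) - k + 1):
        km = seq[i:i + k]
        if km in counts:  # skips windows containing N
            counts[km] += 1
    total = max(sum(counts.values()), 1)
    return [counts[km] / total for km in kmers]
```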
      • Step 36: toolshed.g2.bx.psu.edu/repos/mbernt/maxbin2/maxbin2/2.2.7+galaxy6:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • echo '/tmp/tmprt3848zx/files/3/1/9/dataset_3198f2d6-2daa-413f-bf6a-3a1554beb9b3.dat' >> abund_list &&   run_MaxBin.pl -contig '/tmp/tmprt3848zx/files/f/5/6/dataset_f56caa53-2f68-4615-b2bf-37274711c329.dat' -out out -abund_list abund_list -min_contig_length 1000 -max_iteration 50 -prob_threshold 0.5 -plotmarker -markerset 107 -thread ${GALAXY_SLOTS:-1}  && gzip -cd out.marker_of_each_bin.tar.gz | tar -xf -

            Exit Code:

            • 0

            Standard Error:

            • Attaching package: ‘gplots’
              
              The following object is masked from ‘package:stats’:
              
                  lowess
              
              

            Standard Output:

            • MaxBin 2.2.7
              Input contig: /tmp/tmprt3848zx/files/f/5/6/dataset_f56caa53-2f68-4615-b2bf-37274711c329.dat
              out header: out
              Min contig length: 1000
              Max iteration: 50
              Probability threshold: 0.5
              Thread: 1
              Located abundance file [/tmp/tmprt3848zx/files/3/1/9/dataset_3198f2d6-2daa-413f-bf6a-3a1554beb9b3.dat]
              Searching against 107 marker genes to find starting seed contigs for [/tmp/tmprt3848zx/files/f/5/6/dataset_f56caa53-2f68-4615-b2bf-37274711c329.dat]...
              Running FragGeneScan....
              Running HMMER hmmsearch....
              Try harder to dig out marker genes from contigs.
              Done data collection. Running MaxBin...
              Command: /usr/local/opt/MaxBin-2.2.7/src/MaxBin -fasta out.contig.tmp  -abund out.contig.tmp.abund1 -seed out.seed -out out -min_contig_length 1000 -max_run 50 -prob_threshold 0.5 
              Minimum contig length set to 1000.
              Reading seed list...
              Looking for seeds in sequences.
              	k141_52 [11001.000000]
              	k141_59 [9465.000000]
              Get 2 seeds.
              
              Start EM process.
              Iteration 1
              Iteration 2
              Iteration 3
              Iteration 4
              Iteration 5
              Iteration 6
              Iteration 7
              Iteration 8
              Iteration 9
              Iteration 10
              Iteration 11
              Iteration 12
              Iteration 13
              
              EM finishes successfully.
              
              Classifying sequences based on the EM result.
              Minimum probability for binning: 0.50
              Ignoring 0 bins without any sequences.
              Number of unclassified sequences: 0 (0.00%)
              Elapsed time:  0 days 00:00:00
              
              Rscript /usr/local/opt/MaxBin-2.2.7/heatmap.r out.marker out.marker.pdf
              null device 
                        1 
              out.001.marker.fasta
              out.002.marker.fasta
              Deleting intermediate files.
              
              
              ========== Job finished ==========
              Yielded 2 bins for contig (scaffold) file /tmp/tmprt3848zx/files/f/5/6/dataset_f56caa53-2f68-4615-b2bf-37274711c329.dat
              
              Here are the output files for this run.
              Please refer to the README file for further details.
              
              Summary file: out.summary
              Marker counts: out.marker
              Marker genes for each bin: out.marker_of_each_gene.tar.gz
              Bin files: out.001.fasta - out.002.fasta
              Unbinned sequences: out.noclass
              Marker plot: out.marker.pdf
              
              
              ========== Elapsed Time ==========
              0 hours 0 minutes and 2 seconds.
              
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "expression.json"
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              adv {"max_iteration": "50", "min_contig_length": "1000", "prob_threshold": "0.5"}
              assembly {"__current_case__": 0, "inputs": {"__current_case__": 1, "abund": {"values": [{"id": 27, "src": "dce"}]}, "type": "abund"}, "type": "individual"}
              chromInfo "/tmp/tmprt3848zx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              output {"log": true, "marker": true, "markers": true, "markerset": "107", "plotmarker": true}
      • Step 37: toolshed.g2.bx.psu.edu/repos/iuc/metabat2/metabat2/2.17+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • mkdir bins && metabat2 --inFile '/tmp/tmprt3848zx/files/f/5/6/dataset_f56caa53-2f68-4615-b2bf-37274711c329.dat' --outFile 'bins/bin' --abdFile '/tmp/tmprt3848zx/files/3/1/9/dataset_3198f2d6-2daa-413f-bf6a-3a1554beb9b3.dat' --minContig 1500 --maxP 95 --minS 60 --maxEdges 200 --pTNF 0  --minCV 1.0 --minCVSum 1.0 --seed 0 --minClsSize 200000 --numThreads ${GALAXY_SLOTS:-4}  --unbinned > process_log.txt && mv process_log.txt '/tmp/tmprt3848zx/job_working_directory/000/20/outputs/dataset_ede1a3a6-a682-4d2c-a77a-07e30a56298a.dat'

            Exit Code:

            • 0

            Standard Error:

            • [Warning!] Negative coverage depth is not allowed for the contig k141_0, column 1: -4.29775e+08
              [Warning!] Negative coverage depth is not allowed for the contig k141_52, column 1: -2.76243e+08
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              advanced {"base_coverage_depth_cond": {"__current_case__": 1, "abdFile": {"values": [{"id": 27, "src": "dce"}]}, "base_coverage_depth": "yes", "cvExt": null}, "maxEdges": "200", "maxP": "95", "minCV": "1.0", "minCVSum": "1.0", "minContig": "1500", "minS": "60", "noAdd": false, "pTNF": "0", "seed": "0"}
              advanced abdFile
              chromInfo "/tmp/tmprt3848zx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              inFile __identifier__
              out {"extra_outputs": ["lowDepth", "tooShort", "unbinned", "log"], "minClsSize": "200000", "onlyLabel": false, "saveCls": false}
      • Step 38: toolshed.g2.bx.psu.edu/repos/iuc/concoct_merge_cut_up_clustering/concoct_merge_cut_up_clustering/1.1.0+galaxy2:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • merge_cutup_clustering.py '/tmp/tmprt3848zx/files/1/b/7/dataset_1b7ea032-d4d3-4b32-a09c-a9aa6ebe12b6.dat' > '/tmp/tmprt3848zx/job_working_directory/000/21/outputs/dataset_677e6137-b807-4266-8f7b-2aa00c797357.dat'

            Exit Code:

            • 0

            Standard Error:

            • /usr/local/bin/merge_cutup_clustering.py:17: SyntaxWarning: invalid escape sequence '\.'
                CONTIG_PART_EXPR = re.compile("(.*)\.concoct_part_([0-9]*)")
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              chromInfo "/tmp/tmprt3848zx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              cutup_clustering_result __identifier__
              dbkey "?"
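The `SyntaxWarning` in this step's log comes from `merge_cutup_clustering.py` compiling its pattern from a non-raw string; a raw string (`r"..."`) expresses the same regex without the warning. A sketch of the merge itself, collapsing chunk-level cluster assignments back to whole contigs (majority vote over parts is an assumption here, not necessarily the script's exact tie-breaking):

```python
import re
from collections import Counter, defaultdict

# Raw string avoids the "invalid escape sequence '\.'" SyntaxWarning.
CONTIG_PART_EXPR = re.compile(r"(.*)\.concoct_part_([0-9]*)")

def merge_cutup(assignments):
    """Strip the .concoct_part_N suffix and keep one cluster per
    contig, chosen by majority vote over that contig's chunks.
    assignments: iterable of (part_id, cluster_id) pairs."""
    votes = defaultdict(Counter)
    for part_id, cluster in assignments:
        m = CONTIG_PART_EXPR.match(part_id)
        contig = m.group(1) if m else part_id
        votes[contig][cluster] += 1
    return {c: v.most_common(1)[0][0] for c, v in votes.items()}
```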
      • Step 39: toolshed.g2.bx.psu.edu/repos/iuc/fasta_to_contig2bin/Fasta_to_Contig2Bin/1.1.7+galaxy1:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • mkdir "inputs" && ln -s '/tmp/tmprt3848zx/files/1/6/4/dataset_164be511-eaa9-4351-b89f-feb62eacc7f5.dat' 'inputs/001.fasta' && ln -s '/tmp/tmprt3848zx/files/f/5/4/dataset_f543dbfc-9c42-4276-9405-72ced6811ba8.dat' 'inputs/002.fasta' && Fasta_to_Contig2Bin.sh --extension fasta --input_folder 'inputs' > '/tmp/tmprt3848zx/job_working_directory/000/26/outputs/dataset_5da7aadc-0962-455f-bfb4-5908a4e8a0f0.dat'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              chromInfo "/tmp/tmprt3848zx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              inputs {"values": [{"id": 31, "src": "dce"}]}
      • Step 40: toolshed.g2.bx.psu.edu/repos/iuc/fasta_to_contig2bin/Fasta_to_Contig2Bin/1.1.7+galaxy1:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • mkdir "inputs" && ln -s '/tmp/tmprt3848zx/files/f/8/6/dataset_f86d9845-cf35-4942-ae22-95a6f068a815.dat' 'inputs/1.fasta' && Fasta_to_Contig2Bin.sh --extension fasta --input_folder 'inputs' > '/tmp/tmprt3848zx/job_working_directory/000/23/outputs/dataset_302b8401-c545-40d3-aceb-1262496fd528.dat'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              chromInfo "/tmp/tmprt3848zx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              inputs {"values": [{"id": 39, "src": "dce"}]}
      • Step 5: Environment for the built-in model (SemiBin):

        • step_state: scheduled
      • Step 41: toolshed.g2.bx.psu.edu/repos/iuc/concoct_extract_fasta_bins/concoct_extract_fasta_bins/1.1.0+galaxy2:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • ln -s '/tmp/tmprt3848zx/files/f/5/6/dataset_f56caa53-2f68-4615-b2bf-37274711c329.dat' 'contigs.fa' &&  mkdir outdir && extract_fasta_bins.py 'contigs.fa' '/tmp/tmprt3848zx/files/6/7/7/dataset_677e6137-b807-4266-8f7b-2aa00c797357.dat' --output_path 'outdir'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              chromInfo "/tmp/tmprt3848zx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              cluster_file __identifier__
              dbkey "?"
              fasta_file __identifier__
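`extract_fasta_bins.py` turns the merged clustering back into sequence bins: it groups contigs by cluster ID and writes one FASTA per cluster into the output directory. A sketch of the grouping step, using in-memory dicts instead of files (a hypothetical `group_bins` helper, not the script itself):

```python
from collections import defaultdict

def group_bins(contigs, clustering):
    """Group contig records by cluster id, as a precursor to writing
    one <cluster_id>.fa per group.
    contigs: {name: sequence}; clustering: {name: cluster_id}."""
    bins = defaultdict(dict)
    for name, cluster in clustering.items():
        if name in contigs:
            bins[cluster][name] = contigs[name]
    return dict(bins)
```

Because these per-cluster files are named by bare cluster indices (`0.fasta`, `1.fasta`, ...), the bin IDs differ between runs, which is the test-stability problem described at the top of this PR.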
      • Step 42: toolshed.g2.bx.psu.edu/repos/iuc/fasta_to_contig2bin/Fasta_to_Contig2Bin/1.1.7+galaxy1:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • mkdir "inputs" && ln -s '/tmp/tmprt3848zx/files/3/1/4/dataset_314bdaf1-b95e-40d0-8ff7-d09cc1ab0793.dat' 'inputs/0.fasta' && ln -s '/tmp/tmprt3848zx/files/2/1/d/dataset_21df73a8-fc29-40e2-8ff4-da12c5d13ea4.dat' 'inputs/1.fasta' && ln -s '/tmp/tmprt3848zx/files/4/c/a/dataset_4caf8293-abf4-4dea-826e-3456cd193f14.dat' 'inputs/10.fasta' && ln -s '/tmp/tmprt3848zx/files/4/9/a/dataset_49a3d93e-a661-4853-ac27-158e5def016b.dat' 'inputs/11.fasta' && ln -s '/tmp/tmprt3848zx/files/c/4/4/dataset_c448867b-4378-414d-a567-c3bc9ce92ec8.dat' 'inputs/2.fasta' && ln -s '/tmp/tmprt3848zx/files/4/4/8/dataset_448f29ba-fac2-4c91-8b77-b0fa7f667fe8.dat' 'inputs/3.fasta' && ln -s '/tmp/tmprt3848zx/files/b/d/e/dataset_bdea65e3-9a7f-41a2-8e67-76c08002b02b.dat' 'inputs/4.fasta' && ln -s '/tmp/tmprt3848zx/files/e/6/c/dataset_e6c2a8f5-e49e-4819-b87a-637d5f3b6601.dat' 'inputs/5.fasta' && ln -s '/tmp/tmprt3848zx/files/b/6/9/dataset_b69606cf-206b-4ca3-999d-d83684ea9167.dat' 'inputs/6.fasta' && ln -s '/tmp/tmprt3848zx/files/f/6/a/dataset_f6aa9197-ba99-469e-997c-e37c974070af.dat' 'inputs/7.fasta' && ln -s '/tmp/tmprt3848zx/files/2/c/e/dataset_2cebedbf-9505-4fc3-83eb-48c44d1d1ad8.dat' 'inputs/8.fasta' && ln -s '/tmp/tmprt3848zx/files/8/5/8/dataset_8582530d-fef6-4058-9948-b1e6dfb59a59.dat' 'inputs/9.fasta' && Fasta_to_Contig2Bin.sh --extension fasta --input_folder 'inputs' > '/tmp/tmprt3848zx/job_working_directory/000/24/outputs/dataset_1a7ce1c7-d94e-4625-a07e-f7b220055362.dat'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              chromInfo "/tmp/tmprt3848zx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              inputs {"values": [{"id": 45, "src": "dce"}]}
      • Step 43: __BUILD_LIST__:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              datasets [{"__index__": 0, "id_cond": {"__current_case__": 0, "id_select": "idx"}, "input": {"values": [{"id": 57, "src": "hda"}]}}, {"__index__": 1, "id_cond": {"__current_case__": 0, "id_select": "idx"}, "input": {"values": [{"id": 44, "src": "hda"}]}}, {"__index__": 2, "id_cond": {"__current_case__": 0, "id_select": "idx"}, "input": {"values": [{"id": 64, "src": "hda"}]}}, {"__index__": 3, "id_cond": {"__current_case__": 0, "id_select": "idx"}, "input": {"values": [{"id": 59, "src": "hda"}]}}]
      • Step 44: toolshed.g2.bx.psu.edu/repos/iuc/binette/binette/1.0.5+galaxy1:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • mkdir -p 'input' 'output' &&  ln -s '/tmp/tmprt3848zx/files/1/a/7/dataset_1a7ce1c7-d94e-4625-a07e-f7b220055362.dat' 'input/bin_table_0.tsv' && ln -s '/tmp/tmprt3848zx/files/3/0/2/dataset_302b8401-c545-40d3-aceb-1262496fd528.dat' 'input/bin_table_1.tsv' && ln -s '/tmp/tmprt3848zx/files/5/d/a/dataset_5da7aadc-0962-455f-bfb4-5908a4e8a0f0.dat' 'input/bin_table_2.tsv' && ln -s '/tmp/tmprt3848zx/files/3/3/3/dataset_33341154-822d-4123-8d83-f89e1bf8fb3a.dat' 'input/bin_table_3.tsv' &&  ln -s '/tmp/tmprt3848zx/files/f/5/6/dataset_f56caa53-2f68-4615-b2bf-37274711c329.dat' 'input_contigs.fasta' &&   binette -b input/*.tsv -c 'input_contigs.fasta' -m 1 -t "${GALAXY_SLOTS:-1}" -o 'output/' -w 2 --checkm2_db '/cvmfs/data.galaxyproject.org/byhand/checkm2/1.0.2/uniref100.KO.1.dmnd'

            Exit Code:

            • 0

            Standard Error:

            •   100%|██████████| 59/59 [00:00<00:00, 6901.41it/s]
              100%|██████████| 59/59 [00:00<00:00, 480512.50contig/s]
              100%|██████████| 59/59 [00:00<00:00, 642763.47contig/s]
              100%|██████████| 16/16 [00:05<00:00,  2.86bin/s]
              100%|██████████| 17/17 [00:05<00:00,  3.32bin/s]
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              chromInfo "/tmp/tmprt3848zx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              contamination_weight "2"
              database_type {"__current_case__": 1, "datamanager": "1.0.2", "is_select": "cached"}
              dbkey "?"
              min_completeness "1"
              proteins None
      • Step 45: Pool Bins from all samples:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              input {"values": [{"id": 47, "src": "hdca"}]}
              join_identifier "_"
      • Step 46: toolshed.g2.bx.psu.edu/repos/iuc/checkm2/checkm2/1.0.2+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • mkdir input_dir && ln -s '/tmp/tmprt3848zx/files/4/c/f/dataset_4cfbc079-cb1c-45ca-a7ab-897e176ae1b8.dat' 'input_dir/50contig_reads_bin_1.dat' && ln -s '/tmp/tmprt3848zx/files/6/5/b/dataset_65b0aee2-328e-4732-9548-cf1b70171ccd.dat' 'input_dir/50contig_reads_bin_11.dat' && ln -s '/tmp/tmprt3848zx/files/a/e/d/dataset_aed6f725-4bfb-4b47-b195-8508696a7273.dat' 'input_dir/50contig_reads_bin_55.dat' && ln -s '/tmp/tmprt3848zx/files/3/a/a/dataset_3aa28516-c26d-4559-a20e-4d3fe42881b2.dat' 'input_dir/50contig_reads_bin_6.dat' && checkm2 predict --input input_dir   -x .dat --threads "${GALAXY_SLOTS:-1}" --database_path '/cvmfs/data.galaxyproject.org/byhand/checkm2/1.0.2/uniref100.KO.1.dmnd' --output-directory output

            Exit Code:

            • 0

            Standard Error:

            • [04/10/2025 09:08:48 AM] INFO: Running CheckM2 version 1.0.2
              [04/10/2025 09:08:48 AM] INFO: Custom database path provided for predict run. Checking database at /cvmfs/data.galaxyproject.org/byhand/checkm2/1.0.2/uniref100.KO.1.dmnd...
              [04/10/2025 09:08:51 AM] INFO: Running quality prediction workflow with 1 threads.
              [04/10/2025 09:08:52 AM] INFO: Calling genes in 4 bins with 1 threads:
              [04/10/2025 09:08:53 AM] INFO: Calculating metadata for 4 bins with 1 threads:
              [04/10/2025 09:08:54 AM] INFO: Annotating input genomes with DIAMOND using 1 threads
              [04/10/2025 09:11:41 AM] INFO: Processing DIAMOND output
              [04/10/2025 09:11:41 AM] INFO: Predicting completeness and contamination using ML models.
              [04/10/2025 09:11:46 AM] INFO: Parsing all results and constructing final output table.
              [04/10/2025 09:11:46 AM] INFO: CheckM2 finished successfully.
              

            Standard Output:

            •     Finished processing 1 of 4 (25.00%) bins.
                  Finished processing 2 of 4 (50.00%) bins.
                  Finished processing 3 of 4 (75.00%) bins.
                  Finished processing 4 of 4 (100.00%) bins.
              
                  Finished processing 1 of 4 (25.00%) bin metadata.
                  Finished processing 2 of 4 (50.00%) bin metadata.
                  Finished processing 3 of 4 (75.00%) bin metadata.
                  Finished processing 4 of 4 (100.00%) bin metadata.
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              chromInfo "/tmp/tmprt3848zx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              database "1.0.2"
              dbkey "?"
              genes false
              model ""
              ttable None
      • Step 47: toolshed.g2.bx.psu.edu/repos/bgruening/text_processing/tp_awk_tool/9.5+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • env -i $(which awk) --sandbox -v FS='	' -v OFS='	' --re-interval -f '/tmp/tmprt3848zx/job_working_directory/000/31/configs/tmpnuov_1au' '/tmp/tmprt3848zx/files/a/b/5/dataset_ab51551c-8980-4f68-ab5a-4bcc2c784137.dat' > '/tmp/tmprt3848zx/job_working_directory/000/31/outputs/dataset_1a3ce119-f7e9-43e9-bc99-3ba438a89007.dat'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              chromInfo "/tmp/tmprt3848zx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              code "BEGIN {OFS=\"\\t\"; print \"genome\\tcompleteness\\tcontamination\"} \nNR > 1 {\n if ($1 !~ /\\.fasta$/) \n $1 = $1 \".fasta\"\n print $1, $2, $3\n}"
              dbkey "?"
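
            For readability, the JSON-escaped awk program in the `code` parameter above decodes to the script below. It reshapes a CheckM2 quality report into the three-column genomeInfo table dRep expects (genome, completeness, contamination), appending `.fasta` to genome names that lack it. The sample input rows are made up for illustration; only the awk program itself comes from the job parameters.

            ```shell
            # Decoded awk program from the tp_awk_tool step (mechanical unescape of the
            # "code" job parameter). Input rows here are hypothetical example data.
            printf 'Name\tCompleteness\tContamination\nbin_1\t97.5\t1.2\nbin_2.fasta\t88.0\t3.4\n' |
            awk -F'\t' '
            BEGIN {OFS="\t"; print "genome\tcompleteness\tcontamination"}
            NR > 1 {
                # dRep matches genomes by file name, so ensure a .fasta suffix
                if ($1 !~ /\.fasta$/)
                    $1 = $1 ".fasta"
                print $1, $2, $3
            }'
            ```

            On the sample rows this prints a header plus `bin_1.fasta` and `bin_2.fasta` with their completeness/contamination values, tab-separated.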
      • Step 48: toolshed.g2.bx.psu.edu/repos/iuc/drep_dereplicate/drep_dereplicate/3.5.0+galaxy1:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • ln -s '/tmp/tmprt3848zx/files/4/c/f/dataset_4cfbc079-cb1c-45ca-a7ab-897e176ae1b8.dat' '50contig_reads_bin_1.fasta' &&  ln -s '/tmp/tmprt3848zx/files/6/5/b/dataset_65b0aee2-328e-4732-9548-cf1b70171ccd.dat' '50contig_reads_bin_11.fasta' &&  ln -s '/tmp/tmprt3848zx/files/a/e/d/dataset_aed6f725-4bfb-4b47-b195-8508696a7273.dat' '50contig_reads_bin_55.fasta' &&  ln -s '/tmp/tmprt3848zx/files/3/a/a/dataset_3aa28516-c26d-4559-a20e-4d3fe42881b2.dat' '50contig_reads_bin_6.fasta' &&   dRep dereplicate outdir  -g '50contig_reads_bin_1.fasta' '50contig_reads_bin_11.fasta' '50contig_reads_bin_55.fasta' '50contig_reads_bin_6.fasta'   --length 100 --completeness 1 --contamination 25   --genomeInfo '/tmp/tmprt3848zx/files/0/1/b/dataset_01b21884-f9f9-4dcf-a383-bd609688ef98.dat'    --MASH_sketch '1000' --P_ani 0.9  --primary_chunksize 5000   --S_algorithm 'ANImf'  --n_PRESET 'normal'   --coverage_method 'larger'  --S_ani 0.95 --cov_thresh 0.1  --clusterAlg 'average'    --completeness_weight 1.0 --contamination_weight 5.0 --strain_heterogeneity_weight 1.0 --N50_weight 0.5 --size_weight 0.0 --centrality_weight 1.0    --warn_dist 0.25 --warn_sim 0.98 --warn_aln 0.25  --debug || (rc=$?; ls -ltr `find outdir -type f`; cat outdir/data/checkM/checkM_outdir/checkm.log; cat outdir/log/logger.log; exit $rc)

            Exit Code:

            • 0

            Standard Error:

            • ***************************************************
                  ..:: dRep dereplicate Step 1. Filter ::..
              ***************************************************
                  
              Will filter the genome list
              4 genomes were input to dRep
              Calculating genome info of genomes
              100.00% of genomes passed length filtering
              100.00% of genomes passed checkM filtering
              ***************************************************
                  ..:: dRep dereplicate Step 2. Cluster ::..
              ***************************************************
                  
              Running primary clustering
              Running pair-wise MASH clustering
              4 primary clusters made
              Running secondary clustering
              Running 4 ANImf comparisons- should take ~ 0.3 min
              Step 4. Return output
              ***************************************************
                  ..:: dRep dereplicate Step 3. Choose ::..
              ***************************************************
                  
              Loading work directory
              ***************************************************
                  ..:: dRep dereplicate Step 4. Evaluate ::..
              ***************************************************
                  
              will produce Widb (winner information db)
              Winner database saved to /tmp/tmprt3848zx/job_working_directory/000/33/working/outdirdata_tables/Widb.csv
              ***************************************************
                  ..:: dRep dereplicate Step 5. Analyze ::..
              ***************************************************
                  
              making plots 1, 2, 3, 4, 5, 6
              Plotting primary dendrogram
              Plotting secondary dendrograms
              Plotting MDS plot
              Plotting scatterplots
              Plotting bin scorring plot
              Plotting winning genomes plot...
              
              $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
              
                  ..:: dRep dereplicate finished ::..
              
              Dereplicated genomes................. /tmp/tmprt3848zx/job_working_directory/000/33/working/outdir/dereplicated_genomes/
              Dereplicated genomes information..... /tmp/tmprt3848zx/job_working_directory/000/33/working/outdir/data_tables/Widb.csv
              Figures.............................. /tmp/tmprt3848zx/job_working_directory/000/33/working/outdir/figures/
              Warnings............................. /tmp/tmprt3848zx/job_working_directory/000/33/working/outdir/log/warnings.txt
              
              $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
                  
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              chromInfo "/tmp/tmprt3848zx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              comp_clust {"clusterAlg": "average", "run_tertiary_clustering": false, "steps": {"MASH_sketch": "1000", "P_ani": "0.9", "S_ani": "0.95", "__current_case__": 0, "clustering": {"S_algorithm": "ANImf", "__current_case__": 1, "coverage_method": "larger", "n_PRESET": "normal"}, "cov_thresh": "0.1", "multiround_primary_clustering": false, "primary_chunksize": "5000", "select": "default"}}
              dbkey "?"
              filter {"completeness": "1", "contamination": "25", "length": "100"}
              quality {"__current_case__": 1, "genomeInfo": {"values": [{"id": 84, "src": "hda"}]}, "source": "genomeInfo"}
              scoring {"N50_weight": "0.5", "centrality_weight": "1.0", "completeness_weight": "1.0", "contamination_weight": "5.0", "extra_weight_table": null, "size_weight": "0.0", "strain_heterogeneity_weight": "1.0"}
              select_outputs ["log", "warnings", "Primary_clustering_dendrogram", "Clustering_scatterplots", "Widb", "Chdb"]
              warning {"warn_aln": "0.25", "warn_dist": "0.25", "warn_sim": "0.98"}
      • Step 49: toolshed.g2.bx.psu.edu/repos/iuc/gtdbtk_classify_wf/gtdbtk_classify_wf/2.4.0+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is skipped

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              advanced {"force": false, "min_af": "0.65", "min_perc_aa": "10", "output_process_log": false}
              chromInfo "/tmp/tmprt3848zx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              gtdbtk_db "full_database_release_220_downloaded_2024-10-19"
      • Step 50: toolshed.g2.bx.psu.edu/repos/iuc/checkm2/checkm2/1.0.2+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • mkdir input_dir && ln -s '/tmp/tmprt3848zx/files/5/b/7/dataset_5b725d9e-0214-472e-ac90-3302b11af51c.dat' 'input_dir/50contig_reads_bin_1.fasta.dat' && ln -s '/tmp/tmprt3848zx/files/8/0/9/dataset_809ece5c-f028-4d72-913f-60ccd0f9b71a.dat' 'input_dir/50contig_reads_bin_11.fasta.dat' && ln -s '/tmp/tmprt3848zx/files/f/0/c/dataset_f0c33f60-365a-44bf-b5ad-f1584230167d.dat' 'input_dir/50contig_reads_bin_55.fasta.dat' && ln -s '/tmp/tmprt3848zx/files/f/b/e/dataset_fbe5c0d6-af8a-4e6d-b0ca-46c3391bda82.dat' 'input_dir/50contig_reads_bin_6.fasta.dat' && checkm2 predict --input input_dir   -x .dat --threads "${GALAXY_SLOTS:-1}" --database_path '/cvmfs/data.galaxyproject.org/byhand/checkm2/1.0.2/uniref100.KO.1.dmnd' --output-directory output

            Exit Code:

            • 0

            Standard Error:

            • [04/10/2025 09:13:19 AM] INFO: Running CheckM2 version 1.0.2
              [04/10/2025 09:13:19 AM] INFO: Custom database path provided for predict run. Checking database at /cvmfs/data.galaxyproject.org/byhand/checkm2/1.0.2/uniref100.KO.1.dmnd...
              [04/10/2025 09:13:25 AM] INFO: Running quality prediction workflow with 1 threads.
              [04/10/2025 09:13:26 AM] INFO: Calling genes in 4 bins with 1 threads:
              [04/10/2025 09:13:28 AM] INFO: Calculating metadata for 4 bins with 1 threads:
              [04/10/2025 09:13:28 AM] INFO: Annotating input genomes with DIAMOND using 1 threads
              [04/10/2025 09:16:14 AM] INFO: Processing DIAMOND output
              [04/10/2025 09:16:14 AM] INFO: Predicting completeness and contamination using ML models.
              [04/10/2025 09:16:19 AM] INFO: Parsing all results and constructing final output table.
              [04/10/2025 09:16:19 AM] INFO: CheckM2 finished successfully.
              

            Standard Output:

            •     Finished processing 1 of 4 (25.00%) bins.
                  Finished processing 2 of 4 (50.00%) bins.
                  Finished processing 3 of 4 (75.00%) bins.
                  Finished processing 4 of 4 (100.00%) bins.
              
                  Finished processing 1 of 4 (25.00%) bin metadata.
                  Finished processing 2 of 4 (50.00%) bin metadata.
                  Finished processing 3 of 4 (75.00%) bin metadata.
                  Finished processing 4 of 4 (100.00%) bin metadata.
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              chromInfo "/tmp/tmprt3848zx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              database "1.0.2"
              dbkey "?"
              genes false
              model ""
              ttable None
      • Step 6: Trimmed grouped paired reads:

        • step_state: scheduled
      • Step 51: toolshed.g2.bx.psu.edu/repos/iuc/checkm_lineage_wf/checkm_lineage_wf/1.2.3+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • mkdir 'bins' && ln -s '/tmp/tmprt3848zx/files/5/b/7/dataset_5b725d9e-0214-472e-ac90-3302b11af51c.dat' 'bins/50contig_reads_bin_1.fasta.fasta' && ln -s '/tmp/tmprt3848zx/files/8/0/9/dataset_809ece5c-f028-4d72-913f-60ccd0f9b71a.dat' 'bins/50contig_reads_bin_11.fasta.fasta' && ln -s '/tmp/tmprt3848zx/files/f/0/c/dataset_f0c33f60-365a-44bf-b5ad-f1584230167d.dat' 'bins/50contig_reads_bin_55.fasta.fasta' && ln -s '/tmp/tmprt3848zx/files/f/b/e/dataset_fbe5c0d6-af8a-4e6d-b0ca-46c3391bda82.dat' 'bins/50contig_reads_bin_6.fasta.fasta' &&   checkm lineage_wf 'bins' 'output'     --unique '10' --multi '10'      --aai_strain 0.9  --e_value 1e-10 --length 0.7 --file '/tmp/tmprt3848zx/job_working_directory/000/36/outputs/dataset_182937f4-c241-4988-bec9-8fb0fcac2d91.dat' --tab_table --extension 'fasta' --threads ${GALAXY_SLOTS:-1} --pplacer_threads ${GALAXY_SLOTS:-1}

            Exit Code:

            • 0

            Standard Error:

            •     Finished processing 0 of 4 (0.00%) bins.
                  Finished processing 1 of 4 (25.00%) bins.
                  Finished processing 2 of 4 (50.00%) bins.
                  Finished processing 3 of 4 (75.00%) bins.
                  Finished processing 4 of 4 (100.00%) bins.
                  Finished processing 0 of 4 (0.00%) bins.
                  Finished processing 1 of 4 (25.00%) bins.
                  Finished processing 2 of 4 (50.00%) bins.
                  Finished processing 3 of 4 (75.00%) bins.
                  Finished processing 4 of 4 (100.00%) bins.
                  Finished parsing hits for 1 of 4 (25.00%) bins.
                  Finished parsing hits for 2 of 4 (50.00%) bins.
                  Finished parsing hits for 3 of 4 (75.00%) bins.
                  Finished parsing hits for 4 of 4 (100.00%) bins.
                  Finished extracting 0 of 43 (0.00%) HMMs.
                  Finished extracting 1 of 43 (2.33%) HMMs.
                  Finished extracting 2 of 43 (4.65%) HMMs.
                  Finished extracting 3 of 43 (6.98%) HMMs.
                  Finished extracting 4 of 43 (9.30%) HMMs.
                  Finished extracting 5 of 43 (11.63%) HMMs.
                  Finished extracting 6 of 43 (13.95%) HMMs.
                  Finished extracting 7 of 43 (16.28%) HMMs.
                  Finished extracting 8 of 43 (18.60%) HMMs.
                  Finished extracting 9 of 43 (20.93%) HMMs.
                  Finished extracting 10 of 43 (23.26%) HMMs.
                  Finished extracting 11 of 43 (25.58%) HMMs.
                  Finished extracting 12 of 43 (27.91%) HMMs.
                  Finished extracting 13 of 43 (30.23%) HMMs.
                  Finished extracting 14 of 43 (32.56%) HMMs.
                  Finished extracting 15 of 43 (34.88%) HMMs.
                  Finished extracting 16 of 43 (37.21%) HMMs.
                  Finished extracting 17 of 43 (39.53%) HMMs.
                  Finished extracting 18 of 43 (41.86%) HMMs.
                  Finished extracting 19 of 43 (44.19%) HMMs.
                  Finished extracting 20 of 43 (46.51%) HMMs.
                  Finished extracting 21 of 43 (48.84%) HMMs.
                  Finished extracting 22 of 43 (51.16%) HMMs.
                  Finished extracting 23 of 43 (53.49%) HMMs.
                  Finished extracting 24 of 43 (55.81%) HMMs.
                  Finished extracting 25 of 43 (58.14%) HMMs.
                  Finished extracting 26 of 43 (60.47%) HMMs.
                  Finished extracting 27 of 43 (62.79%) HMMs.
                  Finished extracting 28 of 43 (65.12%) HMMs.
                  Finished extracting 29 of 43 (67.44%) HMMs.
                  Finished extracting 30 of 43 (69.77%) HMMs.
                  Finished extracting 31 of 43 (72.09%) HMMs.
                  Finished extracting 32 of 43 (74.42%) HMMs.
                  Finished extracting 33 of 43 (76.74%) HMMs.
                  Finished extracting 34 of 43 (79.07%) HMMs.
                  Finished extracting 35 of 43 (81.40%) HMMs.
                  Finished extracting 36 of 43 (83.72%) HMMs.
                  Finished extracting 37 of 43 (86.05%) HMMs.
                  Finished extracting 38 of 43 (88.37%) HMMs.
                  Finished extracting 39 of 43 (90.70%) HMMs.
                  Finished extracting 40 of 43 (93.02%) HMMs.
                  Finished extracting 41 of 43 (95.35%) HMMs.
                  Finished extracting 42 of 43 (97.67%) HMMs.
                  Finished extracting 43 of 43 (100.00%) HMMs.
                  Finished aligning 0 of 43 (0.00%) marker genes.
                  Finished aligning 1 of 43 (2.33%) marker genes.
                  Finished aligning 2 of 43 (4.65%) marker genes.
                  Finished aligning 3 of 43 (6.98%) marker genes.
                  Finished aligning 4 of 43 (9.30%) marker genes.
                  Finished aligning 5 of 43 (11.63%) marker genes.
                  Finished aligning 6 of 43 (13.95%) marker genes.
                  Finished aligning 7 of 43 (16.28%) marker genes.
                  Finished aligning 8 of 43 (18.60%) marker genes.
                  Finished aligning 9 of 43 (20.93%) marker genes.
                  Finished aligning 10 of 43 (23.26%) marker genes.
                  Finished aligning 11 of 43 (25.58%) marker genes.
                  Finished aligning 12 of 43 (27.91%) marker genes.
                  Finished aligning 13 of 43 (30.23%) marker genes.
                  Finished aligning 14 of 43 (32.56%) marker genes.
                  Finished aligning 15 of 43 (34.88%) marker genes.
                  Finished aligning 16 of 43 (37.21%) marker genes.
                  Finished aligning 17 of 43 (39.53%) marker genes.
                  Finished aligning 18 of 43 (41.86%) marker genes.
                  Finished aligning 19 of 43 (44.19%) marker genes.
                  Finished aligning 20 of 43 (46.51%) marker genes.
                  Finished aligning 21 of 43 (48.84%) marker genes.
                  Finished aligning 22 of 43 (51.16%) marker genes.
                  Finished aligning 23 of 43 (53.49%) marker genes.
                  Finished aligning 24 of 43 (55.81%) marker genes.
                  Finished aligning 25 of 43 (58.14%) marker genes.
                  Finished aligning 26 of 43 (60.47%) marker genes.
                  Finished aligning 27 of 43 (62.79%) marker genes.
                  Finished aligning 28 of 43 (65.12%) marker genes.
                  Finished aligning 29 of 43 (67.44%) marker genes.
                  Finished aligning 30 of 43 (69.77%) marker genes.
                  Finished aligning 31 of 43 (72.09%) marker genes.
                  Finished aligning 32 of 43 (74.42%) marker genes.
                  Finished aligning 33 of 43 (76.74%) marker genes.
                  Finished aligning 34 of 43 (79.07%) marker genes.
                  Finished aligning 35 of 43 (81.40%) marker genes.
                  Finished aligning 36 of 43 (83.72%) marker genes.
                  Finished aligning 37 of 43 (86.05%) marker genes.
                  Finished aligning 38 of 43 (88.37%) marker genes.
                  Finished aligning 39 of 43 (90.70%) marker genes.
                  Finished aligning 40 of 43 (93.02%) marker genes.
                  Finished aligning 41 of 43 (95.35%) marker genes.
                  Finished aligning 42 of 43 (97.67%) marker genes.
                  Finished aligning 43 of 43 (100.00%) marker genes.
                  Finished parsing hits for 1 of 4 (25.00%) bins.
                  Finished parsing hits for 2 of 4 (50.00%) bins.
                  Finished parsing hits for 3 of 4 (75.00%) bins.
                  Finished parsing hits for 4 of 4 (100.00%) bins.
              
                  Finished processing 1 of 4 (25.00%) bins (current: 50contig_reads_bin_55.fasta).
                  Finished processing 2 of 4 (50.00%) bins (current: 50contig_reads_bin_6.fasta).
                  Finished processing 3 of 4 (75.00%) bins (current: 50contig_reads_bin_11.fasta).
                  Finished processing 4 of 4 (100.00%) bins (current: 50contig_reads_bin_1.fasta).
                  Finished processing 0 of 4 (0.00%) bins.
                  Finished processing 1 of 4 (25.00%) bins.
                  Finished processing 2 of 4 (50.00%) bins.
                  Finished processing 3 of 4 (75.00%) bins.
                  Finished processing 4 of 4 (100.00%) bins.
                  Finished parsing hits for 1 of 4 (25.00%) bins.
                  Finished parsing hits for 2 of 4 (50.00%) bins.
                  Finished parsing hits for 3 of 4 (75.00%) bins.
                  Finished parsing hits for 4 of 4 (100.00%) bins.
                  Finished processing 0 of 4 (0.00%) bins.
                  Finished processing 1 of 4 (25.00%) bins.
                  Finished processing 2 of 4 (50.00%) bins.
                  Finished processing 3 of 4 (75.00%) bins.
                  Finished processing 4 of 4 (100.00%) bins.
                  Finished processing 0 of 4 (0.00%) bins.
                  Finished processing 1 of 4 (25.00%) bins.
                  Finished processing 2 of 4 (50.00%) bins.
                  Finished processing 3 of 4 (75.00%) bins.
                  Finished processing 4 of 4 (100.00%) bins.
                  Finished parsing hits for 1 of 4 (25.00%) bins.
                  Finished parsing hits for 2 of 4 (50.00%) bins.
                  Finished parsing hits for 3 of 4 (75.00%) bins.
                  Finished parsing hits for 4 of 4 (100.00%) bins.
              

            Standard Output:

            • [2025-04-10 09:13:48] INFO: CheckM v1.2.3
              [2025-04-10 09:13:48] INFO: checkm lineage_wf bins output --unique 10 --multi 10 --aai_strain 0.9 --e_value 1e-10 --length 0.7 --file /tmp/tmprt3848zx/job_working_directory/000/36/outputs/dataset_182937f4-c241-4988-bec9-8fb0fcac2d91.dat --tab_table --extension fasta --threads 1 --pplacer_threads 1
              [2025-04-10 09:13:48] INFO: CheckM data: /usr/local/checkm_data
              [2025-04-10 09:13:48] INFO: [CheckM - tree] Placing bins in reference genome tree.
              [2025-04-10 09:13:48] INFO: Identifying marker genes in 4 bins with 1 threads:
              [2025-04-10 09:13:50] INFO: Saving HMM info to file.
              [2025-04-10 09:13:50] INFO: Calculating genome statistics for 4 bins with 1 threads:
              [2025-04-10 09:13:50] INFO: Extracting marker genes to align.
              [2025-04-10 09:13:50] INFO: Parsing HMM hits to marker genes:
              [2025-04-10 09:13:50] INFO: Extracting 43 HMMs with 1 threads:
              [2025-04-10 09:13:51] INFO: Aligning 43 marker genes with 1 threads:
              [2025-04-10 09:13:51] INFO: Reading marker alignment files.
              [2025-04-10 09:13:51] INFO: Concatenating alignments.
              [2025-04-10 09:13:51] INFO: Placing 4 bins into the genome tree with pplacer (be patient).
              [2025-04-10 09:14:28] INFO: { Current stage: 0:00:40.240 || Total: 0:00:40.240 }
              [2025-04-10 09:14:28] INFO: [CheckM - lineage_set] Inferring lineage-specific marker sets.
              [2025-04-10 09:14:28] INFO: Reading HMM info from file.
              [2025-04-10 09:14:28] INFO: Parsing HMM hits to marker genes:
              [2025-04-10 09:14:29] INFO: Determining marker sets for each genome bin.
              [2025-04-10 09:14:29] INFO: Marker set written to: output/lineage.ms
              [2025-04-10 09:14:29] INFO: { Current stage: 0:00:00.827 || Total: 0:00:41.067 }
              [2025-04-10 09:14:29] INFO: [CheckM - analyze] Identifying marker genes in bins.
              [2025-04-10 09:14:29] INFO: Identifying marker genes in 4 bins with 1 threads:
              [2025-04-10 09:14:37] INFO: Saving HMM info to file.
              [2025-04-10 09:14:37] INFO: { Current stage: 0:00:07.859 || Total: 0:00:48.927 }
              [2025-04-10 09:14:37] INFO: Parsing HMM hits to marker genes:
              [2025-04-10 09:14:37] INFO: Aligning marker genes with multiple hits in a single bin:
              [2025-04-10 09:14:37] INFO: { Current stage: 0:00:00.127 || Total: 0:00:49.055 }
              [2025-04-10 09:14:37] INFO: Calculating genome statistics for 4 bins with 1 threads:
              [2025-04-10 09:14:37] INFO: { Current stage: 0:00:00.027 || Total: 0:00:49.082 }
              [2025-04-10 09:14:37] INFO: [CheckM - qa] Tabulating genome statistics.
              [2025-04-10 09:14:37] INFO: Calculating AAI between multi-copy marker genes.
              [2025-04-10 09:14:37] INFO: Reading HMM info from file.
              [2025-04-10 09:14:37] INFO: Parsing HMM hits to marker genes:
              [2025-04-10 09:14:37] INFO: QA information written to: /tmp/tmprt3848zx/job_working_directory/000/36/outputs/dataset_182937f4-c241-4988-bec9-8fb0fcac2d91.dat
              [2025-04-10 09:14:37] INFO: { Current stage: 0:00:00.110 || Total: 0:00:49.193 }
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              bins {"__current_case__": 0, "bins_coll": {"values": [{"id": 53, "src": "hdca"}]}, "select": "collection"}
              chromInfo "/tmp/tmprt3848zx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              extra_outputs None
              lineage_set {"force_domain": false, "multi": "10", "no_refinement": false, "unique": "10"}
              qa {"aai_strain": "0.9", "e_value": "1e-10", "ignore_thresholds": false, "individual_markers": false, "length": "0.7", "skip_adj_correction": false, "skip_pseudogene_correction": false}
              tree_analyze {"ali": false, "genes": false, "nt": false, "reduced_tree": false}
      • Step 52: toolshed.g2.bx.psu.edu/repos/iuc/coverm_genome/coverm_genome/0.7.0+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • mkdir 'single/' && mkdir 'fw/' && mkdir 'rv/' && mkdir 'interl/' && mkdir 'ref/' && mkdir 'bam/' &&   ln -s '/tmp/tmprt3848zx/files/7/9/a/dataset_79a7ee8f-7135-4f1f-8b89-90780f1339fe.dat' 'fw/50contig_reads' && ln -s '/tmp/tmprt3848zx/files/a/b/6/dataset_ab64de63-9e95-40b1-b42e-88ebf6636371.dat' 'rv/50contig_reads' &&   echo "GENOME_FOR_READS mapped.mode.genome.genomic.source=history" && echo "GENOME_FOR_READS mapped.mode.genome.genomic.genome_fasta_files=/tmp/tmprt3848zx/files/5/b/7/dataset_5b725d9e-0214-472e-ac90-3302b11af51c.dat,/tmp/tmprt3848zx/files/8/0/9/dataset_809ece5c-f028-4d72-913f-60ccd0f9b71a.dat,/tmp/tmprt3848zx/files/f/0/c/dataset_f0c33f60-365a-44bf-b5ad-f1584230167d.dat,/tmp/tmprt3848zx/files/f/b/e/dataset_fbe5c0d6-af8a-4e6d-b0ca-46c3391bda82.dat" && ln -s '/tmp/tmprt3848zx/files/5/b/7/dataset_5b725d9e-0214-472e-ac90-3302b11af51c.dat' '50contig_reads_bin_1.fasta' && ln -s '/tmp/tmprt3848zx/files/8/0/9/dataset_809ece5c-f028-4d72-913f-60ccd0f9b71a.dat' '50contig_reads_bin_11.fasta' && ln -s '/tmp/tmprt3848zx/files/f/0/c/dataset_f0c33f60-365a-44bf-b5ad-f1584230167d.dat' '50contig_reads_bin_55.fasta' && ln -s '/tmp/tmprt3848zx/files/f/b/e/dataset_fbe5c0d6-af8a-4e6d-b0ca-46c3391bda82.dat' '50contig_reads_bin_6.fasta' &&   mkdir 'representative-fasta/' && coverm genome -1 'fw/50contig_reads' -2 'rv/50contig_reads'  --mapper 'minimap2-sr' --genome-fasta-files '50contig_reads_bin_1.fasta' '50contig_reads_bin_11.fasta' '50contig_reads_bin_55.fasta' '50contig_reads_bin_6.fasta'   --min-read-aligned-length 0 --min-read-percent-identity 0.0 --min-read-aligned-percent 0.0   --methods 'relative_abundance' --min-covered-fraction 10 --contig-end-exclusion 75 --trim-min 5 --trim-max 95    --output-format 'dense' --output-file '/tmp/tmprt3848zx/job_working_directory/000/37/outputs/dataset_4b61af85-2fd5-4880-b3da-c60367d51d8c.dat'  --threads ${GALAXY_SLOTS:-1}

            Exit Code:

            • 0

            Standard Error:

            • [2025-04-10T09:13:31Z INFO  bird_tool_utils::clap_utils] CoverM version 0.7.0
              [2025-04-10T09:13:31Z INFO  coverm] Writing output to file: /tmp/tmprt3848zx/job_working_directory/000/37/outputs/dataset_4b61af85-2fd5-4880-b3da-c60367d51d8c.dat
              [2025-04-10T09:13:31Z INFO  coverm] Using min-covered-fraction 10%
              [2025-04-10T09:13:31Z INFO  coverm] Using min-read-percent-identity 0%
              [2025-04-10T09:13:31Z INFO  coverm] Using min-read-aligned-percent 0%
              [2025-04-10T09:13:31Z INFO  bird_tool_utils::external_command_checker] Found minimap2 version 2.28-r1209 
              [2025-04-10T09:13:31Z INFO  bird_tool_utils::external_command_checker] Found samtools version 1.21 
              [2025-04-10T09:13:31Z INFO  coverm] Profiling 4 genomes
              [2025-04-10T09:13:31Z INFO  coverm] Generating concatenated reference FASTA file of 4 genomes ..
              [2025-04-10T09:13:31Z INFO  coverm] Not pre-generating minimap2 index
              [2025-04-10T09:13:31Z INFO  coverm] Using min-read-percent-identity 0%
              [2025-04-10T09:13:31Z INFO  coverm] Using min-read-aligned-percent 0%
              [2025-04-10T09:13:32Z INFO  coverm::genome] In sample '50contig_reads', found 18791 reads mapped out of 18924 total (99.30%)
              

            Standard Output:

            • GENOME_FOR_READS mapped.mode.genome.genomic.source=history
              GENOME_FOR_READS mapped.mode.genome.genomic.genome_fasta_files=/tmp/tmprt3848zx/files/5/b/7/dataset_5b725d9e-0214-472e-ac90-3302b11af51c.dat,/tmp/tmprt3848zx/files/8/0/9/dataset_809ece5c-f028-4d72-913f-60ccd0f9b71a.dat,/tmp/tmprt3848zx/files/f/0/c/dataset_f0c33f60-365a-44bf-b5ad-f1584230167d.dat,/tmp/tmprt3848zx/files/f/b/e/dataset_fbe5c0d6-af8a-4e6d-b0ca-46c3391bda82.dat
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              alignment {"exclude_supplementary": false, "min_read_aligned_length": "0", "min_read_aligned_percent": "0.0", "min_read_percent_identity": "0.0", "proper_pairs_only": {"__current_case__": 1, "proper_pairs_only": ""}}
              chromInfo "/tmp/tmprt3848zx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              cov {"contig_end_exclusion": "75", "methods": ["relative_abundance"], "min_covered_fraction": "10", "trim_max": "95", "trim_min": "5"}
              dbkey "?"
              derep {"checkm_tab_table": null, "dereplicate": {"__current_case__": 1, "dereplicate": ""}, "genome_info": null, "max_contamination": null, "min_completeness": null}
              exclude_genomes_from_deshard false
              mapped {"__current_case__": 1, "mapped": "not-mapped", "mapper": "minimap2-sr", "mode": {"__current_case__": 0, "genome": {"__current_case__": 1, "genomic": {"__current_case__": 0, "genome_fasta_files": {"values": [{"id": 53, "src": "hdca"}]}, "source": "history"}, "ref_or_genome": "genomic"}, "mode": "individual", "read_type": {"__current_case__": 2, "paired_reads": {"values": [{"id": 4, "src": "dce"}]}, "type": "paired_collection"}}}
              out {"dereplication_output_cluster_definition": false, "dereplication_output_representative_fasta_directory_copy": false, "no_zeros": false, "output_format": "dense"}
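As background to the CoverM run above (`--methods 'relative_abundance'`, with 18791 of 18924 reads mapped): relative abundance is, roughly, the per-genome read density (mapped reads normalised by genome length) scaled so the genomes together account for the overall mapped fraction. The sketch below is an illustrative approximation of that idea, not CoverM's exact computation; the per-bin read splits and genome lengths are invented.

```python
# Illustrative length-normalised relative abundance (not CoverM's exact math).
def relative_abundance(mapped_reads, genome_lengths, total_reads):
    # read density per genome: mapped reads / genome length
    density = {g: mapped_reads[g] / genome_lengths[g] for g in mapped_reads}
    total_density = sum(density.values())
    mapped_fraction = sum(mapped_reads.values()) / total_reads
    # scale so per-genome abundances sum to the overall mapped fraction (in %)
    return {g: 100.0 * mapped_fraction * d / total_density
            for g, d in density.items()}

# hypothetical split of the 18791 mapped reads across two bins
abund = relative_abundance(
    {"bin_1": 5000, "bin_6": 13791},
    {"bin_1": 1.0e6, "bin_6": 2.0e6},
    total_reads=18924,
)
```

CoverM also applies filters such as `--min-covered-fraction` before reporting, so genomes below the coverage threshold would be dropped rather than scaled.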
      • Step 53: toolshed.g2.bx.psu.edu/repos/iuc/quast/quast/5.3.0+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • echo 50contig_reads_bin_1_fasta &&    metaquast  --labels '50contig_reads_bin_1_fasta' -o 'outputdir'  --max-ref-num 0  --min-identity 90.0 --min-contig 500        --min-alignment 65 --ambiguity-usage 'one' --ambiguity-score 0.99   --local-mis-size 200   --contig-thresholds '0,1000'  --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500   --x-for-Nx 90  '/tmp/tmprt3848zx/files/5/b/7/dataset_5b725d9e-0214-472e-ac90-3302b11af51c.dat' --threads ${GALAXY_SLOTS:-1}  && if [[ -f "outputdir/report.tsv" ]]; then mkdir -p "outputdir/combined_reference/" && cp "outputdir/report.tsv" "outputdir/combined_reference/report.tsv"; fi && if [[ -f "outputdir/report.html" ]]; then mkdir -p "outputdir/combined_reference/" && cp outputdir/*.html "outputdir/combined_reference/"; fi && mkdir -p '/tmp/tmprt3848zx/job_working_directory/000/38/outputs/dataset_57758adf-d8de-476a-88f0-16563ce1b032_files' && cp outputdir/combined_reference/*.html '/tmp/tmprt3848zx/job_working_directory/000/38/outputs/dataset_57758adf-d8de-476a-88f0-16563ce1b032_files' && if [[ -d "outputdir/icarus_viewers" ]]; then cp -R outputdir/icarus_viewers 'outputdir/combined_reference/'; fi && if [[ -d "outputdir/combined_reference/icarus_viewers" ]]; then cp -R outputdir/combined_reference/icarus_viewers '/tmp/tmprt3848zx/job_working_directory/000/38/outputs/dataset_57758adf-d8de-476a-88f0-16563ce1b032_files'; fi && if [[ -d "outputdir/krona_charts/" ]]; then mkdir -p 'None' && cp outputdir/krona_charts/*.html 'None'; fi

            Exit Code:

            • 0

            Standard Output:

            • 50contig_reads_bin_1_fasta
              /usr/local/opt/quast-5.3.0/metaquast.py --labels 50contig_reads_bin_1_fasta -o outputdir --max-ref-num 0 --min-identity 90.0 --min-contig 500 --min-alignment 65 --ambiguity-usage one --ambiguity-score 0.99 --local-mis-size 200 --contig-thresholds 0,1000 --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500 --x-for-Nx 90 /tmp/tmprt3848zx/files/5/b/7/dataset_5b725d9e-0214-472e-ac90-3302b11af51c.dat --threads 1
              
              Version: 5.3.0
              
              System information:
                OS: Linux-6.8.0-1021-azure-x86_64-with-glibc2.36 (linux_64)
                Python version: 3.12.3
                CPUs number: 4
              
              Started: 2025-04-10 09:13:04
              
              Logging to /tmp/tmprt3848zx/job_working_directory/000/38/working/outputdir/metaquast.log
              WARNING: --ambiguity-usage was set to 'all' because not default --ambiguity-score was specified
              INFO	generated new fontManager
              INFO	generated new fontManager
              
              Contigs:
                Pre-processing...
                /tmp/tmprt3848zx/files/5/b/7/dataset_5b725d9e-0214-472e-ac90-3302b11af51c.dat ==> 50contig_reads_bin_1_fasta
              
              NOTICE: Maximum number of references (--max-ref-number) is set to 0, search in SILVA 16S rRNA database is disabled
              
              NOTICE: No references are provided, starting regular QUAST with MetaGeneMark gene finder
              /usr/local/opt/quast-5.3.0/quast.py --min-identity 90.0 --min-contig 500 --min-alignment 65 --ambiguity-usage one --ambiguity-score 0.99 --local-mis-size 200 --contig-thresholds 0,1000 --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500 --x-for-Nx 90 --threads 1 /tmp/tmprt3848zx/files/5/b/7/dataset_5b725d9e-0214-472e-ac90-3302b11af51c.dat -o /tmp/tmprt3848zx/job_working_directory/000/38/working/outputdir --labels 50contig_reads_bin_1_fasta
              
              Version: 5.3.0
              
              System information:
                OS: Linux-6.8.0-1021-azure-x86_64-with-glibc2.36 (linux_64)
                Python version: 3.12.3
                CPUs number: 4
              
              Started: 2025-04-10 09:13:05
              
              Logging to /tmp/tmprt3848zx/job_working_directory/000/38/working/outputdir/quast.log
              NOTICE: Output directory already exists and looks like a QUAST output dir. Existing results can be reused (e.g. previously generated alignments)!
              WARNING: --ambiguity-usage was set to 'all' because not default --ambiguity-score was specified
              
              CWD: /tmp/tmprt3848zx/job_working_directory/000/38/working
              Main parameters: 
                MODE: meta, threads: 1, min contig length: 500, min alignment length: 65, min alignment IDY: 90.0, \
                ambiguity: all, min local misassembly length: 200, min extensive misassembly length: 1000
              
              Contigs:
                Pre-processing...
                /tmp/tmprt3848zx/files/5/b/7/dataset_5b725d9e-0214-472e-ac90-3302b11af51c.dat ==> 50contig_reads_bin_1_fasta
              
              2025-04-10 09:13:05
              Running Basic statistics processor...
                Contig files: 
                  50contig_reads_bin_1_fasta
                Calculating N50 and L50...
                  50contig_reads_bin_1_fasta, N50 = 1357, L50 = 1, auN = 1357.0, Total length = 1357, GC % = 38.76, # N's per 100 kbp =  0.00
                Drawing Nx plot...
                  saved to /tmp/tmprt3848zx/job_working_directory/000/38/working/outputdir/basic_stats/Nx_plot.pdf
                Drawing cumulative plot...
                  saved to /tmp/tmprt3848zx/job_working_directory/000/38/working/outputdir/basic_stats/cumulative_plot.pdf
                Drawing GC content plot...
                  saved to /tmp/tmprt3848zx/job_working_directory/000/38/working/outputdir/basic_stats/GC_content_plot.pdf
                Drawing 50contig_reads_bin_1_fasta GC content plot...
                  saved to /tmp/tmprt3848zx/job_working_directory/000/38/working/outputdir/basic_stats/50contig_reads_bin_1_fasta_GC_content_plot.pdf
              Done.
              
              NOTICE: Genes are not predicted by default. Use --gene-finding or --glimmer option to enable it.
              
              2025-04-10 09:13:06
              Creating large visual summaries...
              This may take a while: press Ctrl-C to skip this step..
                1 of 2: Creating PDF with all tables and plots...
                2 of 2: Creating Icarus viewers...
              Done
              
              2025-04-10 09:13:06
              RESULTS:
                Text versions of total report are saved to /tmp/tmprt3848zx/job_working_directory/000/38/working/outputdir/report.txt, report.tsv, and report.tex
                Text versions of transposed total report are saved to /tmp/tmprt3848zx/job_working_directory/000/38/working/outputdir/transposed_report.txt, transposed_report.tsv, and transposed_report.tex
                HTML version (interactive tables and plots) is saved to /tmp/tmprt3848zx/job_working_directory/000/38/working/outputdir/report.html
                PDF version (tables and plots) is saved to /tmp/tmprt3848zx/job_working_directory/000/38/working/outputdir/report.pdf
                Icarus (contig browser) is saved to /tmp/tmprt3848zx/job_working_directory/000/38/working/outputdir/icarus.html
                Log is saved to /tmp/tmprt3848zx/job_working_directory/000/38/working/outputdir/quast.log
              
              Finished: 2025-04-10 09:13:06
              Elapsed time: 0:00:01.437248
              NOTICEs: 2; WARNINGs: 1; non-fatal ERRORs: 0
              
              Thank you for using QUAST!
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              advanced {"contig_thresholds": "0,1000", "extensive_mis_size": "1000", "fragmented_max_indent": null, "report_all_metrics": false, "scaffold_gap_max_size": "1000", "skip_unaligned_mis_contigs": true, "strict_NA": false, "unaligned_part_size": "500", "x_for_Nx": "90"}
              alignments {"ambiguity_score": "0.99", "ambiguity_usage": "one", "fragmented": false, "local_mis_size": "200", "min_alignment": "65", "upper_bound_assembly": false, "upper_bound_min_con": null, "use_all_alignments": false}
              assembly {"__current_case__": 1, "min_identity": "90.0", "ref": {"__current_case__": 0, "max_ref_num": "0", "origin": "silva"}, "reuse_combined_alignments": false, "type": "metagenome"}
              chromInfo "/tmp/tmprt3848zx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              genes {"conserved_genes_finding": false, "gene_finding": {"__current_case__": 0, "tool": "none"}, "rna_finding": false}
              large false
              min_contig "500"
              mode {"__current_case__": 0, "in": {"__current_case__": 1, "custom": "false", "inputs": {"values": [{"id": 93, "src": "dce"}]}}, "mode": "individual", "reads": {"__current_case__": 0, "reads_option": "disabled"}}
              output_files ["html", "tabular", "summary", "krona"]
              split_scaffolds false
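The "Calculating N50 and L50" lines in the QUAST log above can be reproduced by hand: N50 is the length of the shortest contig in the smallest set of longest contigs that together cover at least half the total assembly length, and L50 is the number of contigs in that set. A minimal sketch (the second example's contig lengths are hypothetical):

```python
def n50_l50(lengths):
    """Return (N50, L50): sort contigs longest-first, accumulate until
    reaching half the total length; N50 is the last length added,
    L50 the number of contigs used."""
    lengths = sorted(lengths, reverse=True)
    half = sum(lengths) / 2.0
    running = 0
    for count, length in enumerate(lengths, start=1):
        running += length
        if running >= half:
            return length, count

# a single-contig bin of 1357 bp gives N50 = 1357, L50 = 1,
# matching the 50contig_reads_bin_1_fasta report above
```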
          • Job 2:

            • Job state is ok

            Command Line:

            • echo 50contig_reads_bin_11_fasta &&    metaquast  --labels '50contig_reads_bin_11_fasta' -o 'outputdir'  --max-ref-num 0  --min-identity 90.0 --min-contig 500        --min-alignment 65 --ambiguity-usage 'one' --ambiguity-score 0.99   --local-mis-size 200   --contig-thresholds '0,1000'  --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500   --x-for-Nx 90  '/tmp/tmprt3848zx/files/8/0/9/dataset_809ece5c-f028-4d72-913f-60ccd0f9b71a.dat' --threads ${GALAXY_SLOTS:-1}  && if [[ -f "outputdir/report.tsv" ]]; then mkdir -p "outputdir/combined_reference/" && cp "outputdir/report.tsv" "outputdir/combined_reference/report.tsv"; fi && if [[ -f "outputdir/report.html" ]]; then mkdir -p "outputdir/combined_reference/" && cp outputdir/*.html "outputdir/combined_reference/"; fi && mkdir -p '/tmp/tmprt3848zx/job_working_directory/000/39/outputs/dataset_f92d3a11-d39b-4e6e-8753-d594a0290718_files' && cp outputdir/combined_reference/*.html '/tmp/tmprt3848zx/job_working_directory/000/39/outputs/dataset_f92d3a11-d39b-4e6e-8753-d594a0290718_files' && if [[ -d "outputdir/icarus_viewers" ]]; then cp -R outputdir/icarus_viewers 'outputdir/combined_reference/'; fi && if [[ -d "outputdir/combined_reference/icarus_viewers" ]]; then cp -R outputdir/combined_reference/icarus_viewers '/tmp/tmprt3848zx/job_working_directory/000/39/outputs/dataset_f92d3a11-d39b-4e6e-8753-d594a0290718_files'; fi && if [[ -d "outputdir/krona_charts/" ]]; then mkdir -p 'None' && cp outputdir/krona_charts/*.html 'None'; fi

            Exit Code:

            • 0

            Standard Output:

            • 50contig_reads_bin_11_fasta
              /usr/local/opt/quast-5.3.0/metaquast.py --labels 50contig_reads_bin_11_fasta -o outputdir --max-ref-num 0 --min-identity 90.0 --min-contig 500 --min-alignment 65 --ambiguity-usage one --ambiguity-score 0.99 --local-mis-size 200 --contig-thresholds 0,1000 --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500 --x-for-Nx 90 /tmp/tmprt3848zx/files/8/0/9/dataset_809ece5c-f028-4d72-913f-60ccd0f9b71a.dat --threads 1
              
              Version: 5.3.0
              
              System information:
                OS: Linux-6.8.0-1021-azure-x86_64-with-glibc2.36 (linux_64)
                Python version: 3.12.3
                CPUs number: 4
              
              Started: 2025-04-10 09:13:04
              
              Logging to /tmp/tmprt3848zx/job_working_directory/000/39/working/outputdir/metaquast.log
              WARNING: --ambiguity-usage was set to 'all' because not default --ambiguity-score was specified
              INFO	generated new fontManager
              INFO	generated new fontManager
              
              Contigs:
                Pre-processing...
                /tmp/tmprt3848zx/files/8/0/9/dataset_809ece5c-f028-4d72-913f-60ccd0f9b71a.dat ==> 50contig_reads_bin_11_fasta
              
              NOTICE: Maximum number of references (--max-ref-number) is set to 0, search in SILVA 16S rRNA database is disabled
              
              NOTICE: No references are provided, starting regular QUAST with MetaGeneMark gene finder
              /usr/local/opt/quast-5.3.0/quast.py --min-identity 90.0 --min-contig 500 --min-alignment 65 --ambiguity-usage one --ambiguity-score 0.99 --local-mis-size 200 --contig-thresholds 0,1000 --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500 --x-for-Nx 90 --threads 1 /tmp/tmprt3848zx/files/8/0/9/dataset_809ece5c-f028-4d72-913f-60ccd0f9b71a.dat -o /tmp/tmprt3848zx/job_working_directory/000/39/working/outputdir --labels 50contig_reads_bin_11_fasta
              
              Version: 5.3.0
              
              System information:
                OS: Linux-6.8.0-1021-azure-x86_64-with-glibc2.36 (linux_64)
                Python version: 3.12.3
                CPUs number: 4
              
              Started: 2025-04-10 09:13:05
              
              Logging to /tmp/tmprt3848zx/job_working_directory/000/39/working/outputdir/quast.log
              NOTICE: Output directory already exists and looks like a QUAST output dir. Existing results can be reused (e.g. previously generated alignments)!
              WARNING: --ambiguity-usage was set to 'all' because not default --ambiguity-score was specified
              
              CWD: /tmp/tmprt3848zx/job_working_directory/000/39/working
              Main parameters: 
                MODE: meta, threads: 1, min contig length: 500, min alignment length: 65, min alignment IDY: 90.0, \
                ambiguity: all, min local misassembly length: 200, min extensive misassembly length: 1000
              
              Contigs:
                Pre-processing...
                /tmp/tmprt3848zx/files/8/0/9/dataset_809ece5c-f028-4d72-913f-60ccd0f9b71a.dat ==> 50contig_reads_bin_11_fasta
              
              2025-04-10 09:13:05
              Running Basic statistics processor...
                Contig files: 
                  50contig_reads_bin_11_fasta
                Calculating N50 and L50...
                  50contig_reads_bin_11_fasta, N50 = 2275, L50 = 1, auN = 2275.0, Total length = 2275, GC % = 38.42, # N's per 100 kbp =  0.00
                Drawing Nx plot...
                  saved to /tmp/tmprt3848zx/job_working_directory/000/39/working/outputdir/basic_stats/Nx_plot.pdf
                Drawing cumulative plot...
                  saved to /tmp/tmprt3848zx/job_working_directory/000/39/working/outputdir/basic_stats/cumulative_plot.pdf
                Drawing GC content plot...
                  saved to /tmp/tmprt3848zx/job_working_directory/000/39/working/outputdir/basic_stats/GC_content_plot.pdf
                Drawing 50contig_reads_bin_11_fasta GC content plot...
                  saved to /tmp/tmprt3848zx/job_working_directory/000/39/working/outputdir/basic_stats/50contig_reads_bin_11_fasta_GC_content_plot.pdf
              Done.
              
              NOTICE: Genes are not predicted by default. Use --gene-finding or --glimmer option to enable it.
              
              2025-04-10 09:13:06
              Creating large visual summaries...
              This may take a while: press Ctrl-C to skip this step..
                1 of 2: Creating PDF with all tables and plots...
                2 of 2: Creating Icarus viewers...
              Done
              
              2025-04-10 09:13:06
              RESULTS:
                Text versions of total report are saved to /tmp/tmprt3848zx/job_working_directory/000/39/working/outputdir/report.txt, report.tsv, and report.tex
                Text versions of transposed total report are saved to /tmp/tmprt3848zx/job_working_directory/000/39/working/outputdir/transposed_report.txt, transposed_report.tsv, and transposed_report.tex
                HTML version (interactive tables and plots) is saved to /tmp/tmprt3848zx/job_working_directory/000/39/working/outputdir/report.html
                PDF version (tables and plots) is saved to /tmp/tmprt3848zx/job_working_directory/000/39/working/outputdir/report.pdf
                Icarus (contig browser) is saved to /tmp/tmprt3848zx/job_working_directory/000/39/working/outputdir/icarus.html
                Log is saved to /tmp/tmprt3848zx/job_working_directory/000/39/working/outputdir/quast.log
              
              Finished: 2025-04-10 09:13:06
              Elapsed time: 0:00:01.454717
              NOTICEs: 2; WARNINGs: 1; non-fatal ERRORs: 0
              
              Thank you for using QUAST!
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              advanced {"contig_thresholds": "0,1000", "extensive_mis_size": "1000", "fragmented_max_indent": null, "report_all_metrics": false, "scaffold_gap_max_size": "1000", "skip_unaligned_mis_contigs": true, "strict_NA": false, "unaligned_part_size": "500", "x_for_Nx": "90"}
              alignments {"ambiguity_score": "0.99", "ambiguity_usage": "one", "fragmented": false, "local_mis_size": "200", "min_alignment": "65", "upper_bound_assembly": false, "upper_bound_min_con": null, "use_all_alignments": false}
              assembly {"__current_case__": 1, "min_identity": "90.0", "ref": {"__current_case__": 0, "max_ref_num": "0", "origin": "silva"}, "reuse_combined_alignments": false, "type": "metagenome"}
              chromInfo "/tmp/tmprt3848zx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              genes {"conserved_genes_finding": false, "gene_finding": {"__current_case__": 0, "tool": "none"}, "rna_finding": false}
              large false
              min_contig "500"
              mode {"__current_case__": 0, "in": {"__current_case__": 1, "custom": "false", "inputs": {"values": [{"id": 94, "src": "dce"}]}}, "mode": "individual", "reads": {"__current_case__": 0, "reads_option": "disabled"}}
              output_files ["html", "tabular", "summary", "krona"]
              split_scaffolds false
          • Job 3:

            • Job state is ok

            Command Line:

            • echo 50contig_reads_bin_55_fasta &&    metaquast  --labels '50contig_reads_bin_55_fasta' -o 'outputdir'  --max-ref-num 0  --min-identity 90.0 --min-contig 500        --min-alignment 65 --ambiguity-usage 'one' --ambiguity-score 0.99   --local-mis-size 200   --contig-thresholds '0,1000'  --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500   --x-for-Nx 90  '/tmp/tmprt3848zx/files/f/0/c/dataset_f0c33f60-365a-44bf-b5ad-f1584230167d.dat' --threads ${GALAXY_SLOTS:-1}  && if [[ -f "outputdir/report.tsv" ]]; then mkdir -p "outputdir/combined_reference/" && cp "outputdir/report.tsv" "outputdir/combined_reference/report.tsv"; fi && if [[ -f "outputdir/report.html" ]]; then mkdir -p "outputdir/combined_reference/" && cp outputdir/*.html "outputdir/combined_reference/"; fi && mkdir -p '/tmp/tmprt3848zx/job_working_directory/000/40/outputs/dataset_ed59b0db-0841-4df9-8461-61a1bad95807_files' && cp outputdir/combined_reference/*.html '/tmp/tmprt3848zx/job_working_directory/000/40/outputs/dataset_ed59b0db-0841-4df9-8461-61a1bad95807_files' && if [[ -d "outputdir/icarus_viewers" ]]; then cp -R outputdir/icarus_viewers 'outputdir/combined_reference/'; fi && if [[ -d "outputdir/combined_reference/icarus_viewers" ]]; then cp -R outputdir/combined_reference/icarus_viewers '/tmp/tmprt3848zx/job_working_directory/000/40/outputs/dataset_ed59b0db-0841-4df9-8461-61a1bad95807_files'; fi && if [[ -d "outputdir/krona_charts/" ]]; then mkdir -p 'None' && cp outputdir/krona_charts/*.html 'None'; fi

            Exit Code:

            • 0

            Standard Output:

            • 50contig_reads_bin_55_fasta
              /usr/local/opt/quast-5.3.0/metaquast.py --labels 50contig_reads_bin_55_fasta -o outputdir --max-ref-num 0 --min-identity 90.0 --min-contig 500 --min-alignment 65 --ambiguity-usage one --ambiguity-score 0.99 --local-mis-size 200 --contig-thresholds 0,1000 --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500 --x-for-Nx 90 /tmp/tmprt3848zx/files/f/0/c/dataset_f0c33f60-365a-44bf-b5ad-f1584230167d.dat --threads 1
              
              Version: 5.3.0
              
              System information:
                OS: Linux-6.8.0-1021-azure-x86_64-with-glibc2.36 (linux_64)
                Python version: 3.12.3
                CPUs number: 4
              
              Started: 2025-04-10 09:13:04
              
              Logging to /tmp/tmprt3848zx/job_working_directory/000/40/working/outputdir/metaquast.log
              WARNING: --ambiguity-usage was set to 'all' because not default --ambiguity-score was specified
              INFO	generated new fontManager
              INFO	generated new fontManager
              
              Contigs:
                Pre-processing...
                /tmp/tmprt3848zx/files/f/0/c/dataset_f0c33f60-365a-44bf-b5ad-f1584230167d.dat ==> 50contig_reads_bin_55_fasta
              
              NOTICE: Maximum number of references (--max-ref-number) is set to 0, search in SILVA 16S rRNA database is disabled
              
              NOTICE: No references are provided, starting regular QUAST with MetaGeneMark gene finder
              /usr/local/opt/quast-5.3.0/quast.py --min-identity 90.0 --min-contig 500 --min-alignment 65 --ambiguity-usage one --ambiguity-score 0.99 --local-mis-size 200 --contig-thresholds 0,1000 --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500 --x-for-Nx 90 --threads 1 /tmp/tmprt3848zx/files/f/0/c/dataset_f0c33f60-365a-44bf-b5ad-f1584230167d.dat -o /tmp/tmprt3848zx/job_working_directory/000/40/working/outputdir --labels 50contig_reads_bin_55_fasta
              
              Version: 5.3.0
              
              System information:
                OS: Linux-6.8.0-1021-azure-x86_64-with-glibc2.36 (linux_64)
                Python version: 3.12.3
                CPUs number: 4
              
              Started: 2025-04-10 09:13:05
              
              Logging to /tmp/tmprt3848zx/job_working_directory/000/40/working/outputdir/quast.log
              NOTICE: Output directory already exists and looks like a QUAST output dir. Existing results can be reused (e.g. previously generated alignments)!
              WARNING: --ambiguity-usage was set to 'all' because not default --ambiguity-score was specified
              
              CWD: /tmp/tmprt3848zx/job_working_directory/000/40/working
              Main parameters: 
                MODE: meta, threads: 1, min contig length: 500, min alignment length: 65, min alignment IDY: 90.0, \
                ambiguity: all, min local misassembly length: 200, min extensive misassembly length: 1000
              
              Contigs:
                Pre-processing...
                /tmp/tmprt3848zx/files/f/0/c/dataset_f0c33f60-365a-44bf-b5ad-f1584230167d.dat ==> 50contig_reads_bin_55_fasta
              
              2025-04-10 09:13:05
              Running Basic statistics processor...
                Contig files: 
                  50contig_reads_bin_55_fasta
                Calculating N50 and L50...
                  50contig_reads_bin_55_fasta, N50 = 5014, L50 = 21, auN = 5279.4, Total length = 258860, GC % = 36.41, # N's per 100 kbp =  0.00
                Drawing Nx plot...
                  saved to /tmp/tmprt3848zx/job_working_directory/000/40/working/outputdir/basic_stats/Nx_plot.pdf
                Drawing cumulative plot...
                  saved to /tmp/tmprt3848zx/job_working_directory/000/40/working/outputdir/basic_stats/cumulative_plot.pdf
                Drawing GC content plot...
                  saved to /tmp/tmprt3848zx/job_working_directory/000/40/working/outputdir/basic_stats/GC_content_plot.pdf
                Drawing 50contig_reads_bin_55_fasta GC content plot...
                  saved to /tmp/tmprt3848zx/job_working_directory/000/40/working/outputdir/basic_stats/50contig_reads_bin_55_fasta_GC_content_plot.pdf
              Done.
              
              NOTICE: Genes are not predicted by default. Use --gene-finding or --glimmer option to enable it.
              
              2025-04-10 09:13:06
              Creating large visual summaries...
              This may take a while: press Ctrl-C to skip this step..
                1 of 2: Creating PDF with all tables and plots...
                2 of 2: Creating Icarus viewers...
              Done
              
              2025-04-10 09:13:06
              RESULTS:
                Text versions of total report are saved to /tmp/tmprt3848zx/job_working_directory/000/40/working/outputdir/report.txt, report.tsv, and report.tex
                Text versions of transposed total report are saved to /tmp/tmprt3848zx/job_working_directory/000/40/working/outputdir/transposed_report.txt, transposed_report.tsv, and transposed_report.tex
                HTML version (interactive tables and plots) is saved to /tmp/tmprt3848zx/job_working_directory/000/40/working/outputdir/report.html
                PDF version (tables and plots) is saved to /tmp/tmprt3848zx/job_working_directory/000/40/working/outputdir/report.pdf
                Icarus (contig browser) is saved to /tmp/tmprt3848zx/job_working_directory/000/40/working/outputdir/icarus.html
                Log is saved to /tmp/tmprt3848zx/job_working_directory/000/40/working/outputdir/quast.log
              
              Finished: 2025-04-10 09:13:06
              Elapsed time: 0:00:01.462968
              NOTICEs: 2; WARNINGs: 1; non-fatal ERRORs: 0
              
              Thank you for using QUAST!
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              advanced {"contig_thresholds": "0,1000", "extensive_mis_size": "1000", "fragmented_max_indent": null, "report_all_metrics": false, "scaffold_gap_max_size": "1000", "skip_unaligned_mis_contigs": true, "strict_NA": false, "unaligned_part_size": "500", "x_for_Nx": "90"}
              alignments {"ambiguity_score": "0.99", "ambiguity_usage": "one", "fragmented": false, "local_mis_size": "200", "min_alignment": "65", "upper_bound_assembly": false, "upper_bound_min_con": null, "use_all_alignments": false}
              assembly {"__current_case__": 1, "min_identity": "90.0", "ref": {"__current_case__": 0, "max_ref_num": "0", "origin": "silva"}, "reuse_combined_alignments": false, "type": "metagenome"}
              chromInfo "/tmp/tmprt3848zx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              genes {"conserved_genes_finding": false, "gene_finding": {"__current_case__": 0, "tool": "none"}, "rna_finding": false}
              large false
              min_contig "500"
              mode {"__current_case__": 0, "in": {"__current_case__": 1, "custom": "false", "inputs": {"values": [{"id": 95, "src": "dce"}]}}, "mode": "individual", "reads": {"__current_case__": 0, "reads_option": "disabled"}}
              output_files ["html", "tabular", "summary", "krona"]
              split_scaffolds false
          • Job 4:

            • Job state is ok

            Command Line:

            • echo 50contig_reads_bin_6_fasta &&    metaquast  --labels '50contig_reads_bin_6_fasta' -o 'outputdir'  --max-ref-num 0  --min-identity 90.0 --min-contig 500        --min-alignment 65 --ambiguity-usage 'one' --ambiguity-score 0.99   --local-mis-size 200   --contig-thresholds '0,1000'  --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500   --x-for-Nx 90  '/tmp/tmprt3848zx/files/f/b/e/dataset_fbe5c0d6-af8a-4e6d-b0ca-46c3391bda82.dat' --threads ${GALAXY_SLOTS:-1}  && if [[ -f "outputdir/report.tsv" ]]; then mkdir -p "outputdir/combined_reference/" && cp "outputdir/report.tsv" "outputdir/combined_reference/report.tsv"; fi && if [[ -f "outputdir/report.html" ]]; then mkdir -p "outputdir/combined_reference/" && cp outputdir/*.html "outputdir/combined_reference/"; fi && mkdir -p '/tmp/tmprt3848zx/job_working_directory/000/41/outputs/dataset_5e8042cf-3f2b-42bf-b28e-786730d1814f_files' && cp outputdir/combined_reference/*.html '/tmp/tmprt3848zx/job_working_directory/000/41/outputs/dataset_5e8042cf-3f2b-42bf-b28e-786730d1814f_files' && if [[ -d "outputdir/icarus_viewers" ]]; then cp -R outputdir/icarus_viewers 'outputdir/combined_reference/'; fi && if [[ -d "outputdir/combined_reference/icarus_viewers" ]]; then cp -R outputdir/combined_reference/icarus_viewers '/tmp/tmprt3848zx/job_working_directory/000/41/outputs/dataset_5e8042cf-3f2b-42bf-b28e-786730d1814f_files'; fi && if [[ -d "outputdir/krona_charts/" ]]; then mkdir -p 'None' && cp outputdir/krona_charts/*.html 'None'; fi

            Exit Code:

            • 0

            Standard Output:

            • 50contig_reads_bin_6_fasta
              /usr/local/opt/quast-5.3.0/metaquast.py --labels 50contig_reads_bin_6_fasta -o outputdir --max-ref-num 0 --min-identity 90.0 --min-contig 500 --min-alignment 65 --ambiguity-usage one --ambiguity-score 0.99 --local-mis-size 200 --contig-thresholds 0,1000 --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500 --x-for-Nx 90 /tmp/tmprt3848zx/files/f/b/e/dataset_fbe5c0d6-af8a-4e6d-b0ca-46c3391bda82.dat --threads 1
              
              Version: 5.3.0
              
              System information:
                OS: Linux-6.8.0-1021-azure-x86_64-with-glibc2.36 (linux_64)
                Python version: 3.12.3
                CPUs number: 4
              
              Started: 2025-04-10 09:13:04
              
              Logging to /tmp/tmprt3848zx/job_working_directory/000/41/working/outputdir/metaquast.log
              WARNING: --ambiguity-usage was set to 'all' because not default --ambiguity-score was specified
              INFO	generated new fontManager
              INFO	generated new fontManager
              
              Contigs:
                Pre-processing...
                /tmp/tmprt3848zx/files/f/b/e/dataset_fbe5c0d6-af8a-4e6d-b0ca-46c3391bda82.dat ==> 50contig_reads_bin_6_fasta
              
              NOTICE: Maximum number of references (--max-ref-number) is set to 0, search in SILVA 16S rRNA database is disabled
              
              NOTICE: No references are provided, starting regular QUAST with MetaGeneMark gene finder
              /usr/local/opt/quast-5.3.0/quast.py --min-identity 90.0 --min-contig 500 --min-alignment 65 --ambiguity-usage one --ambiguity-score 0.99 --local-mis-size 200 --contig-thresholds 0,1000 --extensive-mis-size 1000 --scaffold-gap-max-size 1000 --unaligned-part-size 500 --x-for-Nx 90 --threads 1 /tmp/tmprt3848zx/files/f/b/e/dataset_fbe5c0d6-af8a-4e6d-b0ca-46c3391bda82.dat -o /tmp/tmprt3848zx/job_working_directory/000/41/working/outputdir --labels 50contig_reads_bin_6_fasta
              
              Version: 5.3.0
              
              System information:
                OS: Linux-6.8.0-1021-azure-x86_64-with-glibc2.36 (linux_64)
                Python version: 3.12.3
                CPUs number: 4
              
              Started: 2025-04-10 09:13:05
              
              Logging to /tmp/tmprt3848zx/job_working_directory/000/41/working/outputdir/quast.log
              NOTICE: Output directory already exists and looks like a QUAST output dir. Existing results can be reused (e.g. previously generated alignments)!
              WARNING: --ambiguity-usage was set to 'all' because not default --ambiguity-score was specified
              
              CWD: /tmp/tmprt3848zx/job_working_directory/000/41/working
              Main parameters: 
                MODE: meta, threads: 1, min contig length: 500, min alignment length: 65, min alignment IDY: 90.0, \
                ambiguity: all, min local misassembly length: 200, min extensive misassembly length: 1000
              
              Contigs:
                Pre-processing...
                /tmp/tmprt3848zx/files/f/b/e/dataset_fbe5c0d6-af8a-4e6d-b0ca-46c3391bda82.dat ==> 50contig_reads_bin_6_fasta
              
              2025-04-10 09:13:05
              Running Basic statistics processor...
                Contig files: 
                  50contig_reads_bin_6_fasta
                Calculating N50 and L50...
                  50contig_reads_bin_6_fasta, N50 = 1469, L50 = 1, auN = 1469.0, Total length = 1469, GC % = 38.67, # N's per 100 kbp =  0.00
                Drawing Nx plot...
                  saved to /tmp/tmprt3848zx/job_working_directory/000/41/working/outputdir/basic_stats/Nx_plot.pdf
                Drawing cumulative plot...
                  saved to /tmp/tmprt3848zx/job_working_directory/000/41/working/outputdir/basic_stats/cumulative_plot.pdf
                Drawing GC content plot...
                  saved to /tmp/tmprt3848zx/job_working_directory/000/41/working/outputdir/basic_stats/GC_content_plot.pdf
                Drawing 50contig_reads_bin_6_fasta GC content plot...
                  saved to /tmp/tmprt3848zx/job_working_directory/000/41/working/outputdir/basic_stats/50contig_reads_bin_6_fasta_GC_content_plot.pdf
              Done.
              
              NOTICE: Genes are not predicted by default. Use --gene-finding or --glimmer option to enable it.
              
              2025-04-10 09:13:06
              Creating large visual summaries...
              This may take a while: press Ctrl-C to skip this step..
                1 of 2: Creating PDF with all tables and plots...
                2 of 2: Creating Icarus viewers...
              Done
              
              2025-04-10 09:13:06
              RESULTS:
                Text versions of total report are saved to /tmp/tmprt3848zx/job_working_directory/000/41/working/outputdir/report.txt, report.tsv, and report.tex
                Text versions of transposed total report are saved to /tmp/tmprt3848zx/job_working_directory/000/41/working/outputdir/transposed_report.txt, transposed_report.tsv, and transposed_report.tex
                HTML version (interactive tables and plots) is saved to /tmp/tmprt3848zx/job_working_directory/000/41/working/outputdir/report.html
                PDF version (tables and plots) is saved to /tmp/tmprt3848zx/job_working_directory/000/41/working/outputdir/report.pdf
                Icarus (contig browser) is saved to /tmp/tmprt3848zx/job_working_directory/000/41/working/outputdir/icarus.html
                Log is saved to /tmp/tmprt3848zx/job_working_directory/000/41/working/outputdir/quast.log
              
              Finished: 2025-04-10 09:13:06
              Elapsed time: 0:00:01.445620
              NOTICEs: 2; WARNINGs: 1; non-fatal ERRORs: 0
              
              Thank you for using QUAST!
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              advanced {"contig_thresholds": "0,1000", "extensive_mis_size": "1000", "fragmented_max_indent": null, "report_all_metrics": false, "scaffold_gap_max_size": "1000", "skip_unaligned_mis_contigs": true, "strict_NA": false, "unaligned_part_size": "500", "x_for_Nx": "90"}
              alignments {"ambiguity_score": "0.99", "ambiguity_usage": "one", "fragmented": false, "local_mis_size": "200", "min_alignment": "65", "upper_bound_assembly": false, "upper_bound_min_con": null, "use_all_alignments": false}
              assembly {"__current_case__": 1, "min_identity": "90.0", "ref": {"__current_case__": 0, "max_ref_num": "0", "origin": "silva"}, "reuse_combined_alignments": false, "type": "metagenome"}
              chromInfo "/tmp/tmprt3848zx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              genes {"conserved_genes_finding": false, "gene_finding": {"__current_case__": 0, "tool": "none"}, "rna_finding": false}
              large false
              min_contig "500"
              mode {"__current_case__": 0, "in": {"__current_case__": 1, "custom": "false", "inputs": {"values": [{"id": 96, "src": "dce"}]}}, "mode": "individual", "reads": {"__current_case__": 0, "reads_option": "disabled"}}
              output_files ["html", "tabular", "summary", "krona"]
              split_scaffolds false
      • Step 54: toolshed.g2.bx.psu.edu/repos/iuc/bakta/bakta/1.9.4+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is skipped

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              annotation {"complete": false, "compliant": false, "keep_contig_headers": false, "meta": false, "prodigal": null, "proteins": null, "regions": null, "replicons": null, "translation_table": "11"}
              chromInfo "/tmp/tmprt3848zx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              input_option {"amrfinder_db_select": "amrfinderplus_V3.12_2024-05-02.2", "bakta_db_select": "V5.1_2024-01-19", "input_file": {"values": [{"id": 93, "src": "dce"}]}, "min_contig_length": null}
              organism {"genus": null, "plasmid": null, "species": null, "strain": null}
              output_files {"output_selection": ["file_tsv", "file_gff3", "file_ffn", "file_plot", "sum_txt"]}
              workflow {"skip_analysis": null}
          • Job 2:

            • Job state is skipped

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              annotation {"complete": false, "compliant": false, "keep_contig_headers": false, "meta": false, "prodigal": null, "proteins": null, "regions": null, "replicons": null, "translation_table": "11"}
              chromInfo "/tmp/tmprt3848zx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              input_option {"amrfinder_db_select": "amrfinderplus_V3.12_2024-05-02.2", "bakta_db_select": "V5.1_2024-01-19", "input_file": {"values": [{"id": 94, "src": "dce"}]}, "min_contig_length": null}
              organism {"genus": null, "plasmid": null, "species": null, "strain": null}
              output_files {"output_selection": ["file_tsv", "file_gff3", "file_ffn", "file_plot", "sum_txt"]}
              workflow {"skip_analysis": null}
          • Job 3:

            • Job state is skipped

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              annotation {"complete": false, "compliant": false, "keep_contig_headers": false, "meta": false, "prodigal": null, "proteins": null, "regions": null, "replicons": null, "translation_table": "11"}
              chromInfo "/tmp/tmprt3848zx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              input_option {"amrfinder_db_select": "amrfinderplus_V3.12_2024-05-02.2", "bakta_db_select": "V5.1_2024-01-19", "input_file": {"values": [{"id": 95, "src": "dce"}]}, "min_contig_length": null}
              organism {"genus": null, "plasmid": null, "species": null, "strain": null}
              output_files {"output_selection": ["file_tsv", "file_gff3", "file_ffn", "file_plot", "sum_txt"]}
              workflow {"skip_analysis": null}
          • Job 4:

            • Job state is skipped

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              annotation {"complete": false, "compliant": false, "keep_contig_headers": false, "meta": false, "prodigal": null, "proteins": null, "regions": null, "replicons": null, "translation_table": "11"}
              chromInfo "/tmp/tmprt3848zx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              input_option {"amrfinder_db_select": "amrfinderplus_V3.12_2024-05-02.2", "bakta_db_select": "V5.1_2024-01-19", "input_file": {"values": [{"id": 96, "src": "dce"}]}, "min_contig_length": null}
              organism {"genus": null, "plasmid": null, "species": null, "strain": null}
              output_files {"output_selection": ["file_tsv", "file_gff3", "file_ffn", "file_plot", "sum_txt"]}
              workflow {"skip_analysis": null}
      • Step 55: toolshed.g2.bx.psu.edu/repos/iuc/collection_column_join/collection_column_join/0.0.3:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • sh '/tmp/tmprt3848zx/job_working_directory/000/46/configs/tmp3xw2lolz'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "tabular"
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              chromInfo "/tmp/tmprt3848zx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              fill_char "."
              has_header "1"
              identifier_column "1"
              include_outputs None
              old_col_in_header true
      • Step 56: toolshed.g2.bx.psu.edu/repos/iuc/multiqc/multiqc/1.27+galaxy3:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • die() { echo "$@" 1>&2 ; exit 1; } &&  mkdir multiqc_WDir &&   mkdir multiqc_WDir/quast_0 && mkdir 'multiqc_WDir/quast_0/file_0' && ln -s '/tmp/tmprt3848zx/files/4/2/1/dataset_421e33f0-0f17-4c03-a10d-dd617c655623.dat' 'multiqc_WDir/quast_0/file_0/report.tsv' && mkdir 'multiqc_WDir/quast_0/file_1' && ln -s '/tmp/tmprt3848zx/files/b/f/8/dataset_bf84ecbb-8a5b-4c33-a996-a18aa9038b44.dat' 'multiqc_WDir/quast_0/file_1/report.tsv' && mkdir 'multiqc_WDir/quast_0/file_2' && ln -s '/tmp/tmprt3848zx/files/e/d/7/dataset_ed77817e-f289-4fae-bf42-c966814ecc21.dat' 'multiqc_WDir/quast_0/file_2/report.tsv' && mkdir 'multiqc_WDir/quast_0/file_3' && ln -s '/tmp/tmprt3848zx/files/e/6/8/dataset_e686e8aa-f208-4426-a853-338b3f87c309.dat' 'multiqc_WDir/quast_0/file_3/report.tsv' && mkdir multiqc_WDir/custom_content_1 && ln -s '/tmp/tmprt3848zx/files/a/6/e/dataset_a6eecb3b-b66d-45d3-9f7b-846b6e33a77a.dat' 'multiqc_WDir/custom_content_1/file_1_0' && more /tmp/tmprt3848zx/files/a/6/e/dataset_a6eecb3b-b66d-45d3-9f7b-846b6e33a77a.dat && mkdir multiqc_WDir/custom_content_2 && ln -s '/tmp/tmprt3848zx/files/1/c/8/dataset_1c8d4f04-8d75-4fa8-af8a-2245a2445c57.dat' 'multiqc_WDir/custom_content_2/file_2_0' && more /tmp/tmprt3848zx/files/1/c/8/dataset_1c8d4f04-8d75-4fa8-af8a-2245a2445c57.dat && mkdir multiqc_WDir/custom_content_3 && mkdir multiqc_WDir/bakta_4 &&     ln -s '/tmp/tmprt3848zx/files/3/5/3/dataset_3532e158-fd03-4e9d-9abb-f4b98fd32c84.dat' 'multiqc_WDir/bakta_4/50contig_reads_bin_1_fasta.txt' &&    ln -s '/tmp/tmprt3848zx/files/4/9/2/dataset_4926ecaa-5c5d-4384-ac34-acb9e189eb6c.dat' 'multiqc_WDir/bakta_4/50contig_reads_bin_11_fasta.txt' &&    ln -s '/tmp/tmprt3848zx/files/a/9/b/dataset_a9bb8a73-2816-4b1a-b75c-36bbda065fb1.dat' 'multiqc_WDir/bakta_4/50contig_reads_bin_55_fasta.txt' &&    ln -s '/tmp/tmprt3848zx/files/3/d/2/dataset_3d28f2c5-dfd4-435c-9685-cc74f02499d2.dat' 'multiqc_WDir/bakta_4/50contig_reads_bin_6_fasta.txt' &&  mkdir multiqc_WDir/custom_content_5 && ln -s 
'/tmp/tmprt3848zx/files/1/8/2/dataset_182937f4-c241-4988-bec9-8fb0fcac2d91.dat' 'multiqc_WDir/custom_content_5/file_5_0' && more /tmp/tmprt3848zx/files/1/8/2/dataset_182937f4-c241-4988-bec9-8fb0fcac2d91.dat &&   multiqc multiqc_WDir --filename 'report'      --config '/tmp/tmprt3848zx/job_working_directory/000/47/configs/tmpu72v_u3o'  && mkdir -p ./plots && ls -l ./report_data/ && cp ./report_data/*plot*.txt ./plots/ | true

            Exit Code:

            • 0

            Standard Error:

            • /// MultiQC 🔍 v1.27
              
                          config | Loading config settings from: /tmp/tmprt3848zx/job_working_directory/000/47/configs/tmpu72v_u3o
                   version_check | MultiQC Version v1.28 now available!
                     file_search | Search path: /tmp/tmprt3848zx/job_working_directory/000/47/working/multiqc_WDir
              
                  custom_content | section_1: Found 4 samples (PlotType.TABLE)
                  custom_content | section_2: Found 5 samples (PlotType.TABLE)
                  custom_content | section_5: Found 4 samples (PlotType.TABLE)
                           quast | Found 4 reports
              
                   write_results | Data        : report_data
                   write_results | Report      : report.html
                         multiqc | MultiQC complete
              cp: cannot stat './report_data/*plot*.txt': No such file or directory
              

            Standard Output:

            • ::::::::::::::
              /tmp/tmprt3848zx/files/a/6/e/dataset_a6eecb3b-b66d-45d3-9f7b-846b6e33a77a.dat
              ::::::::::::::
              Name	Completeness	Contamination	Completeness_Model_Used	Translation_Table_Used	Coding_Density	Contig_N50	Average_Gene_Length	Genome_Size	GC_Content	Total_Coding_Sequences	Total_Contigs	Max_Contig_Length	Additional_Notes
              50contig_reads_bin_1.fasta	6.65	0.0	Gradient Boost (General Model)	11	0.693	1357	157.0	1357	0.39	2	1	1357	None
              50contig_reads_bin_11.fasta	5.68	0.0	Gradient Boost (General Model)	11	0.941	2275	357.0	2275	0.38	2	1	2275	None
              50contig_reads_bin_55.fasta	16.77	0.02	Neural Network (Specific Model)	11	0.916	5014	281.8505338078292	258860	0.36	281	56	11001	None
              50contig_reads_bin_6.fasta	6.41	0.0	Gradient Boost (General Model)	11	0.908	1469	222.5	1469	0.39	2	1	1469	None
              ::::::::::::::
              /tmp/tmprt3848zx/files/1/c/8/dataset_1c8d4f04-8d75-4fa8-af8a-2245a2445c57.dat
              ::::::::::::::
              Genome	50contig_reads_50contig_reads Relative Abundance (%)
              50contig_reads_bin_1	24.097418
              50contig_reads_bin_11	26.34404
              50contig_reads_bin_55	25.653187
              50contig_reads_bin_6	23.202549
              unmapped	0.7028103
              ::::::::::::::
              /tmp/tmprt3848zx/files/1/8/2/dataset_182937f4-c241-4988-bec9-8fb0fcac2d91.dat
              ::::::::::::::
              Bin Id	Marker lineage	# genomes	# markers	# marker sets	0	1	2	3	4	5+	Completeness	Contamination	Strain heterogeneity
              50contig_reads_bin_1.fasta	root (UID1)	5656	56	24	56	0	0	0	0	0	0.00	0.00	0.00
              50contig_reads_bin_11.fasta	root (UID1)	5656	56	24	56	0	0	0	0	0	0.00	0.00	0.00
              50contig_reads_bin_55.fasta	k__Bacteria (UID203)	5449	103	58	88	15	0	0	0	0	17.40	0.00	0.00
              50contig_reads_bin_6.fasta	root (UID1)	5656	56	24	56	0	0	0	0	0	0.00	0.00	0.00
              total 300
              -rw-r--r-- 1 1001 118    977 Apr 10 09:16 multiqc.log
              -rw-r--r-- 1 1001 118    121 Apr 10 09:16 multiqc_citations.txt
              -rw-r--r-- 1 1001 118 263513 Apr 10 09:16 multiqc_data.json
              -rw-r--r-- 1 1001 118    205 Apr 10 09:16 multiqc_general_stats.txt
              -rw-r--r-- 1 1001 118    620 Apr 10 09:16 multiqc_quast.txt
              -rw-r--r-- 1 1001 118    689 Apr 10 09:16 multiqc_section_1_table.txt
              -rw-r--r-- 1 1001 118    204 Apr 10 09:16 multiqc_section_2_table.txt
              -rw-r--r-- 1 1001 118    431 Apr 10 09:16 multiqc_section_5_table.txt
              -rw-r--r-- 1 1001 118    592 Apr 10 09:16 multiqc_sources.txt
              -rw-r--r-- 1 1001 118    145 Apr 10 09:16 quast_num_contigs.txt
              -rw-r--r-- 1 1001 118    277 Apr 10 09:16 quast_table.txt
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "85d8fbb215e611f0aeac6045bd812dcf"
              chromInfo "/tmp/tmprt3848zx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              comment ""
              dbkey "?"
              export false
              flat false
              image_content_input None
              results [{"__index__": 0, "software_cond": {"__current_case__": 21, "input": {"values": [{"id": 63, "src": "hdca"}]}, "software": "quast"}}, {"__index__": 1, "software_cond": {"__current_case__": 46, "description": null, "input": {"values": [{"id": 100, "src": "hda"}]}, "plot_type": "table", "section_name": "CheckM2 Bin quality", "software": "custom_content", "title": null, "xlab": null, "ylab": null}}, {"__index__": 2, "software_cond": {"__current_case__": 46, "description": null, "input": {"values": [{"id": 131, "src": "hda"}]}, "plot_type": "table", "section_name": "CoverM Bin coverage", "software": "custom_content", "title": null, "xlab": null, "ylab": null}}, {"__index__": 3, "software_cond": {"__current_case__": 46, "description": null, "input": {"values": [{"id": 57, "src": "hdca"}]}, "plot_type": "table", "section_name": "GTDB-Tk taxonomy", "software": "custom_content", "title": null, "xlab": null, "ylab": null}}, {"__index__": 4, "software_cond": {"__current_case__": 34, "input": {"values": [{"id": 68, "src": "hdca"}]}, "software": "bakta"}}, {"__index__": 5, "software_cond": {"__current_case__": 46, "description": null, "input": {"values": [{"id": 101, "src": "hda"}]}, "plot_type": "table", "section_name": " CheckM Bin quality", "software": "custom_content", "title": null, "xlab": null, "ylab": null}}]
              title ""
      • Step 7: Trimmed sample paired reads:

        • step_state: scheduled
      • Step 8: Contamination weight (Binette):

        • step_state: scheduled
      • Step 9: CheckM2 Database for Binette:

        • step_state: scheduled
      • Step 10: Minimum MAG completeness percentage:

        • step_state: scheduled
    • Other invocation details
      • history_id

        • 3aa8f39cd3274f86
      • history_state

        • ok
      • invocation_id

        • 3aa8f39cd3274f86
      • invocation_state

        • scheduled
      • workflow_id

        • 3aa8f39cd3274f86


Test Results (powered by Planemo)

Test Summary

Test State Count
Total 1
Passed 0
Error 1
Failure 0
Skipped 0
Errored Tests
  • ❌ MAGs-generation.ga_0

    Execution Problem:

    • Unexpected HTTP status code: 400: {"err_msg":"Workflow cannot be run because input step '41' (CheckM2 Database) is not optional and no input provided.","err_code":0}
      

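The failed invocation above reports that the required "CheckM2 Database" input (step 41) was not supplied in the test. As a loose sketch only (the input label, fixture path, output label, and assertion keys below are all assumptions and should be checked against the Planemo workflow-test format docs, not taken as this workflow's actual test file), a test entry supplying that input and checking the MultiQC report by content rather than by the randomly named bins might look like:

```yaml
- doc: Run the MAGs workflow with the CheckM2 database supplied
  job:
    # Label must match the workflow input step exactly (assumed label).
    CheckM2 Database:
      class: File
      path: test-data/checkm2_db.dmnd   # hypothetical test fixture
  outputs:
    # Bin IDs are assigned randomly, so avoid exact-name checks on the
    # dereplicated genomes; assert loosely on the MultiQC report instead.
    multiqc_report:            # assumed output label
      asserts:
        has_text:
          text: "CheckM2 Bin quality"
```

A content assertion like this stays stable across runs even when bin file names change, which matches the testing strategy discussed in the PR description.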
@paulzierep
Contributor Author

Hi @mvdbeek, it seems the latest test was successful: https://github.com/galaxyproject/iwc/actions/runs/14382300139?pr=769
Was the explanation for Trimmed reads / Grouped trimmed reads OK? Is there anything else we can do to move this forward?

@paulzierep paulzierep requested a review from mvdbeek April 22, 2025 11:14
Member

@mvdbeek mvdbeek left a comment


Awesome, thank you!

@mvdbeek mvdbeek enabled auto-merge April 28, 2025 13:01
@paulzierep
Contributor Author

What needs to be done to finish? The "Check workflow success" check is stuck at "Expected (waiting for status to be reported)". Sorry for the rush; it would be great to have this merged for project reporting.

@mvdbeek mvdbeek disabled auto-merge April 29, 2025 08:30
@mvdbeek mvdbeek closed this Apr 29, 2025
@mvdbeek mvdbeek reopened this Apr 29, 2025
@paulzierep
Contributor Author

Ready for merge?

@mvdbeek mvdbeek merged commit 487c7c8 into galaxyproject:main Apr 29, 2025
7 checks passed