2.19.0 (2024-07-28)
- GTDB V9 R220
- SPAdes v4
2.18.2 (2024-06-28)
- Fix error with downloading DRAM. Update to DRAM v1.5.
- QC reads and assemblies are now written to the sample.tsv from the start. This should fix errors of partial writing to the sample.tsv (#695).
- It also allows you to add external assemblies.
- Singleton reads are no longer used throughout the pipeline.
- This changes the default paths for raw reads and assemblies:
  - Assemblies: Assembly/fasta/{sample}.fasta
  - Reads: QC/reads/{sample}_{fraction}.fastq.gz
Seamless update: if you update atlas and continue on an old project, your old files will be copied, or the path defined in the sample.tsv will be used.
The tool skani claims to be better and faster than the combination of Mash + FastANI as used by dRep. I implemented skani for species clustering. We now do the species clustering in the atlas run binning step.
So you get information about the number of dereplicated species in the binning report. This allows you to run different binners before choosing the one to use for the genome annotation.
Also, the file storage was improved: all important files are in Binning/{binner}/.
My custom species clustering does the following steps (see the sketch after this list):
- Pre-cluster genomes with single linkage at 92.5% ANI.
- Re-calibrate CheckM2 results.
- If a minority of genomes from a pre-cluster use a different translation table, they are removed.
- If some genomes of a pre-cluster don't use the specialized completeness model, we re-calibrate completeness to the minimum value. This ensures that a worse genome evaluated on the general model is not preferred over a better genome evaluated on the specific model. See also https://silask.github.io/post/better_genomes/, Section 2.
- Drop genomes that don't meet the filter criteria after re-calibration.
- Cluster genomes at the ANI threshold (default 95%).
- Select the best genome as representative based on the quality score Completeness - 5 * Contamination.
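For illustration, here is a minimal sketch of the clustering and representative-selection logic, not the actual atlas implementation. The file names, and the assumption that the skani pairwise output has Ref_file, Query_file and ANI columns, are mine:

```python
# Hypothetical sketch of the species-clustering steps above; file names and
# column names are assumptions, not the actual atlas code.
import networkx as nx
import pandas as pd

pairs = pd.read_table("skani_distances.tsv")                 # pairwise ANI (assumed format)
quality = pd.read_table("genome_quality.tsv", index_col=0)   # CheckM2-style table (assumed)

def single_linkage(pairs, genomes, threshold):
    # Single linkage: two genomes share a cluster if any chain of pairwise
    # ANI links above the threshold connects them.
    g = nx.Graph()
    g.add_nodes_from(genomes)
    hits = pairs[pairs["ANI"] >= threshold]
    g.add_edges_from(zip(hits["Ref_file"], hits["Query_file"]))
    return {genome: i
            for i, component in enumerate(nx.connected_components(g))
            for genome in component}

# Pre-cluster at 92.5% ANI, then cluster species at the default 95% threshold
quality["precluster"] = pd.Series(single_linkage(pairs, quality.index, 92.5))
quality["species"] = pd.Series(single_linkage(pairs, quality.index, 95.0))

# Representative = best quality score (Completeness - 5 * Contamination)
quality["Quality_score"] = quality["Completeness"] - 5 * quality["Contamination"]
representatives = quality.loc[quality.groupby("species")["Quality_score"].idxmax()]
```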
- @jotech made their first contribution in #667
- GTDB 08-RS214
- Use GUNC
- New folder organisation: main output files for binning are in the new folder Binning
- Use HDF5 format for gene catalogs. This allows efficient storage and selective access to large count and coverage matrices from the gene catalog. (See the docs for how to load them.) #621
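As an example of selective access, a slice of the coverage matrix can be read without loading the whole file. This is a hedged sketch: the file path, the dataset name "data" and the attribute "sample_names" are assumptions; check the atlas docs for the actual layout.

```python
# Sketch only: file path, dataset name "data" and attribute "sample_names"
# are assumptions; see the atlas docs for the real layout.
import h5py

with h5py.File("Genecatalog/counts/median_coverage.h5", "r") as f:
    data = f["data"]  # stays on disk; nothing is loaded yet
    samples = [s.decode() if isinstance(s, bytes) else str(s)
               for s in data.attrs["sample_names"]]
    # Selective access: read only the first 1000 genes of the first sample
    subset = data[:1000, 0]
```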
- SemiBin v1.5 by @SilasK in #622
- Support for CheckM2 by @SilasK in #607
Thank you @trickovicmatija for your help.
Full Changelog: https://github.com/metagenome-atlas/atlas/compare/v2.13.1...v2.14.0
- Use minimap2 for contigs, gene catalog and genomes in #569 #577
- Filter genomes myself in #568. The filter criteria are defined in the config file:
genome_filter_criteria: "(Completeness-5*Contamination >50 ) & (Length_scaffolds >=50000) & (Ambigious_bases <1e6) & (N50 > 5*1e3) & (N_scaffolds < 1e3)"
The genome filtering is similar to that of other publications in the field, e.g. GTDB. What is maybe a bit different is that genomes with completeness around 50% and contamination around 10% are excluded, whereas dRep with default parameters would include them.
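Since the criteria string is a boolean expression over genome statistics, it can be applied with pandas. A minimal sketch, assuming a stats table whose columns match the names in the expression; the input file name is hypothetical:

```python
# Sketch of applying genome_filter_criteria; "genome_stats.tsv" is a
# hypothetical input with columns matching the names in the expression.
import pandas as pd

stats = pd.read_table("genome_stats.tsv", index_col=0)
criteria = ("(Completeness-5*Contamination >50 ) & (Length_scaffolds >=50000) "
            "& (Ambigious_bases <1e6) & (N50 > 5*1e3) & (N_scaffolds < 1e3)")
passing_genomes = stats.query(criteria)
```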
- Use dRep again in #579. We saw better performance using dRep, and this now scales to ~1K samples.
- Use new DRAM version 1.4 in #564
Full Changelog: https://github.com/metagenome-atlas/atlas/compare/v2.12.0...v2.13.0
- GTDB-Tk requires rule extract_gtdb to run first, by @Waschina in #551
- Use Galah instead of dRep
- Use BBSplit for mapping to genomes (may move to minimap2 in the future)
- Faster gene catalog quantification using minimap2
- Compatible with Snakemake v7.15
- @Waschina made their first contribution in #551
Full Changelog: https://github.com/metagenome-atlas/atlas/compare/v2.11.1...v2.12.0
- Make atlas handle large gene catalogs using Parquet and pyfastx (fix #515)
Parquet files can be opened in Python with:
```python
import pandas as pd

coverage = pd.read_parquet("working_dir/Genecatalog/counts/median_coverage.parquet")
coverage.set_index("GeneNr", inplace=True)
```
and in R it should be something like:
```r
arrow::read_parquet("working_dir/Genecatalog/counts/median_coverage.parquet")
```
Full Changelog: https://github.com/metagenome-atlas/atlas/compare/v2.10.0...v2.11.0
- GTDB version 207
- Low memory taxonomic annotation
- ✨ Start an atlas project from public data in SRA (see the docs)
- Make atlas ready for Python 3.10 (#498)
- Add strain profiling using inStrain. You can run atlas run genomes strains
- @alienzj made their first contribution, fixing the config when running DRAM annotate, in #495
This is a major update of metagenome-atlas. It was developed for a 3-day course in Finland, which is also why it has a Finnish release name.
It integrates the bleeding-edge binners Vamb and SemiBin, which use co-binning based on co-abundance. Thank you @yanhui09 and @psj1997 for helping with this. First results show that these binners outperform the default.
The command atlas run genomes produces genome-level functional annotations and KEGG pathways and modules. It uses DRAM from @shafferm with a hack to produce all available KEGG modules.
The command atlas run genecatalog now directly produces the abundances of the different genes. See more in #276. In the future, this part of the pipeline will include protein assembly to better tackle complicated metagenomes.
See, for example, the QC report.
All tools used in atlas are now up to date, from the assembler to GTDB. The one exception is BBMap, which contains a bug and ignores the minidentity parameter.
atlas init now correctly parses fastq files even if they are in subfolders and paired-end files are named simply Sample_1/Sample_2. @Sofie8 will be happy about this. The atlas log uses nice colors.
The default ANI threshold for genome dereplication was set to 97.5% to include more sub-species diversity.