SARS-CoV-2 analysis pipeline for short-read, paired-end sequencing.
The repository includes a Makefile that installs all dependencies via bioconda.
git clone --recursive https://github.com/tobiasrausch/covid19.git
cd covid19
make all
A script downloads and indexes the SARS-CoV-2 and GRCh38 reference sequences.
cd ref/ && ./prepareREF.sh
Another script prepares the Kraken2 human database used to filter host reads.
cd kraken2/ && ./prepareDB.sh
The run script performs adapter trimming, host read removal, alignment, variant calling and annotation, consensus calling, and basic quality control. The last parameter, unique_sample_id, is used to create a unique output directory in the current working directory.
./src/run.sh <read.1.fq.gz> <read.2.fq.gz> <unique_sample_id>
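For example, with a pair of FASTQ files named sampleA_R1.fastq.gz and sampleA_R2.fastq.gz (hypothetical file names used here only for illustration), a run would look like:
./src/run.sh sampleA_R1.fastq.gz sampleA_R2.fastq.gz sampleA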
The main output files are:
- The adapter-trimmed and host-filtered FASTQ files: ls <unique_sample_id>.filtered.R_[12].fq.gz
- The alignment to SARS-CoV-2: ls <unique_sample_id>.srt.bam
- The consensus sequence: ls <unique_sample_id>.cons.fa
- The annotated variants: ls <unique_sample_id>.variants.tsv
- The assigned lineage: ls <unique_sample_id>.lineage.csv
- The summary QC report: ls <unique_sample_id>.qc.summary
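As a quick sanity check after the hypothetical sampleA run above, and assuming the per-sample output directory matches the */*.qc.summary pattern used by the aggregation step below, you could inspect the results with:
ls sampleA/
cat sampleA/*.qc.summary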
The above pipeline generates a report for every sample and can be naively parallelized at the sample level (a sketch follows below). You can then aggregate all QC information and the lineage and clade assignments using
./src/aggregate.sh outtable */*.qc.summary
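A minimal sketch of the sample-level parallelization mentioned above, assuming paired-end FASTQ files following the hypothetical <sample>_R1.fastq.gz / <sample>_R2.fastq.gz naming used earlier (a job scheduler or GNU parallel would work equally well):
# run every sample as a background job, then wait for all of them to finish
for r1 in *_R1.fastq.gz; do
  r2=${r1/_R1/_R2}                          # matching second read file
  sample=$(basename "$r1" _R1.fastq.gz)     # sample id = file name prefix
  ./src/run.sh "$r1" "$r2" "$sample" &
done
wait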
You can estimate cross-contamination based on the allelic frequencies of variant calls using
./src/crosscontam.sh contam */*.bcf
This works best on good-quality consensus sequences, e.g., restricting the input to samples that pass QC:
./src/crosscontam.sh contam `grep "RKI pass" */*.qc.summary | sed 's/.qc.summary.*$/.bcf/' | tr '\n' ' '`
The repository contains an example script using a COG-UK data set.
cd example/ && ./expl.sh
Evolution of SARS-CoV-2 in the Rhine-Neckar/Heidelberg Region 01/2021 - 07/2023. Infect Genet Evol. 2024 Feb 23:119:105577. DOI: 10.1016/j.meegid.2024.105577
Many thanks to the open science of COG-UK; their data sets in ENA were very useful for developing the code. The workflow uses many tools distributed via bioconda; please see the Makefile for all dependencies and, of course, thanks to all the developers.