Sarek
Analysis pipeline to detect germline or somatic variants (pre-processing, variant calling and annotation) from WGS / targeted sequencing
Introduction
nf-core/sarek is a workflow designed to detect variants on whole genome or targeted sequencing data. Initially designed for human and mouse, it can work on any species with a reference genome. Sarek can also handle tumour/normal pairs and can include additional relapse samples.
The pipeline is built using Nextflow, a workflow tool to run tasks across multiple compute infrastructures in a very portable manner. It uses Docker/Singularity containers making installation trivial and results highly reproducible. The Nextflow DSL2 implementation of this pipeline uses one container per process which makes it much easier to maintain and update software dependencies. Where possible, these processes have been submitted to and installed from nf-core/modules in order to make them available to all nf-core pipelines, and to everyone within the Nextflow community!
On release, automated continuous integration tests run the pipeline on a full-sized dataset on the AWS cloud infrastructure. This ensures that the pipeline runs on AWS, has sensible resource allocation defaults set to run on real-world datasets, and permits the persistent storage of results to benchmark between pipeline releases and other analysis sources. The results obtained from the full-sized test can be viewed on the nf-core website.
It is listed on the Elixir Tools and Data Services Registry and on Dockstore.
<p align="center"> <img title="Sarek Workflow" src="docs/images/sarek_workflow.png" width=30%> </p>

Pipeline summary
Depending on the options and samples provided, the pipeline can currently perform the following:
- Form consensus reads from UMI sequences (`fgbio`)
- Sequencing quality control and trimming (enabled by `--trim_fastq`) (`FastQC`, `fastp`)
- Contamination removal (`BBSplit`, enabled by `--tools bbsplit`)
- Map Reads to Reference (`BWA-mem`, `BWA-mem2`, `dragmap` or `Sentieon BWA-mem`)
- Process BAM file (`GATK MarkDuplicates`, `GATK BaseRecalibrator` and `GATK ApplyBQSR`, or `Sentieon LocusCollector` and `Sentieon Dedup`)
- Experimental feature: use the GPU-accelerated Parabricks implementation as an alternative to "Map Reads to Reference" + "Process BAM file" (`--aligner parabricks`)
- Summarise alignment statistics (`samtools stats`, `mosdepth`)
- Variant calling (enabled by `--tools`, see compatibility): `ASCAT`, `CNVkit`, `Control-FREEC`, `DeepVariant`, `freebayes`, `GATK HaplotypeCaller`, `GATK Mutect2`, `indexcov`, `Lofreq`, `Manta`, `mpileup`, `MSIsensor2`, `MSIsensor-pro`, `MuSE`, `Sentieon Haplotyper`, `Strelka`, `TIDDIT`
- Post-variant calling options, one of:
  - Filtering (`bcftools view`, default: filter by `PASS`, `.`), normalisation (`bcftools norm`) and consensus calling (`bcftools isec`, default: called by at least 2 tools, `-n+2`) on all VCFs, and/or `bcftools concat` for germline VCFs
  - `Varlociraptor` for all VCFs
- Variant filtering and annotation (`SnpEff`, `Ensembl VEP`, `BCFtools annotate`, `SnpSift`)
- Summarise and represent QC (`MultiQC`)
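Several of the callers above can be combined in a single run by passing a comma-separated `--tools` value. A minimal sketch, assuming Docker and placeholder file names (the command is echoed rather than executed, since Nextflow may not be available in this shell):

```shell
# Select variant callers plus annotation in one comma-separated --tools value.
# Tool names correspond to callers from the list above; samplesheet.csv and
# the results directory are placeholders.
TOOLS="strelka,mutect2,vep"

CMD="nextflow run nf-core/sarek \
  -profile docker \
  --input samplesheet.csv \
  --outdir results \
  --tools ${TOOLS}"

# Print the command instead of running it.
echo "$CMD"
```

Which tools can be combined depends on the input data (germline, tumor-only, or tumor/normal); see the compatibility table in the usage documentation.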
Usage
[!NOTE] If you are new to Nextflow and nf-core, please refer to this page on how to set up Nextflow. Make sure to test your setup with `-profile test` before running the workflow on actual data.
First, prepare a samplesheet with your input data that looks as follows:
samplesheet.csv:

```csv
patient,sample,lane,fastq_1,fastq_2
ID1,S1,L002,ID1_S1_L002_R1_001.fastq.gz,ID1_S1_L002_R2_001.fastq.gz
```
Each row represents a pair of fastq files (paired end).
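A quick shell sanity check can catch column mistakes before launching the pipeline. This sketch recreates the example sheet above; the validation logic is illustrative and not part of Sarek itself:

```shell
# Write the example samplesheet from above.
cat > samplesheet.csv <<'EOF'
patient,sample,lane,fastq_1,fastq_2
ID1,S1,L002,ID1_S1_L002_R1_001.fastq.gz,ID1_S1_L002_R2_001.fastq.gz
EOF

# Check the header names the five expected columns and that
# every data row has exactly five comma-separated fields.
head -n1 samplesheet.csv | grep -q '^patient,sample,lane,fastq_1,fastq_2$' \
  && awk -F',' 'NR > 1 && NF != 5 { bad = 1 } END { exit bad }' samplesheet.csv \
  && echo "samplesheet OK"
```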
Now, you can run the pipeline using:
```bash
nextflow run nf-core/sarek \
   -profile <docker/singularity/.../institute> \
   --input samplesheet.csv \
   --outdir <OUTDIR>
```
[!WARNING] Please provide pipeline parameters via the CLI or the Nextflow `-params-file` option. Custom config files, including those provided by the `-c` Nextflow option, can be used to provide any configuration except for parameters; see docs.
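The same parameters shown on the CLI above can instead be collected in a params file. A minimal sketch, assuming a YAML params file with the `input` and `outdir` parameters from the usage example (the run command is left commented out):

```shell
# Write pipeline parameters to a YAML params file instead of passing
# them on the command line. Parameter names match the usage example above.
cat > params.yaml <<'EOF'
input: samplesheet.csv
outdir: results
EOF

# The run would then be launched with -params-file (not executed here):
# nextflow run nf-core/sarek -profile docker -params-file params.yaml
```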
For more details and further functionality, please refer to the usage documentation and the parameter documentation.
Pipeline output
To see the results of an example test run with a full size dataset refer to the results tab on the nf-core website pipeline page. For more details about the output files and reports, please refer to the output documentation.
Benchmarking
On each release, the pipeline is run on three full-size tests:

- `test_full` runs tumor-normal data for one patient from the SEQ2C consortium
- `test_full_germline` runs a WGS 30X Genome-in-a-Bottle (NA12878) dataset
- `test_full_germline_ncbench_agilent` runs two WES samples with 75M and 200M reads (data available here). The results are uploaded to Zenodo, evaluated against a truth dataset, and made available via the NCBench dashboard.
Credits
Sarek was originally written by Maxime U Garcia and Szilveszter Juhos at the National Genomics Infrastructure and National Bioinformatics Infrastructure Sweden, which are both platforms at SciLifeLab, with the support of The Swedish Childhood Tumor Biobank (Barntumörbanken). Friederike Hanssen and Gisela Gabernet at QBiC later joined and helped with further development.
The Nextflow DSL2 conversion of the pipeline was led by Friederike Hanssen and Maxime U Garcia.

Maintenance is now led by Friederike Hanssen and Maxime U Garcia (now at Seqera).
Main developers:
We thank the following people for their extensive assistance in the development of this pipeline:
