# BaseDmux

A Snakemake workflow for basecalling and demultiplexing of ONT sequencing data.
BASEcalling and DeMUltipleXing for ONT sequencing data

A Snakemake workflow for basecalling and gathering ONT reads originating from disparate runs and barcodes:
basecalling with Guppy + demultiplexing with Guppy and/or Deepbinner + MinIONQC/MultiQC quality reports + read aggregation into bins + fastq read trimming + filtering.
<p align="center"> <img src="./dag/full_dag.svg" width="500" height="500"> </p>

## Requirements
- singularity >= 2.5
- conda >=4.3 + Mamba
## Implemented tools
- Snakemake
- Guppy
- Deepbinner
- MinIONQC
- Multiqc
- Porechop
- Filtlong
We try to update the tools regularly. See the versions in the folders containing the conda environment and Singularity container recipe files.
## More details about individual Snakemake rules

- **Guppy basecalling**: run `guppy_basecaller` with read filtering, then subset fast5 reads based on the passed-reads list (`passed_sequencing_summary.txt`).
- **Guppy demultiplexing**: run `guppy_barcoder` with the passed fastq, then subset fastq into classified barcode folders based on `barcoding_summary.txt`.
- **Multi to single fast5**: convert passed multi-read fast5 files to single-read fast5 files, in preparation for Deepbinner.
- **Deepbinner classification**: run `deepbinner classify` with the passed single-read fast5, outputting a classification file.
- **Deepbinner bin**: classify passed fastq based on the classification file, then subset fastq into barcode folders.
- **Get sequencing summary per barcode**: subset `passed_sequencing_summary.txt` according to barcode IDs, in preparation for MinIONQC/MultiQC of each barcode and for subsetting fast5 reads per barcode (get multi fast5 per barcode).
- **MinIONQC and MultiQC**: after basecalling, MinIONQC is performed for each run and MultiQC reports all runs collectively. After demultiplexing, MinIONQC runs for each barcode separately, then MultiQC aggregates the MinIONQC results of all barcodes.
- **Demultiplex report** (optional): compare demultiplexing results from different runs, and from different demultiplexers (Guppy and/or Deepbinner), by analyzing information in `multiqc_minionqc.txt`. Only available when demultiplexing rules are executed.
- **Get reads per genome** (optional): combine and concatenate fast5 and fastq barcodes per genome, based on the demultiplexer program, in preparation for further genome assembly, following the information in the `barcodeByGenome_sample.tsv` tabulated file (the column names of this table must not be modified). Caution: if guppy or deepbinner appears in the Demultiplexer column of the barcodeByGenome table, it will be executed even if it is not specified in `config['DEMULTIPLEXER']`.
- **Porechop** (optional): find and remove adapters from reads. See the Porechop documentation for more information.
- **Filtlong** (optional): filter reads by length and by quality. More details are in the Filtlong documentation. Running several Filtlong configurations at the same time is supported.
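To give an idea of what the per-barcode subsetting steps amount to, here is a minimal sketch using a toy summary file. The column names `read_id` and `barcode_arrangement` follow the usual Guppy summary layout, but the file contents below are made up; the real workflow operates on the summaries Guppy produces.

```shell
# Toy barcoding_summary.txt: real Guppy summaries have many more columns,
# but read ID and barcode assignment are the two that matter here.
printf 'read_id\tbarcode_arrangement\nr1\tbarcode01\nr2\tbarcode02\nr3\tbarcode01\n' > barcoding_summary.txt

# Build one read-ID list per barcode (skipping the header line);
# a workflow would then use these lists to subset fastq/fast5 files
# into per-barcode folders.
awk 'NR > 1 { print $1 > ($2 "_reads.txt") }' barcoding_summary.txt
```

After running this, `barcode01_reads.txt` contains `r1` and `r3`, and `barcode02_reads.txt` contains `r2`.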
## Singularity containers

Workflow jobs run inside Singularity images (see our Singularity recipe files).

The latest containers are automatically downloaded and installed in the baseDmux environment installation directory. They can also be manually downloaded from IRD Drive.

Custom Singularity images can be specified by editing the `./baseDmux/data/singularity.yaml` file, either right after cloning the GitHub repository or directly in your baseDmux installation location (see below).
## Conda environments

Inside the Singularity images, individual Snakemake rules use dedicated conda environments, defined by yaml files shipped with the workflow:
- minionqc
- multiqc
- rmarkdown
- porechop
- filtlong
## Installation

Download the package:

    git clone https://github.com/vibaotram/baseDmux.git
    cd ./baseDmux

Then install in a virtualenv...

    make install
    source venv/bin/activate

... or in a conda environment:

    conda env create -n baseDmux -f environment.yaml
    conda activate baseDmux
    pip install .
It is recommended to first run the local test below with the toy dataset to make sure everything works well. On the first invocation, this will download and install the Singularity images and set up the conda environments. This process takes time, so be patient. Note also that this setup amounts to a total of about 12 GB of files, so you need some room on the installation disk.
## Usage

    Run baseDmux version 1.1.0 ... See https://github.com/vibaotram/baseDmux/blob/master/README.md for more details

    positional arguments:
      {configure,run,dryrun,version_tools}
        configure           edit config file and profile
        run                 run baseDmux
        dryrun              dryrun baseDmux
        version_tools       check version for the tools of baseDmux

    options:
      -h, --help            show this help message and exit
      -v, --version         show program's version number and exit
## baseDmux configuration

Because configuring Snakemake workflows can be a bit intimidating, we clarify below the main principles of baseDmux configuration.

Configuration is done primarily by adjusting the parameters listed in the workflow config file `profile/workflow_parameters.yaml` (generated by `baseDmux configure`, see below). It lets you set up input reads, the output folder, parameters for the tools, report generation, etc.

This file corresponds to the typical Snakemake `config.yaml` file. You can take a look at the file that serves as a template to create `profile/workflow_parameters.yaml`. Refer to the comments in this file for further details on individual parameters.
baseDmux takes as input a folder with internal ONT 'run' folders that each contains a 'fast5' folder. This is the typical file hierarchy when sequencing with a MinION. baseDmux can therefore process a virtually unlimited number of (multiplexed) sequencing runs.
You can decide whether Guppy and Deepbinner should run on GPU or CPU by specifying 'RESOURCE' in the `config.yaml` file, depending on the available computing hardware. Note that Deepbinner is no longer maintained and that Deepbinner models are limited to specific 'earlier' flow cells and barcoding kits. One should therefore ensure that Deepbinner is adequate for the data at hand.
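For instance, the relevant entry in the workflow parameters file might look like the excerpt below. The key name 'RESOURCE' comes from the documentation above; the exact accepted values and surrounding parameters are described in the comments of the generated file.

```yaml
# profile/workflow_parameters.yaml (excerpt, illustrative)
RESOURCE: GPU   # set to CPU if no suitable GPU is available
```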
A typical use case for baseDmux is preparing filtered sequencing reads in individual fastq files for genome assembly (or transcript analysis), when users have a number of genomic DNA (or RNA) preparations sequenced over several runs with various sets of multiplex barcodes. For this, it is necessary to run the complete workflow. Note that the runs currently need to share, if not identical, at least 'compatible' (in the Guppy sense) library construction kits and flow cell types.
Users need to prepare a *barcode by genome* file. This is a roadmap table for subsetting fastq and fast5 reads, demultiplexed with Guppy and/or Deepbinner and coming from disparate runs and barcodes, into bins corresponding to individual 'genomes' (or samples). It must contain at least the following columns: Demultiplexer, Run_ID, ONT_Barcode, Genome_ID. Values in the Genome_ID column correspond to the labels of the bins into which reads will eventually be grouped. Make sure that these labels do NOT contain spaces " " or other special characters like '|' '$' ':'. As separators, the safest options are "_" or "-". Likewise, Run_ID values should not contain special characters. In addition, these values must match the names of the top folders in the input fast5 directory.

Importantly, the barcode by genome file does not only enable grouping reads: providing such a file is necessary for the porechop and filtlong rules to be executed. A template is provided (see the section below on configuration).
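As an illustration, a minimal barcode-by-genome table could look like the following. Only the column names come from the documentation above; the run, barcode, and genome values are made up.

```tsv
Demultiplexer	Run_ID	ONT_Barcode	Genome_ID
guppy	run_A	barcode01	sample_1
guppy	run_B	barcode03	sample_1
deepbinner	run_A	barcode02	sample_2
```

Note how `sample_1` gathers reads from two different runs and barcodes: this is exactly the cross-run aggregation the table is meant to drive.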
Apart from the workflow parameters, there are additional parameter files that specify Snakemake invocation arguments and, when baseDmux is run with an HPC scheduler, how specific jobs need to be submitted. All these configuration files are gathered inside a "profile" directory that can be automatically prototyped with the commands below. This is in line with the recommended way of configuring Snakemake pipelines using profiles.
## Generating template configuration files

To simplify configuration, the `baseDmux configure` command generates a 'template' configuration profile for general use cases. These files can subsequently be modified to fit specific situations.
    usage: baseDmux configure [-h] --mode {local,slurm,cluster,iTrop} [--barcodes_by_genome]
                              [--edit [EDITOR]]
                              dir

    positional arguments:
      dir                   path to the folder to contain the config file and profile you want to create

    options:
      -h, --help            show this help message and exit
      --mode {local,slurm,cluster,iTrop}
                            choose the mode of running Snakemake, local mode or cluster mode
      --barcodes_by_genome  optional, create a tabular file containing information of barcodes for each genome
      --edit [EDITOR]       optional, open files with editor (nano, vim, gedit, etc.)
These files will be created:

    | dir
      -| profile
        -| config.yaml
        -| workflow_parameter.yaml
        -| barcodesByGenome.tsv (if --barcodes_by_genome)