
Phylign – alignment to all pre-2019 bacteria (former MOF-Search)

<p> <a href="https://brinda.eu/mof"> <img src="docs/logo_wbg.svg" align="left" style="width:100px;" /> </a> Alignment to <a href="https://doi.org/10.1371/journal.pbio.3001421">all pre-2019 bacteria from ENA</a> on standard desktop and laptop computers. Phylign uses <a href="https://brinda.eu/mof">phylogenetically compressed assemblies and their <i>k</i>-mer indexes</a> to align batches of queries to them with <a href="https://github.com/lh3/minimap2">minimap2</a>, all within only a few hours. </p><br/>



1. Introduction

The central idea behind Phylign, enabling alignment locally at such a large scale, is phylogenetic compression (paper): a technique that uses estimated evolutionary history to guide the compression and search of large genome collections with existing algorithms and data structures.

In short, input data are reorganized according to the topology of the estimated phylogenies, which makes the data highly locally compressible even with basic techniques. Existing software packages for compression, indexing, and search (in this case XZ, COBS, and minimap2) are then used as low-level tools. The resulting performance gains come from a wide range of benefits of phylogenetic compression, including easy parallelization, small memory requirements, small database size, better memory locality, and better branch prediction.

For more information about phylogenetic compression and the implementation details of Phylign, see the corresponding paper (including its supplementary material) and visit the associated website.

Citation

K. Břinda, L. Lima, S. Pignotti, N. Quinones-Olvera, K. Salikhov, R. Chikhi, G. Kucherov, Z. Iqbal, and M. Baym. Efficient and Robust Search of Microbial Genomes via Phylogenetic Compression. bioRxiv 2023.04.15.536996, 2023. https://doi.org/10.1101/2023.04.15.536996

2. Requirements

2a) Hardware

Phylign requires a standard desktop or laptop computer running a *nix system, and it can also run on a cluster. The minimal hardware requirements are 12 GB RAM and approximately 120 GB of disk space (102 GB for the database plus a margin for intermediate files).

2b) Dependencies

Phylign is implemented as a Snakemake pipeline and uses the Conda system to manage non-standard dependencies. Ensure the following are installed:

  • GNU Time (on Linux present by default; on OS X, install with brew install gnu-time).
  • Python (>=3.7)
  • Snakemake (>=6.2.0)
  • Mamba (>=0.20.0) - optional, but recommended

Additionally, Phylign uses standard Unix tools like GNU Make, cURL, XZ Utils, and GNU Gzip. These tools are typically included in standard *nix installations. However, in minimal setups (e.g., virtualization, continuous integration), you might need to install them using the corresponding package managers.
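As a quick sanity check in minimal setups, a loop like the following (a hypothetical helper, not part of Phylign) reports any of these standard tools missing from PATH:

```shell
# Report any of the standard tools that Phylign relies on but that are
# not found on PATH; prints nothing if all are available.
for tool in make curl xz gzip; do
    command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done
```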

3. Installation

3a) Step 1: Install dependencies

Make sure you have Conda and GNU Time installed. On Linux:

sudo apt-get install conda

On OS X (using Homebrew):

brew install conda
brew install gnu-time

(If a conda package is not available from your package manager, install Conda via the official Miniconda installer instead.)

Install Python (>=3.7), Snakemake (>=6.2.0), and Mamba (optional but recommended) using Conda:

conda install -y -c bioconda -c conda-forge \
    "python>=3.7" "snakemake>=6.2.0" "mamba>=0.20.0"

3b) Step 2: Clone the repository

Clone the Phylign repository from GitHub and navigate into the directory:

 git clone https://github.com/karel-brinda/phylign
 cd phylign

3c) Step 3: Run a simple test

Run the following command to ensure the pipeline works for sample queries and 3 batches (this will also install all additional dependencies using Conda):

make test

Make sure the test returns 0 (success) and that you see the expected output message:

 Success! Test run produced the expected output.

3d) Step 4: Download the database

Download all phylogenetically compressed assemblies and COBS k-mer indexes for the 661k-HQ collection by running:

make download

The downloaded files will be located in the asms/ and cobs/ directories.

Notes:

  • The compressed assemblies comprise all the genomes from the 661k collection.
  • The COBS indexes comprise only those genomes that passed quality control.

4. Usage

4a) Step 1: Copy or symlink your queries

Remove the default test files or your old files in the input/ directory and copy or symlink (recommended) your query files. The supported input formats are FASTA and FASTQ, possibly gzipped. All query files will be preprocessed and merged together.
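For example, a gzipped FASTQ query can be symlinked into the input directory like this (the source path and file name below are placeholders):

```shell
# Symlink a query file into input/ instead of copying it (recommended);
# the file name is a hypothetical example.
mkdir -p input
ln -sf /path/to/reads.fq.gz input/reads.fq.gz
```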

Notes:

  • All query names have to be unique among all query files.
  • Queries should not contain non-ACGT characters. All non-ACGT characters in your query sequences will be translated to A.
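The non-ACGT rule can be illustrated with tr; this mimics the preprocessing for uppercase input (Phylign performs the translation itself, so this is only an illustration):

```shell
# Replace every character outside {A,C,G,T}, such as the ambiguity codes
# N, R, Y, W, S, with the letter A, as Phylign's preprocessing does.
printf 'ACGTNRYWS\n' | tr -c 'ACGT\n' 'A'
# prints: ACGTAAAAA
```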

4b) Step 2: Adjust configuration

Edit the config.yaml file for your desired search. All available options are documented directly there.

4c) Step 3: Clean up intermediate files

Run make clean to clean intermediate files from previous runs. This includes COBS matching files, alignment files, and various reports.

4d) Step 4: Run the pipeline

Simply run make, which will execute Snakemake with the corresponding parameters. If you want to run the pipeline step by step, run make match followed by make map.

4e) Step 5: Analyze your results

Check the output files in output/ (for more info about formats, see 5c) File formats).

If the results do not correspond to what you expected and you need to re-adjust your search parameters, go back to Step 2. If only the mapping step is affected by the changes, you can proceed more quickly by manually removing the files in intermediate/05_map and output/ and then running make map directly.
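The manual cleanup mentioned above can be done as follows (paths as named in the text; run from the repository root):

```shell
# Discard only the mapping intermediates and the final outputs, keeping
# the COBS matching results so the mapping step can be rerun from there.
rm -f intermediate/05_map/* output/*
```

Afterwards, make map recomputes only the alignments.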

5. Additional information

5a) List of workflow commands

Phylign is executed via GNU Make, which handles all parameters and passes them to Snakemake.

Here's a list of all implemented commands (to be executed as make {command}):

####################
# General commands #
####################
   all                Run everything (the default rule)
   test               Quick test using 3 batches
   help               Print help messages
   clean              Clean intermediate search files
   cleanall           Clean all generated and downloaded files
##################
# Pipeline steps #
##################
   conda              Create the conda environments
   download           Download the assemblies and COBS indexes
   download_asms      Download only the assemblies
   download_cobs      Download only the COBS indexes
   match              Match queries using COBS (queries -> candidates)
   map                Map candidates to assemblies (candidates -> alignments)
#############
# Reporting #
#############
   config             Print configuration without comments
   report             Generate Snakemake report
###########
# Cluster #
###########
   cluster_slurm      Submit to a SLURM cluster
   cluster_lsf        Submit to LSF cluster
   cluster_lsf_test   Submit the test pipeline to LSF cluster
##################
# For developers #
##################