# AlphaFold
This package provides an implementation of the inference pipeline of AlphaFold v2. For simplicity, we refer to this model as AlphaFold throughout the rest of this document.
We also provide:
- An implementation of AlphaFold-Multimer. This represents a work in progress and AlphaFold-Multimer isn't expected to be as stable as our monomer AlphaFold system. Read the guide for how to upgrade and update code.
- The technical note containing the models and inference procedure for an updated AlphaFold v2.3.0.
- A CASP15 baseline set of predictions along with documentation of any manual interventions performed.
Any publication that discloses findings arising from using this source code or the model parameters should cite the AlphaFold paper and, if applicable, the AlphaFold-Multimer paper.
Please also refer to the Supplementary Information for a detailed description of the method.
**You can use a slightly simplified version of AlphaFold with community-supported versions (see below).**
If you have any questions, please contact the AlphaFold team at alphafold@deepmind.com.

## Installation and running your first prediction
You will need a machine running Linux; AlphaFold does not support other operating systems. Full installation requires up to 3 TB of disk space to keep genetic databases (SSD storage is recommended) and a modern NVIDIA GPU (GPUs with more memory can predict larger protein structures).
Please follow these steps:
-   Install Docker.
    -   Install NVIDIA Container Toolkit for GPU support.
    -   Set up Docker to run as a non-root user.

-   Clone this repository and `cd` into it.

        git clone https://github.com/deepmind/alphafold.git
        cd ./alphafold
-   Download genetic databases and model parameters:

    -   Install `aria2c`. On most Linux distributions it is available via the
        package manager as the `aria2` package (on Debian-based distributions it
        can be installed by running `sudo apt install aria2`). The same applies
        to `rsync`.

    -   Use the script `scripts/download_all_data.sh` to download and set up the
        full databases. This may take substantial time (the download size is
        556 GB), so we recommend running the script in the background:

            scripts/download_all_data.sh <DOWNLOAD_DIR> > download.log 2> download_all.log &

    -   Note: The download directory `<DOWNLOAD_DIR>` should not be a
        subdirectory of the AlphaFold repository directory. If it is, the
        Docker build will be slow because the large databases will be copied
        into the Docker build context.

    -   It is possible to run AlphaFold with reduced databases; please refer to
        the complete documentation.
-   Check that AlphaFold will be able to use a GPU by running:

        docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi

    The output of this command should show a list of your GPUs. If it doesn't,
    check that you followed all steps correctly when setting up the NVIDIA
    Container Toolkit, or take a look at the following NVIDIA Docker issue.

    If you wish to run AlphaFold using Singularity (a common containerization
    platform on HPC systems), we recommend using one of the third-party
    Singularity setups linked in
    https://github.com/deepmind/alphafold/issues/10 or
    https://github.com/deepmind/alphafold/issues/24.
-   Build the Docker image:

        docker build -f docker/Dockerfile -t alphafold .

    If you encounter the following error:

        W: GPG error: https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY A4B469963BF863CC
        E: The repository 'https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 InRelease' is not signed.

    use the workaround described in
    https://github.com/deepmind/alphafold/issues/463#issuecomment-1124881779.
-   Install the `run_docker.py` dependencies. Note: you may optionally wish to
    create a Python Virtual Environment to prevent conflicts with your system's
    Python environment.

        pip3 install -r docker/requirements.txt
-   Make sure that the output directory exists (the default is `/tmp/alphafold`)
    and that you have sufficient permissions to write into it.
-   Run `run_docker.py` pointing to a FASTA file containing the protein
    sequence(s) for which you wish to predict the structure (the
    `--fasta_paths` parameter). AlphaFold will search for available templates
    released before the date specified by the `--max_template_date` parameter;
    this can be used to avoid certain templates during modeling. `--data_dir`
    is the directory with the downloaded genetic databases and `--output_dir`
    is the absolute path to the output directory.

        python3 docker/run_docker.py \
          --fasta_paths=your_protein.fasta \
          --max_template_date=2022-01-01 \
          --data_dir=$DOWNLOAD_DIR \
          --output_dir=/home/user/absolute_path_to_the_output_dir

-   Once the run is over, the output directory will contain the predicted
    structures of the target protein. Please check the documentation below for
    additional options and troubleshooting tips.
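The run step above can be wrapped in a small helper that sanity-checks its arguments before launching a long Docker run. This is a sketch: the function name `alphafold_cmd` and the default paths are our own illustrative assumptions, not part of the repository.

```shell
#!/bin/sh
# Sketch: build (but do not execute) the run_docker.py invocation shown above,
# refusing obviously bad inputs first. The helper name is an assumption.
alphafold_cmd() {
  fasta="$1"; data_dir="$2"; out_dir="$3"
  # run_docker.py expects an absolute output path, so reject relative ones.
  case "$out_dir" in
    /*) ;;
    *) echo "error: --output_dir must be an absolute path" >&2; return 1 ;;
  esac
  printf 'python3 docker/run_docker.py --fasta_paths=%s --max_template_date=2022-01-01 --data_dir=%s --output_dir=%s\n' \
    "$fasta" "$data_dir" "$out_dir"
}

# Dry run: print the command; pipe to `sh` (or eval it) to actually execute.
alphafold_cmd your_protein.fasta "${DOWNLOAD_DIR:-/data/databases}" /tmp/alphafold
```

Printing the command first makes it easy to review the exact flags before committing to a run that can take hours.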
## Genetic databases

This step requires `aria2c` to be installed on your machine.
AlphaFold needs multiple genetic (sequence) databases to run:
- BFD,
- MGnify,
- PDB70,
- PDB (structures in the mmCIF format),
- PDB seqres – only for AlphaFold-Multimer,
- UniRef30 (FKA UniClust30),
- UniProt – only for AlphaFold-Multimer,
- UniRef90.
We provide a script, `scripts/download_all_data.sh`, that can be used to
download and set up all of these databases:
-   Recommended default:

        scripts/download_all_data.sh <DOWNLOAD_DIR>

    will download the full databases.

-   With the `reduced_dbs` parameter:

        scripts/download_all_data.sh <DOWNLOAD_DIR> reduced_dbs

    will download a reduced version of the databases to be used with the
    `reduced_dbs` database preset. Pair this with the corresponding AlphaFold
    parameter `--db_preset=reduced_dbs` later during the AlphaFold run (please
    see the AlphaFold parameters section).
:ledger: Note: The download directory `<DOWNLOAD_DIR>` should not be a
subdirectory of the AlphaFold repository directory. If it is, the Docker build
will be slow as the large databases will be copied during image creation.
We don't provide exactly the database versions used in CASP14 – see the note on reproducibility. Some of the databases are mirrored for speed, see mirrored databases.
:ledger: Note: The total download size for the full databases is around 556 GB and the total size when unzipped is 2.62 TB. Please make sure you have enough disk space, bandwidth and time to download them. We recommend using an SSD for better genetic search performance.
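Because an interrupted download wastes hours, it can help to verify free space before starting. The sketch below is a minimal pre-flight check; the helper name `check_space` and the 3000 GB threshold are our own assumptions, derived from the sizes quoted above.

```shell
#!/bin/sh
# Sketch: check available disk space before running scripts/download_all_data.sh.
# Helper name and threshold are illustrative assumptions.
check_space() {
  dir="$1"; required_gb="$2"
  # `df -P` prints available space in 1K blocks on the second output line.
  avail_gb=$(( $(df -P "$dir" | awk 'NR==2 {print $4}') / 1024 / 1024 ))
  if [ "$avail_gb" -lt "$required_gb" ]; then
    echo "only ${avail_gb} GB free in ${dir}; need at least ${required_gb} GB"
    return 1
  fi
  echo "OK: ${avail_gb} GB free in ${dir}"
}

# ~2.62 TB unpacked, so require roughly 3000 GB for the full databases.
if check_space "${DOWNLOAD_DIR:-/tmp}" 3000; then
  echo "enough space for the full databases"
fi
```

The same check with a smaller threshold works for the `reduced_dbs` download.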
:ledger: Note: If the download directory and datasets don't have full read and
write permissions, this can cause errors with the MSA tools, with opaque
(external) error messages. Please ensure the required permissions are applied,
e.g. with the `sudo chmod 755 --recursive "$DOWNLOAD_DIR"` command.
The `download_all_data.sh` script will also download the model parameter files.
Once the script has finished, you should have the following directory structure:
    $DOWNLOAD_DIR/                             # Total: ~2.62 TB (download: 556 GB)
        bfd/                                   # ~1.8 TB (download: 271.6 GB)
            # 6 files.
        mgnify/                                # ~120 GB (download: 67 GB)
            mgy_clusters_2022_05.fa
        params/                                # ~5.3 GB (download: 5.3 GB)
            # 5 CASP14 models,
            # 5 pTM models,
            # 5 AlphaFold-Multimer models,
            # LICENSE,
            # = 16 files.
        pdb70/                                 # ~56 GB (download: 19.5 GB)
            # 9 files.
        pdb_mmcif/                             # ~238 GB (download: 43 GB)
            mmcif_files/
                # About 199,000 .cif files.
            obsolete.dat
        pdb_seqres/                            # ~0.2 GB (download: 0.2 GB)
            pdb_seqres.txt
        small_bfd/                             # ~17 GB (download: 9.6 GB)
            bfd-first_non_consensus_sequences.fasta
        uniref30/                              # ~206 GB (download: 52.5 GB)
            # 7 files.
        uniprot/                               # ~105 GB (download: 53 GB)
            uniprot.fasta
        un
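As a final check, the sketch below asserts that the top-level directories from the listing above are present. The helper name `check_dbs` is an illustrative assumption; it only checks directory existence, not contents or sizes.

```shell
#!/bin/sh
# Sketch: verify that the expected top-level database directories exist.
# Helper name is an assumption; extend the list if your setup differs.
check_dbs() {
  root="$1"; missing=0
  for d in bfd mgnify params pdb70 pdb_mmcif pdb_seqres small_bfd uniref30 uniprot; do
    if [ ! -d "$root/$d" ]; then
      echo "missing: $root/$d"
      missing=1
    fi
  done
  return "$missing"
}

if ! check_dbs "${DOWNLOAD_DIR:-/tmp/alphafold_dbs}"; then
  echo "database setup looks incomplete; re-run scripts/download_all_data.sh"
fi
```

Running this after `download_all_data.sh` finishes catches partially completed downloads before a prediction fails mid-run.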
