# BInD

Official implementation of "BInD: Bond and Interaction-Generating Diffusion Model for Multi-Objective Structure-Based Drug Design" (*Advanced Science*).
<p align="center"> <img src="assets/overview.png" width="1000" height="auto" /> </p>

## Publication ✨

The paper is published in *Advanced Science*; see the citation below.
## Setup

### Installation of Python Packages
```shell
conda create -n bindenv python=3.9 -y
conda activate bindenv

# ML
conda install scipy=1.11.3 numpy=1.26.0 pandas=2.1.1 scikit-learn=1.3.0 -y
conda install pytorch==1.11.0 cudatoolkit=11.3 -c pytorch -y
pip install torch-scatter==2.0.9 torch-sparse==0.6.15 torch-cluster==1.6.0 torch-geometric==2.1.0.post1 -f https://data.pyg.org/whl/torch-1.11.0+cu113.html
pip install tensorboard==2.15.1

# cheminformatics
pip install rdkit==2023.9.2
pip install biopython==1.81
conda install plip=2.3.0 -c conda-forge
conda install -c conda-forge openbabel==3.1.1
pip install meeko==0.1.dev3 scipy pdb2pqr vina==1.2.2
python -m pip install git+https://github.com/Valdes-Tresanco-MS/AutoDockTools_py3
git clone https://github.com/durrantlab/POVME

# posecheck
pip install prolif==2.0.3
git clone https://github.com/cch1999/posecheck.git
cd posecheck
pip install -e .

# utils
pip install pyyaml==6.0.1
pip install easydict==1.13
pip install parmap==1.7.0

# plots
pip install matplotlib==3.8.1
pip install seaborn==0.13.0
```
### Download Data and Trained Checkpoints
| Data | Size | Path |
| :- | -: | :- |
| Raw data | 1.7GB | `data/raw/` |
| Processed data (whole) | 3.7GB | `data/processed/` |
| Processed data (only test) | 3.3MB | `data/processed/` |
| Data split keys | 3.3MB | `data/` |
| POVME data | 0.7MB | `data/` |
| Trained checkpoint | 10.7MB | `save/` |
Download each `.tar.gz` file linked above, extract it, and place the contents at the corresponding path.
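The extract-and-place step can be sketched as follows. The archive and file names below are stand-ins created locally for illustration only; with the real downloads you already have the `.tar.gz` files, so run just the `tar -xzf ... -C <path>` line for each archive and its target path from the table above.

```shell
# Create a stand-in archive purely for illustration; skip this part when
# working with the actual downloaded .tar.gz files.
mkdir -p demo_src
echo "placeholder" > demo_src/example.lmdb
tar -czf demo.tar.gz -C demo_src .

# Extract the archive so its contents land in the target path
# (here data/processed/, per the table above).
mkdir -p data/processed
tar -xzf demo.tar.gz -C data/processed/
ls data/processed/
```

The `-C` flag makes `tar` change into the target directory before extracting, so the archive contents land directly under it.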
## Training BInD From Scratch

### Data Preparation
Warning: The `--recreate` flag will overwrite the existing processed-data directory specified by `--save_dirn`.

```shell
python process.py --recreate --save_dirn ./data/processed/my_data/ --raw_dirn ./data/raw/crossdocked_pocket10
```
### Training
To train BInD with the default settings, use the command below.
You can adjust the training configuration by editing `configs/train.yaml`.
For multi-GPU training, adjust the `n_gpu` and `num_workers` parameters as needed.
Additionally, setting the `pre_load_dataset` option to `yes` loads the dataset into memory in advance, reducing file I/O load.

Warning: Pointing the `save_dirn` parameter at an existing directory will overwrite it, including any training checkpoints saved there.
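As a minimal sketch, the configuration fields mentioned above might look like this in `configs/train.yaml`. Only `save_dirn`, `n_gpu`, `num_workers`, and `pre_load_dataset` are named in this README; the shipped file contains additional keys, and its exact layout may differ.

```yaml
# Fields referenced above; other keys in the shipped configs/train.yaml
# are omitted here. Values are illustrative.
save_dirn: ./save/my_run/   # overwritten if the directory already exists
n_gpu: 1                    # number of GPUs for multi-GPU training
num_workers: 4              # dataloader workers
pre_load_dataset: yes       # load the dataset into memory up front
```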
```shell
python train.py configs/train.yaml
```
## Generating Molecules with BInD

### Molecule Generation for Test Pockets
```shell
python generate_test_pockets.py configs/generate_test_pockets.yaml
```
### Pocket-Conditioned Molecule Generation
```shell
python generate_single_pocket.py configs/generate_single_pocket.yaml
```
## Citation
```bibtex
@article{lee2025bind,
  title={BInD: Bond and Interaction-Generating Diffusion Model for Multi-Objective Structure-Based Drug Design},
  author={Lee, Joongwon and Zhung, Wonho and Seo, Jisu and Kim, Woo Youn},
  journal={Advanced Science},
  pages={e02702},
  year={2025},
  publisher={Wiley Online Library},
  doi={10.1002/advs.202502702},
}
```
## Collaborators
<table> <tr> <td align="center" style="border: none;"> <a href="https://github.com/lee-jwon"> <img src="https://github.com/lee-jwon.png?size=600" width="100" height="100"> <br /> Lee, Joongwon </a> </td> <td align="center" style="border: none;"> <a href="https://github.com/WonhoZhung"> <img src="https://github.com/WonhoZhung.png?size=600" width="100" height="100"> <br /> Zhung, Wonho </a> </td> <td align="center" style="border: none;"> <a href="https://github.com/SeoJisu0305"> <img src="https://github.com/SeoJisu0305.png?size=600" width="100" height="100"> <br /> Seo, Jisu </a> </td> </tr> </table>
