DistMLIP: A Distributed Inference Library for Fast, Large-Scale Atomistic Simulation
About
DistMLIP is an easy-to-use, efficient library for running graph-parallel, multi-GPU simulations using popular machine learning interatomic potentials (MLIPs).
DistMLIP currently supports zero-redundancy multi-GPU inference for MLIPs using graph parallelism. Unlike spatial partitioning (as used in LAMMPS), graph parallelism performs no redundant calculation.
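The redundancy difference can be illustrated with a toy example (plain Python, not DistMLIP code): spatial partitioning must duplicate "ghost" atoms at box boundaries, while partitioning the interaction graph's edges assigns every pairwise interaction to exactly one worker.

```python
# Illustrative sketch (not DistMLIP code): compare spatial partitioning,
# which duplicates boundary atoms, with graph (edge) partitioning,
# which computes every interaction exactly once.

# A 1-D chain of 6 atoms; each atom interacts with its nearest neighbor.
edges = [(i, i + 1) for i in range(5)]  # 5 pairwise interactions

# Spatial partitioning: split atoms by position into two boxes.
# Atoms 2 and 3 sit at the boundary, so each box must also hold a
# redundant "ghost" copy of the neighboring box's boundary atom.
box_a = {0, 1, 2, 3}  # owns atoms 0-2, ghost copy of atom 3
box_b = {2, 3, 4, 5}  # owns atoms 3-5, ghost copy of atom 2
spatial_atom_slots = len(box_a) + len(box_b)  # 8 slots for only 6 atoms

# Graph partitioning: split the *edges* instead; each interaction is
# evaluated on exactly one worker, with no duplicated work.
worker_a = edges[:3]
worker_b = edges[3:]
edges_computed = len(worker_a) + len(worker_b)

print(spatial_atom_slots)  # 8 -> two redundant ghost-atom slots
print(edges_computed)      # 5 -> equals len(edges): zero redundancy
```

The numbers are contrived, but the pattern is the point: in the graph-parallel split, the total work across workers equals the work of a single-device run.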
DistMLIP currently supports the following models:
- CHGNet
- TensorNet
- MACE
- UMA
🚧 This project is under active development
If you find a bug, please raise an issue or notify us. All messages will receive a response within 12 hours.
Getting Started
1. Install PyTorch: https://pytorch.org/get-started/locally/
2. Install DGL (if using the MatGL models): https://www.dgl.ai/pages/start.html
3. Install DistMLIP from pip:

   TODO

   or from source:
```shell
git clone git@github.com:AegisIK/DistMLIP.git
cd DistMLIP

# Only run one of the following installation commands:
pip install -e .[matgl]     # if you're using CHGNet or TensorNet
pip install -e .[mace]      # if you're using MACE
pip install -e .[fairchem]  # if you're using UMA

python setup.py build_ext --inplace
```
Running distributed inference
DistMLIP is a wrapper library that inherits from existing model classes in order to provide distributed inference support. As a result, all features of the original package (whether MatGL, MACE, or UMA) still work. View one of our example notebooks here to get started.
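The inheritance-based wrapper pattern described above can be sketched in plain Python. All class names and signatures below are hypothetical illustrations, not the DistMLIP API; only the `from_existing` method name comes from this README.

```python
# Illustrative sketch of the wrapper pattern (class names hypothetical;
# this is NOT the DistMLIP API). A distributed wrapper subclasses the
# original model, so every attribute and method of the base model keeps
# working, while inference is dispatched across workers.

class BaseMLIP:
    """Stand-in for an original model class (e.g. from MatGL or MACE)."""
    def __init__(self, cutoff: float = 5.0):
        self.cutoff = cutoff

    def predict_energy(self, n_atoms: int) -> float:
        # Toy "potential": pretend energy scales linearly with atom count.
        return -1.5 * n_atoms

class DistributedMLIP(BaseMLIP):
    """Wrapper that inherits the base model and adds multi-worker dispatch."""
    @classmethod
    def from_existing(cls, model: BaseMLIP, n_workers: int = 2):
        dist = cls(cutoff=model.cutoff)  # carry over the base model's state
        dist.n_workers = n_workers
        return dist

    def predict_energy(self, n_atoms: int) -> float:
        # Split atoms across workers, evaluate each shard, then reduce.
        shards = [n_atoms // self.n_workers] * self.n_workers
        shards[0] += n_atoms % self.n_workers
        return sum(BaseMLIP.predict_energy(self, s) for s in shards)

base = BaseMLIP(cutoff=6.0)
dist = DistributedMLIP.from_existing(base, n_workers=2)
print(dist.cutoff)              # 6.0  -- base-model attributes still work
print(dist.predict_energy(10))  # -15.0 -- matches the single-model result
```

Because the wrapper is a subclass rather than a reimplementation, the distributed result reduces to exactly what the original model would produce on one device.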
Although fine-tuning is supported via DistMLIP, we recommend fine-tuning your model with the original model library first, then loading it into DistMLIP via `from_existing` and running distributed inference.
Currently only single node inference is supported. Multi-machine inference is future work.
Roadmap
- [x] Distributing CHGNet
- [x] Distributing TensorNet
- [x] Distributing MACE
- [x] Distributing UMA
- [ ] Multi-machine inference
- [ ] More models coming soon!
Citation
If you use DistMLIP in your research, please cite our paper:
```
@misc{han2025distmlipdistributedinferenceplatform,
  title={DistMLIP: A Distributed Inference Platform for Machine Learning Interatomic Potentials},
  author={Kevin Han and Bowen Deng and Amir Barati Farimani and Gerbrand Ceder},
  year={2025},
  eprint={2506.02023},
  archivePrefix={arXiv},
  primaryClass={cs.DC},
  url={https://arxiv.org/abs/2506.02023},
}
```
Parallelizing a New Model
If you would like to contribute or want us to parallelize your model, please either raise an issue or email kevinhan@cmu.edu.
Contact Us
- If you have any questions, feel free to raise an issue on this repo.
- If you have any feature requests, please raise an issue on this repo.
- For collaborations and partnerships, please email kevinhan@cmu.edu.
- All requests/issues/inquiries will receive a response within 6-12 hours.