Angel
A Flexible and Powerful Parameter Server for large-scale machine learning

Angel is a high-performance distributed machine learning and graph computing platform based on the Parameter Server philosophy. It is tuned for performance on big data from Tencent, offers wide applicability and stability, and demonstrates an increasing advantage in handling higher-dimensional models. Angel is jointly developed by Tencent and Peking University, taking into account both high availability in industry and innovation in academia.
Following a model-centered design, Angel partitions the parameters of complex models across multiple parameter-server nodes, and implements a variety of machine learning and graph algorithms using efficient model-updating interfaces and functions, as well as a flexible consistency model for synchronization.
Angel is developed in Java and Scala and supports running on Yarn. Through its PS Service abstraction, it supports Spark on Angel. Support for graph computing and deep learning frameworks is under development and will be released in the future.
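To make the partition / pull / push cycle described above concrete, here is a toy in-memory sketch of the parameter-server idea: parameters are spread across shards, and workers pull current values and push gradient increments that the server merges atomically. All class and method names here are hypothetical illustrations, not Angel's actual API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Conceptual sketch only: a toy parameter server illustrating partitioning,
// pull, and increment. Not Angel's real interfaces.
public class ToyParameterServer {
    // Model parameters partitioned across several "server" shards by key.
    private final Map<Integer, Double>[] shards;

    @SuppressWarnings("unchecked")
    public ToyParameterServer(int numShards) {
        shards = new Map[numShards];
        for (int i = 0; i < numShards; i++) {
            shards[i] = new ConcurrentHashMap<>();
        }
    }

    private Map<Integer, Double> shardOf(int key) {
        return shards[Math.floorMod(key, shards.length)];
    }

    // Workers pull the current value of a parameter from its shard.
    public double pull(int key) {
        return shardOf(key).getOrDefault(key, 0.0);
    }

    // Workers push gradient increments; the merge is atomic per key,
    // so concurrent updates from different workers are not lost.
    public void increment(int key, double delta) {
        shardOf(key).merge(key, delta, Double::sum);
    }

    public static void main(String[] args) {
        ToyParameterServer ps = new ToyParameterServer(4);
        ps.increment(7, 0.5);  // update from worker A
        ps.increment(7, 0.25); // update from worker B
        System.out.println(ps.pull(7)); // prints 0.75
    }
}
```

A real system additionally controls *when* workers see each other's updates; that is the role of the consistency model (e.g. fully synchronous vs. bounded-staleness synchronization) mentioned above.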
We welcome everyone interested in machine learning or graph computing to contribute code, create issues, or open pull requests. Please refer to the Angel Contribution Guide for more details.
Introduction to Angel
Design
Quick Start
Deployment
- Compilation Guide
- Running on Local
- Running on Yarn
- Configuration Details
- Resource Configuration Guide
Programming Guide
Algorithm
- Angel or Spark On Angel?
- Algorithm Parameter Description
- Angel
- Traditional Machine Learning Methods
- Spark on Angel
Community
- Mailing list: angel-tsc@lists.deeplearningfoundation.org
- Angel homepage at the Linux Foundation: https://angelml.ai/
- Committers & Contributors
- Contributing to Angel
- Roadmap
FAQ
Papers
- PaSca: A Graph Neural Architecture Search System under the Scalable Paradigm. WWW, 2022.
- Graph Attention Multi-Layer Perceptron. KDD, 2022.
- Node Dependent Local Smoothing for Scalable Graph Learning. NeurIPS, 2021.
- PSGraph: How Tencent Trains Extremely Large-scale Graphs with Spark? ICDE, 2020.
- DimBoost: Boosting Gradient Boosting Decision Tree to Higher Dimensions. SIGMOD, 2018.
- LDA*: A Robust and Large-scale Topic Modeling System. VLDB, 2017.
- Heterogeneity-aware Distributed Parameter Servers. SIGMOD, 2017.
- Angel: A New Large-scale Machine Learning System. National Science Review (NSR), 2017.
- TencentBoost: A Gradient Boosting Tree System with Parameter Server. ICDE, 2017.