LibMTL
A PyTorch Library for Multi-Task Learning
LibMTL is an open-source library built on PyTorch for Multi-Task Learning (MTL). See the latest documentation for detailed introductions and API instructions.
:star: Star us on GitHub — it motivates us a lot!
:bangbang: A comprehensive survey on Gradient-based Multi-Objective Deep Learning is now available on arXiv, along with an awesome list. Check it out!
News
- [Apr 21 2025] Added support for UPGrad.
- [Feb 18 2025] Added support for a bilevel method Auto-Lambda (TMLR 2022).
- [Feb 17 2025] Added support for FAMO (NeurIPS 2023), SDMGrad (NeurIPS 2023), and MoDo (NeurIPS 2023; JMLR 2024).
- [Feb 06 2025] Added support for two bilevel methods: MOML (NeurIPS 2021; AIJ 2024), FORUM (ECAI 2024).
- [Sep 19 2024] Added support for FairGrad (ICML 2024).
- [Aug 31 2024] Added support for ExcessMTL (ICML 2024).
- [Jul 24 2024] Added support for STCH (ICML 2024).
- [Feb 08 2024] Added support for DB-MTL.
- [Aug 16 2023] Added support for MoCo (ICLR 2023). Many thanks to @heshandevaka for the help.
- [Jul 11 2023] Paper got accepted to JMLR.
- [Jun 19 2023] Added support for Aligned-MTL (CVPR 2023).
- [Mar 10 2023] Added QM9 and PAWS-X examples.
- [Jul 22 2022] Added support for Nash-MTL (ICML 2022).
- [Jul 21 2022] Added support for Learning to Branch (ICML 2020). Many thanks to @yuezhixiong (#14).
- [Mar 29 2022] Paper is now available on arXiv.
Table of Contents
- Features
- Overall Framework
- Supported Algorithms
- Supported Benchmark Datasets
- Installation
- Quick Start
- Citation
- Contributors
- Contact Us
- Acknowledgements
- License
Features
- Unified: LibMTL provides a unified code base and a consistent evaluation procedure, including data processing, metric objectives, and hyper-parameters, on several representative MTL benchmark datasets. This allows quantitative, fair, and consistent comparisons between different MTL algorithms.
- Comprehensive: LibMTL supports many state-of-the-art MTL methods, including 8 architectures and 16 optimization strategies. Meanwhile, LibMTL provides a fair comparison on several benchmark datasets covering different fields.
- Extensible: LibMTL follows modular design principles, which allows users to flexibly and conveniently add customized components or make personalized modifications. Therefore, users can quickly develop novel optimization strategies and architectures, or apply existing MTL algorithms to new application scenarios, with the support of LibMTL.
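The modular design described above can be pictured as a small, fixed interface that every loss-weighting strategy implements. The sketch below is illustrative only — the class and method names are our assumptions, not LibMTL's actual API — but it shows the kind of pluggability the design enables: a new strategy is one small class, and the rest of the pipeline is unchanged.

```python
# Illustrative sketch of a pluggable weighting interface.
# Names (WeightingStrategy, combine) are hypothetical, not LibMTL's API.

class WeightingStrategy:
    """Interface a loss-weighting strategy would implement."""

    def combine(self, losses):
        """Combine per-task losses into a single scalar training loss."""
        raise NotImplementedError


class EqualWeighting(WeightingStrategy):
    """Equal Weighting (EW): every task contributes equally."""

    def combine(self, losses):
        return sum(losses) / len(losses)
```

With such an interface, swapping strategies (EW, UW, DWA, ...) amounts to passing a different object to the trainer, which is what the `--weighting` command-line arguments in the table below select.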
Overall Framework

Each module is introduced in Docs.
Supported Algorithms
LibMTL currently supports the following algorithms:
| Optimization Strategies | Venues | Arguments |
| ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------ | --------------------------- |
| Equal Weighting (EW) | - | --weighting EW |
| Gradient Normalization (GradNorm) | ICML 2018 | --weighting GradNorm |
| Uncertainty Weights (UW) | CVPR 2018 | --weighting UW |
| MGDA (official code) | NeurIPS 2018 | --weighting MGDA |
| Dynamic Weight Average (DWA) (official code) | CVPR 2019 | --weighting DWA |
| Geometric Loss Strategy (GLS) | CVPR 2019 Workshop | --weighting GLS |
| Projecting Conflicting Gradient (PCGrad) | NeurIPS 2020 | --weighting PCGrad |
| Gradient sign Dropout (GradDrop) | NeurIPS 2020 | --weighting GradDrop |
| Impartial Multi-Task Learning (IMTL) | ICLR 2021 | --weighting IMTL |
| Gradient Vaccine (GradVac) | ICLR 2021 | --weighting GradVac |
| Conflict-Averse Gradient descent (CAGrad) (official code) | NeurIPS 2021 | --weighting CAGrad |
| MOML | NeurIPS 2021 | --weighting MOML |
| Nash-MTL (official code) | ICML 2022 | --weighting Nash_MTL |
| Random Loss Weighting (RLW) | TMLR 2022 | --weighting RLW |
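Several of the strategies above compute task weights in closed form from recent loss history. Dynamic Weight Average (DWA), for instance, weights each task by how fast its loss has been descending: tasks whose loss is dropping slowly get larger weights. A minimal sketch of the published DWA formula, in plain Python (the function name is ours; the temperature default T=2 follows the DWA paper):

```python
import math

def dwa_weights(prev_losses, prev_prev_losses, T=2.0):
    """Dynamic Weight Average (Liu et al., CVPR 2019).

    Each task's weight is proportional to exp(r_k / T), where
    r_k = L_k(t-1) / L_k(t-2) is the relative descent rate of task k's
    loss over the last two epochs. Weights are rescaled to sum to the
    number of tasks K, so equal descent rates recover equal weighting.
    """
    K = len(prev_losses)
    r = [l1 / l2 for l1, l2 in zip(prev_losses, prev_prev_losses)]
    exps = [math.exp(rk / T) for rk in r]
    s = sum(exps)
    return [K * e / s for e in exps]
```

For example, if one task's loss halved over the last epoch while another stalled, DWA shifts weight toward the stalled task, slowing the fast learner and balancing training progress across tasks.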