LibMTL

A PyTorch Library for Multi-Task Learning


LibMTL is an open-source library built on PyTorch for Multi-Task Learning (MTL). See the latest documentation for detailed introductions and API instructions.

:star: Star us on GitHub — it motivates us a lot!

:bangbang: A comprehensive survey on Gradient-based Multi-Objective Deep Learning is now available on arXiv, along with an awesome list. Check it out!

News

  • [Apr 21 2025] Added support for UPGrad.
  • [Feb 18 2025] Added support for a bilevel method, Auto-Lambda (TMLR 2022).
  • [Feb 17 2025] Added support for FAMO (NeurIPS 2023), SDMGrad (NeurIPS 2023), and MoDo (NeurIPS 2023; JMLR 2024).
  • [Feb 06 2025] Added support for two bilevel methods: MOML (NeurIPS 2021; AIJ 2024) and FORUM (ECAI 2024).
  • [Sep 19 2024] Added support for FairGrad (ICML 2024).
  • [Aug 31 2024] Added support for ExcessMTL (ICML 2024).
  • [Jul 24 2024] Added support for STCH (ICML 2024).
  • [Feb 08 2024] Added support for DB-MTL.
  • [Aug 16 2023] Added support for MoCo (ICLR 2023). Many thanks to the author, @heshandevaka, for the help.
  • [Jul 11 2023] Paper accepted to JMLR.
  • [Jun 19 2023] Added support for Aligned-MTL (CVPR 2023).
  • [Mar 10 2023] Added QM9 and PAWS-X examples.
  • [Jul 22 2022] Added support for Nash-MTL (ICML 2022).
  • [Jul 21 2022] Added support for Learning to Branch (ICML 2020). Many thanks to @yuezhixiong (#14).
  • [Mar 29 2022] Paper is now available on arXiv.

Features

  • Unified: LibMTL provides a unified code base and a consistent evaluation procedure, covering data processing, metrics, and hyper-parameters, on several representative MTL benchmark datasets. This enables quantitative, fair, and consistent comparisons between different MTL algorithms.
  • Comprehensive: LibMTL supports many state-of-the-art MTL methods, including 8 architectures and 16 optimization strategies, and provides fair comparisons on several benchmark datasets covering different fields.
  • Extensible: LibMTL follows modular design principles, so users can flexibly and conveniently add customized components or make personalized modifications. With LibMTL's support, users can quickly develop novel optimization strategies and architectures, or apply existing MTL algorithms to new application scenarios.
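To make the unified training procedure these features describe concrete, here is a minimal, library-agnostic sketch of one equal-weighting (EW) multi-task step in plain PyTorch. All module and tensor names here are invented for illustration and are not LibMTL's API: each task's loss is computed from a shared representation, the losses are averaged, and a single backward pass updates everything.

```python
import torch
import torch.nn as nn

# Toy setup (all names invented): a shared backbone feeding two task heads.
torch.manual_seed(0)
encoder = nn.Linear(8, 16)  # shared representation
heads = {"t1": nn.Linear(16, 1), "t2": nn.Linear(16, 3)}
criteria = {"t1": nn.MSELoss(), "t2": nn.CrossEntropyLoss()}

params = list(encoder.parameters()) + [p for h in heads.values() for p in h.parameters()]
opt = torch.optim.SGD(params, lr=0.1)

x = torch.randn(4, 8)
targets = {"t1": torch.randn(4, 1), "t2": torch.randint(0, 3, (4,))}

rep = encoder(x)  # one shared forward pass
task_losses = {t: criteria[t](heads[t](rep), targets[t]) for t in heads}

# Equal Weighting (EW): the joint loss is the unweighted mean of task losses.
loss = torch.stack(list(task_losses.values())).mean()
opt.zero_grad()
loss.backward()
opt.step()
print(f"joint loss: {loss.item():.4f}")
```

Other weighting strategies in the table below differ only in how the per-task losses (or their gradients) are combined at this step.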

Overall Framework

(Figure: overall framework of LibMTL)

Each module is introduced in the Docs.
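The encoder/decoder split in the framework above can be sketched as a small hard-parameter-sharing model in plain PyTorch. The class and attribute names below are illustrative only, not LibMTL's actual modules: one shared encoder produces a representation that every task-specific decoder consumes.

```python
import torch
import torch.nn as nn

class HardSharing(nn.Module):
    """Illustrative hard-parameter-sharing model (names invented for this
    sketch): one shared encoder, one task-specific decoder per task."""

    def __init__(self, in_dim: int, hidden: int, task_out_dims: dict):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.decoders = nn.ModuleDict(
            {task: nn.Linear(hidden, out) for task, out in task_out_dims.items()}
        )

    def forward(self, x: torch.Tensor) -> dict:
        rep = self.encoder(x)  # shared representation, computed once
        return {task: dec(rep) for task, dec in self.decoders.items()}

model = HardSharing(8, 16, {"depth": 1, "segmentation": 13})
out = model(torch.randn(4, 8))
print({t: tuple(v.shape) for t, v in out.items()})
```

The architectures LibMTL supports vary this basic pattern, e.g. in how much of the network is shared and how task-specific branches are attached.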

Supported Algorithms

LibMTL currently supports the following algorithms:

| Optimization Strategies | Venues | Arguments |
| --- | --- | --- |
| Equal Weighting (EW) | - | --weighting EW |
| Gradient Normalization (GradNorm) | ICML 2018 | --weighting GradNorm |
| Uncertainty Weights (UW) | CVPR 2018 | --weighting UW |
| MGDA (official code) | NeurIPS 2018 | --weighting MGDA |
| Dynamic Weight Average (DWA) (official code) | CVPR 2019 | --weighting DWA |
| Geometric Loss Strategy (GLS) | CVPR 2019 Workshop | --weighting GLS |
| Projecting Conflicting Gradients (PCGrad) | NeurIPS 2020 | --weighting PCGrad |
| Gradient Sign Dropout (GradDrop) | NeurIPS 2020 | --weighting GradDrop |
| Impartial Multi-Task Learning (IMTL) | ICLR 2021 | --weighting IMTL |
| Gradient Vaccine (GradVac) | ICLR 2021 | --weighting GradVac |
| Conflict-Averse Gradient Descent (CAGrad) (official code) | NeurIPS 2021 | --weighting CAGrad |
| MOML | NeurIPS 2021 | --weighting MOML |
| Nash-MTL (official code) | ICML 2022 | --weighting Nash_MTL |
| Random Loss Weighting (RLW) | TMLR 2022 | --weighting RLW |
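To give a flavor of the gradient-manipulation family in the table, here is a minimal sketch of the PCGrad projection rule (NeurIPS 2020) applied to flat gradient vectors: when two task gradients conflict (negative dot product), each one has the conflicting component projected out. This is a simplified, deterministic illustration in plain PyTorch, not LibMTL's implementation (which operates on model parameters and follows the paper's randomized ordering).

```python
import torch

def pcgrad(grads):
    """Simplified PCGrad on flat per-task gradient vectors: for each task
    gradient g_i, subtract its projection onto any other task gradient g_j
    it conflicts with (g_i . g_j < 0), then sum the adjusted gradients."""
    projected = []
    for i, g in enumerate(grads):
        g = g.clone()
        for j, other in enumerate(grads):
            if i == j:
                continue
            dot = torch.dot(g, other)
            if dot < 0:  # conflict: remove the component along `other`
                g = g - dot / other.norm().pow(2) * other
        projected.append(g)
    return torch.stack(projected).sum(dim=0)

# Two conflicting gradients (their dot product is negative).
g1 = torch.tensor([1.0, 0.0])
g2 = torch.tensor([-1.0, 1.0])
merged = pcgrad([g1, g2])
print(merged)  # each projected gradient is orthogonal to the one it conflicted with
```

Other strategies in the table intervene at the same point (the combination of per-task gradients or losses) but with different rules, e.g. CAGrad solves a small optimization problem and MGDA seeks a common descent direction.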
