
TorchJD

Library for Jacobian descent with PyTorch. It enables the optimization of neural networks with multiple losses (e.g. multi-task learning).

Install / Use

/learn @SimplexLab/TorchJD

README

TorchJD logo


TorchJD is a library extending autograd to enable Jacobian descent with PyTorch. It can be used to train neural networks with multiple objectives. In particular, it supports multi-task learning, with a wide variety of aggregators from the literature. It also enables the instance-wise risk minimization paradigm. The full documentation is available at torchjd.org, with several usage examples.

Jacobian descent (JD)

Jacobian descent is an extension of gradient descent supporting the optimization of vector-valued functions. This algorithm can be used to train neural networks with multiple loss functions. In this context, JD iteratively updates the parameters of the model using the Jacobian matrix of the vector of losses (the matrix obtained by stacking each individual loss's gradient as a row). For more details, please refer to Section 2.1 of the paper.
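In symbols (notation chosen here for illustration; see the paper for the precise formulation), one JD step on parameters $\theta \in \mathbb R^n$ with losses $\ell_1, \dots, \ell_m$ is

$$\theta \leftarrow \theta - \eta \, \mathcal A(J(\theta)), \qquad J(\theta) = \begin{bmatrix} g_1^\top \\ \vdots \\ g_m^\top \end{bmatrix}, \qquad g_i = \nabla_\theta \ell_i(\theta),$$

where $\mathcal A$ maps the $m \times n$ Jacobian to an update vector in $\mathbb R^n$ and $\eta$ is the step size. Gradient descent on the mean loss is recovered as the special case $\mathcal A(J) = \frac{1}{m} \sum_i g_i$.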

How does this compare to averaging the different losses and using gradient descent?

Averaging the losses and computing the gradient of the mean is mathematically equivalent to computing the Jacobian and averaging its rows. However, this approach has limitations. If two gradients are conflicting (they have a negative inner product), simply averaging them can result in an update vector that is conflicting with one of the two gradients. Averaging the losses and making a step of gradient descent can thus lead to an increase of one of the losses.
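A small numerical illustration of this failure mode, with two hand-picked 2D gradients (the values are chosen for this sketch, not taken from the paper):

```python
def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

# Two conflicting gradients: their inner product is negative.
g1 = (2.0, 1.0)
g2 = (-1.0, 0.0)
print(dot(g1, g2))  # -2.0, so g1 and g2 conflict

# Averaging the rows of the Jacobian (= gradient of the mean loss).
avg = ((g1[0] + g2[0]) / 2, (g1[1] + g2[1]) / 2)  # (0.5, 0.5)

print(dot(avg, g1))  # 1.5: a step along avg is beneficial to objective 1
print(dot(avg, g2))  # -0.5: but it conflicts with g2, so loss 2 increases
```

A small step in the direction of `avg` thus makes the second loss worse, exactly the situation the picture below depicts.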

This is illustrated in the following picture, in which the two objectives' gradients $g_1$ and $g_2$ are conflicting, and averaging them gives an update direction that is detrimental to the first objective. Note that in this picture, the dual cone, represented in green, is the set of vectors that have a non-negative inner product with both $g_1$ and $g_2$.

[Figure: two conflicting gradients $g_1$ and $g_2$; their average falls outside the dual cone (green), making it detrimental to the first objective.]

With Jacobian descent, $g_1$ and $g_2$ are computed individually and carefully aggregated using an aggregator $\mathcal A$. In this example, the aggregator is the Unconflicting Projection of Gradients $\mathcal A_{\text{UPGrad}}$: it projects each gradient onto the dual cone, and averages the projections. This ensures that the update will always be beneficial to each individual objective (given a sufficiently small step size). In addition to $\mathcal A_{\text{UPGrad}}$, TorchJD supports more than 10 aggregators from the literature.
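The dual-cone projection can be sketched in 2D with plain Python. This is a toy re-implementation of the idea only, not TorchJD's actual UPGrad code (which handles arbitrary numbers of gradients and dimensions); the function names and gradient values are invented for the example:

```python
import math

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

def in_dual_cone(v, g1, g2, eps=1e-12):
    return dot(v, g1) >= -eps and dot(v, g2) >= -eps

def project_dual_cone(v, g1, g2):
    """Euclidean projection of v onto {x : <x,g1> >= 0 and <x,g2> >= 0} (2D only)."""
    if in_dual_cone(v, g1, g2):
        return v
    candidates = [(0.0, 0.0)]  # the cone's apex is always feasible
    # The boundary rays of the dual cone lie along directions orthogonal to g1 or g2.
    for g in (g1, g2):
        for e in ((-g[1], g[0]), (g[1], -g[0])):
            n = math.hypot(*e)
            u = (e[0] / n, e[1] / n)
            t = max(0.0, dot(v, u))       # project v onto the ray through u
            p = (t * u[0], t * u[1])
            if in_dual_cone(p, g1, g2):
                candidates.append(p)
    return min(candidates, key=lambda p: (p[0] - v[0]) ** 2 + (p[1] - v[1]) ** 2)

g1, g2 = (2.0, 1.0), (-1.0, 0.0)    # conflicting: <g1, g2> = -2
p1 = project_dual_cone(g1, g1, g2)  # (0.0, 1.0)
p2 = project_dual_cone(g2, g1, g2)  # (-0.2, 0.4)
update = ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)

# The aggregated update has a non-negative inner product with both gradients.
print(dot(update, g1) >= 0, dot(update, g2) >= 0)  # True True
```

Note that the plain average of these two gradients conflicts with $g_2$, whereas the average of the projections does not: this is the non-conflicting guarantee of $\mathcal A_{\text{UPGrad}}$ in miniature.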

Installation

<!-- start installation -->

TorchJD can be installed directly with pip:

pip install torchjd
<!-- end installation -->

Some aggregators have additional dependencies. Please refer to the installation documentation for details.

Usage

Compared to standard torch, torchjd only changes the way the .grad fields of your model parameters are obtained.

Using the autojac engine

The autojac engine computes and aggregates Jacobians efficiently.

1. backward + jac_to_grad

In standard torch, you generally combine your losses into a single scalar loss and call loss.backward() to compute the gradient of that loss with respect to each model parameter, storing it in the parameter's .grad field. The basic usage of torchjd is to replace this loss.backward() call with torchjd.autojac.backward(losses). Instead of computing the gradient of a scalar loss, it computes the Jacobian of a vector of losses and stores it in the .jac fields of the model parameters. You then call torchjd.autojac.jac_to_grad to aggregate this Jacobian using the specified Aggregator and store the result in the .grad fields of the model parameters. See this usage example for more details.
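To make the two-step mechanics concrete, here is a minimal, library-free sketch of what backward followed by jac_to_grad accomplishes: compute each loss's gradient, stack the gradients into a Jacobian, then aggregate the rows into a single update. Mean aggregation stands in for a real Aggregator, and all values and names are illustrative, not TorchJD internals:

```python
# Two losses of parameters (a, b): l1 = (a - 1)**2 and l2 = (b + 2)**2.
a, b = 0.0, 0.0

# Step 1, the role of backward(losses): compute each loss's gradient and
# stack them as the rows of the Jacobian (what torchjd keeps in .jac fields).
g1 = (2 * (a - 1), 0.0)   # d l1 / d(a, b)
g2 = (0.0, 2 * (b + 2))   # d l2 / d(a, b)
jac = [g1, g2]

# Step 2, the role of jac_to_grad: aggregate the Jacobian rows into a single
# update vector (mean aggregation here, standing in for an Aggregator).
grad = [sum(col) / len(jac) for col in zip(*jac)]
print(grad)  # [-1.0, 2.0]

# One descent step with step size 0.5, as an optimizer would perform.
lr = 0.5
a, b = a - lr * grad[0], b - lr * grad[1]
print((a, b))  # (0.5, -1.0): both losses decreased
```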

2. mtl_backward + jac_to_grad

In the case of multi-task learning, an alternative to torchjd.autojac.backward is torchjd.autojac.mtl_backward. It computes the gradient of each task-specific loss with respect to the corresponding task's parameters and stores it in their .grad fields. It also computes the Jacobian of the vector of losses with respect to the shared parameters and stores it in their .jac fields. The torchjd.autojac.jac_to_grad function can then be called to aggregate this Jacobian, replacing the .jac fields of the shared parameters with .grad fields.

The following example shows how to use TorchJD to train a multi-task model with Jacobian descent, using UPGrad.

    import torch
    from torch.nn import Linear, MSELoss, ReLU, Sequential
    from torch.optim import SGD

    from torchjd.autojac import jac_to_grad, mtl_backward
    from torchjd.aggregation import UPGrad

    shared_module = Sequential(Linear(10, 5), ReLU(), Linear(5, 3), ReLU())
    task1_module = Linear(3, 1)
    task2_module = Linear(3, 1)
    params = [
        *shared_module.parameters(),
        *task1_module.parameters(),
        *task2_module.parameters(),
    ]

    optimizer = SGD(params, lr=0.1)
    aggregator = UPGrad()
    loss_fn = MSELoss()

    inputs = torch.randn(16, 10)
    target1 = torch.randn(16, 1)
    target2 = torch.randn(16, 1)

    features = shared_module(inputs)
    loss1 = loss_fn(task1_module(features), target1)
    loss2 = loss_fn(task2_module(features), target2)

    optimizer.zero_grad()
    # Fill the task parameters' .grad fields and the shared .jac fields.
    mtl_backward(losses=[loss1, loss2], features=features)
    # Aggregate the shared-parameter Jacobian into .grad fields
    # (call written to match the description above).
    jac_to_grad(shared_module.parameters(), aggregator)
    optimizer.step()