Filternet
A Python library of learning-aided filters. Implements the Kalman filter, Extended Kalman filter, KalmanNet, Split-KalmanNet, and more.
Notice
🎉🎉🎉Excited to share that our paper "LAKALMANTRACKER: ROBUST LEARNING-AIDED KALMAN FILTERING FOR MULTI-OBJECT TRACKING" has been accepted to ICASSP 2026! 🎉🎉🎉
We will upload all of the code once the paper submitted to *Information Fusion* has been accepted.
You can view papers related to Learning-Aided Filtering through the following links.
🥳 What's New
- Feb. 2025: 🌟🌟🌟🌟🌟 Added the NCLT Fusion task benchmark (with WandB logger) and the Lorenz Attractor benchmark (with WandB logger).
- Feb. 2025: 🌟🌟🌟 First commit. Added support for model-based Kalman filter, Extended Kalman filter, Interacting Multiple model, and learning-aided Kalman filtering KalmanNet, Split-KalmanNet, DANSE.
Introduction
This library provides learning-aided/data-driven Kalman filtering and related optimal and non-optimal filtering software in Python. It contains the Kalman filter, Extended Kalman filter, KalmanNet, Split-KalmanNet, and our Semantic-Independent KalmanNet (submitted to *Information Fusion*, under review). The library is implemented with Pytorch-Lightning, MMEngine, and WandB.
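To ground the terminology, here is a minimal sketch of the model-based Kalman filter recursion (predict + update) in NumPy. This is illustrative only; the class and method names below are hypothetical and are not Filternet's API.

```python
# Minimal Kalman filter sketch (predict + update) in NumPy.
# Names (KalmanFilter, predict, update) are illustrative, not Filternet's API.
import numpy as np

class KalmanFilter:
    def __init__(self, F, H, Q, R, x0, P0):
        self.F, self.H, self.Q, self.R = F, H, Q, R  # system matrices
        self.x, self.P = x0, P0                      # state mean / covariance

    def predict(self):
        # x_{t|t-1} = F x_{t-1},  P_{t|t-1} = F P F^T + Q
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x

    def update(self, z):
        # Kalman gain: K = P H^T (H P H^T + R)^{-1}
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(len(self.x)) - K @ self.H) @ self.P
        return self.x

# Toy 1-D example: constant state observed directly with noise.
kf = KalmanFilter(F=np.eye(1), H=np.eye(1), Q=0.01 * np.eye(1),
                  R=0.1 * np.eye(1), x0=np.zeros(1), P0=np.eye(1))
for z in [1.0, 1.1, 0.9, 1.0]:
    kf.predict()
    est = kf.update(np.array([z]))
```

Learning-aided variants such as KalmanNet keep this recursion but replace the analytically computed gain `K` with the output of a trained network.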
Highlights
Learning-Aided Kalman Filtering
- **Unified data structure**

  Published learning-aided Kalman filtering implementations each have their own data conventions, which makes comparing algorithms difficult. We therefore use a unified data structure for all supported algorithms, so the user only needs to change the dataset to compare algorithms seamlessly.

- **Multiple tasks supported**

  Makes it easy to compare the performance of your own algorithms on different tasks, such as the Lorenz Attractor, NCLT fusion, NCLT estimation, and motion estimation.

- **Easy to develop your own models**

  Many basic modules have been implemented, e.g. CV and CA motion models, which can be easily extended into your own models.

- **Support for multiple GPUs and batches**

  The code supports multi-GPU as well as mini-batch training (not supported by the original implementations of many papers, e.g. KalmanNet and DANSE).
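As an illustration of the basic modules mentioned above, a 2-D CV (constant-velocity) motion model can be written in a few lines. The function below is a self-contained NumPy sketch; its name and signature are assumptions, not Filternet's actual module API.

```python
# Sketch of a 2-D constant-velocity (CV) motion model, of the kind used for
# motion estimation. Illustrative only; not Filternet's actual API.
import numpy as np

def cv_model(dt: float):
    """Return (F, H, Q) for a 2-D CV model.

    State: [x, y, vx, vy]; observation: [x, y].
    """
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)
    # White-noise-acceleration process noise; q scales the noise power.
    q = 1.0
    G = np.array([[0.5 * dt**2, 0],
                  [0, 0.5 * dt**2],
                  [dt, 0],
                  [0, dt]])
    Q = q * G @ G.T
    return F, H, Q

F, H, Q = cv_model(dt=0.1)
state = np.array([0.0, 0.0, 1.0, 2.0])  # moving at (1, 2) units/s
state = F @ state                       # propagate one step
```

A CA (constant-acceleration) model follows the same pattern with an extended state `[x, y, vx, vy, ax, ay]`.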
Advanced Features
- We use Pytorch-Lightning to simplify the training process. It provides a rich API that saves a lot of time writing engineering code, such as DDP, loggers, and training loops.

- We use MMEngine.Config to manage the model's config. There are several benefits to using a config file to manage model training:
  - Backup & Restore: avoids internal code modifications and improves the reproducibility of experiments.
  - Flexible: the config file provides a fast and flexible way to modify training hyperparameters.
  - Friendliness: config files are separated from the model/training code; by reading the config file, the user can quickly understand the hyperparameters of different models as well as the training strategies, such as the optimizer, scheduler, and data augmentation.

- We use WandB to visualize the training log. Pytorch-Lightning supports a variety of loggers, such as TensorBoard and WandB, but in this project we use WandB as the default logger because it makes it very easy to share training logs and to collaborate with others. In the future, we will share the logs of all models in WandB, so you can easily view and compare the performance and convergence speed of different models.
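For orientation, an MMEngine-style config file is plain Python defining dictionaries, loaded with `mmengine.Config.fromfile(...)`. Every key below is a hypothetical example, not Filternet's actual schema; consult the repo's config directory for the real fields.

```python
# Hypothetical MMEngine-style config file (e.g. configs/knet_lorenz.py).
# All keys are illustrative assumptions, not Filternet's actual schema.
model = dict(type='KalmanNet', state_dim=3, obs_dim=3)
train_cfg = dict(max_epochs=100)
optimizer = dict(type='Adam', lr=1e-3, weight_decay=1e-4)
scheduler = dict(type='StepLR', step_size=30, gamma=0.1)
logger = dict(type='WandbLogger', project='filternet')
```

Because the config is separate from the code, swapping the optimizer or scheduler is a one-line edit, and the file itself documents the run for reproducibility.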
Model Zoo
Learning-Aided Kalman Filtering
<div align="center"> <b>Overview</b> </div>
<table align="center">
<tbody>
<tr align="center" valign="center">
<td> <b>Supported methods</b> </td>
<td> <b>Supported datasets</b> </td>
<td> <b>Supported Tasks</b> </td>
<td> <b>Others</b> </td>
</tr>
<tr valign="top">
<td>
<ul>
<li><a href="https://ieeexplore.ieee.org/document/9733186">KalmanNet (ICASSP'2021, TSP'2022)</a></li>
<li><a href="https://ieeexplore.ieee.org/abstract/document/10120968">Split-KalmanNet (TVT'2023)</a></li>
<li><a href="https://ieeexplore.ieee.org/document/10289946">DANSE (EUSIPCO'2023, TSP'2024)</a></li>
</ul>
</td>
<td>
<ul>
<li><a href="https://ieeexplore.ieee.org/document/9733186">Lorenz</a></li>
<li><a href="http://journals.sagepub.com/doi/10.1177/0278364915614638">NCLT</a></li>
<li><a href="">MOT17/MOT20/DanceTrack/SoccerNet for Motion Estimation</a></li>
</ul>
</td>
<td>
<ul>
<li><a href="https://ieeexplore.ieee.org/document/9733186">State Estimation</a></li>
<li><a href="https://ieeexplore.ieee.org/document/10605082">Sensor Fusion</a></li>
<li><a href="">Motion Estimation</a></li>
</ul>
</td>
<td>
<ul>
<li><b>Supported Loss</b>
<ul>
<li><a href="">MSELoss</a></li>
<li><a href="">SmoothL1Loss</a></li>
<li><a href="https://ieeexplore.ieee.org/document/10485649/">DANSELoss</a></li>
<li><a href="">Any PyTorch Loss Function For Regression</a></li>
</ul>
</li>
<li><b>Supported Training Strategy</b>
<ul>
<li><a href="http://ieeexplore.ieee.org/document/58337/">Standard BPTT</a></li>
<li><a href="https://ieeexplore.ieee.org/document/10605082">Alternative TBPTT</a></li>
</ul>
</li>
</ul>
</td>
</tr>
</tbody>
</table>

Model-Based Kalman Filtering
<div align="center"> <b>Overview</b> </div>
<table align="center">
<tbody>
<tr align="center" valign="center">
<td> <b>Supported methods</b> </td>
<td> <b>Supported datasets</b> </td>
<td> <b>Supported Tasks</b> </td>
</tr>
<tr valign="top">
<td>
<ul>
<li><a href="https://ieeexplore.ieee.org/abstract/document/5311910">Kalman filter</a></li>
<li><a href="https://ieeexplore.ieee.org/document/1102206">Extended Kalman filter</a></li>
<li><a href="https://ieeexplore.ieee.org/document/1299">Interacting Multiple Model</a></li>
</ul>
</td>
<td>
<ul>
<li><a href="https://ieeexplore.ieee.org/document/9733186">Lorenz</a></li>
<li><a href="http://journals.sagepub.com/doi/10.1177/0278364915614638">NCLT</a></li>
<li><a href="">MOT17/MOT20/DanceTrack/SoccerNet for Motion Estimation</a></li>
</ul>
</td>
<td>
<ul>
<li><a href="https://ieeexplore.ieee.org/document/9733186">State Estimation</a></li>
<li><a href="https://ieeexplore.ieee.org/document/10605082">Sensor Fusion</a></li>
<li><a href="">Motion Estimation</a></li>
</ul>
</td>
</tr>
</tbody>
</table>

Abbrv
| Abbrv | Method |
| :---: | :----------------------------: |
| KNet | KalmanNet |
| SKNet | Split-KalmanNet |
| DANSE | DANSE |
| SIKNet | Semantic-Independent KalmanNet |
Supervised Learning or Unsupervised Learning?
| Methods | Supervised Learning | Unsupervised Learning |
| :-----: | :-----------------: | :-------------------: |
| KNet | ✔ | ✔ |
| SKNet | ✔ | ✔ |
| DANSE | ✔ | ✔ |
| SIKNet | ✔ | ✔ |
Benchmark
✨Note
- The number of parameters of a given model is not fixed: the network's parameter count usually depends on the dimensions of the system state and the observation, and these dimensions differ between tasks. Therefore, the parameter count of the same model may vary greatly across tasks.
- 🚩🚩 These models are extremely sensitive to numerical values, and different machines/hyperparameters may cause drastic changes in performance (metrics may come out slightly lower or higher than in the original papers). We report the best metrics we were able to obtain for each model.
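To illustrate the first point above: for GRU-based models such as KalmanNet, the parameter count grows with the input size, which is derived from the state/observation dimensions. The calculation below uses the standard GRU parameter formula (PyTorch convention, with both bias vectors); the example dimensions are hypothetical, and this is not Filternet code.

```python
# Why parameter counts vary across tasks: a GRU layer's parameter count
# depends on its input size, which in KalmanNet-style models comes from the
# state/observation dimensions. Example dimensions below are hypothetical.
def gru_param_count(input_size: int, hidden_size: int) -> int:
    # weight_ih: (3h, in), weight_hh: (3h, h), bias_ih + bias_hh: 2 * 3h
    return 3 * hidden_size * input_size + 3 * hidden_size**2 + 6 * hidden_size

# Same hidden size, different task dimensionalities:
lorenz = gru_param_count(input_size=6, hidden_size=64)     # low-dim state task
tracking = gru_param_count(input_size=12, hidden_size=64)  # higher-dim task
```

With the hidden size held fixed, doubling the input size already changes the layer's parameter count, so identical architectures report different totals on different tasks.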
Motion Estimation in MOT Datasets
| Methods | Recall@50 | Recall@75 | Recall@50:95 |
| :-----: | :-------: | :-------: | :----------: |
| KF | | | |
| KNet | | | |
| SKNet | | | |
| SIKNet | | | |
