TrackLab
A Modular End-to-End Tracking Framework for Research and Development
TrackLab is an easy-to-use modular framework for multi-object pose/bbox tracking that supports many methods, datasets, and evaluation metrics.
<p align="center"> <img src="docs/assets/gifs/dancetrack0080.gif" width="30%" style="margin:1%;" alt="DanceTrack"> <img src="docs/assets/gifs/SportsMOT-v_gQNyhv8y0QY_c003.gif" width="30%" style="margin:1%;" alt="SportsMOT"> <img src="docs/assets/gifs/MOT17-09.gif" width="30%" style="margin:1%;" alt="MOT17"> <br> <img src="docs/assets/gifs/SportsMOT-v_4LXTUim5anY_c002.gif" width="30%" style="margin:1%;" alt="SportsMOT"> <img src="docs/assets/gifs/BEE24-13.gif" width="30%" style="margin:1%;" alt="BEE24"> <img src="docs/assets/gifs/SportsMOT-v_CW0mQbgYIF4_c004.gif" width="30%" style="margin:1%;" alt="SportsMOT"> </p>

News
- [2025.05.22] Added many more detectors (`YOLO`, `YOLOX`, `RTMDet`, `RTDetr`) and pose estimators (`YOLO-pose`, `RTMO`, `VITPose`, `RTMPose`).
- [2025.05.02] Released CAMELTrack: Context-Aware Multi-cue ExpLoitation for Online Multi-Object Tracking.
- [2024.02.05] Public release.
Upcoming
- [x] Public release of the codebase.
- [x] Add support for more datasets (`DanceTrack`, `MOTChallenge`, `SportsMOT`, `SoccerNet`, ...).
- [x] Add many more object detectors and pose estimators.
- [ ] Improve documentation and add more tutorials.
How You Can Help
The TrackLab library is in its early stages, and we're eager to evolve it into a robust, mature tracking framework that can benefit the wider community. If you're interested in contributing, feel free to open a pull request or reach out to us!
Introduction
Welcome to the official repository of TrackLab, a modular framework for multi-object tracking. TrackLab is designed for research purposes and supports many types of detectors (bounding box, pose, segmentation), datasets, and evaluation metrics. Every component of TrackLab, such as the detector, tracker, or re-identifier, is configurable via standard YAML files (using the Hydra configuration framework), and the framework is designed to be easily extended to support new methods.
TrackLab is composed of multiple modules:
- Detectors (`YOLO`, `YOLOX`, `RTMDet`, `RTDETR`, ...)
- Pose estimators (`RTMPose`, `RTMO`, `VITPose`, `YOLOPose`, ...)
- Re-identification models (`KPReID`, `BPBReID`, ...)
- Trackers (`DeepSORT`, `StrongSORT`, `OC-SORT`, ...)
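As a sketch of how these modules compose, a run configuration in Hydra style might look like the following (the group and module names here are illustrative examples; the exact keys in TrackLab's own configs may differ):

```yaml
# Illustrative Hydra-style experiment config; the actual config group
# names under tracklab/configs/ may differ from this sketch.
defaults:
  - modules/bbox_detector: yolox   # which detector to plug in
  - modules/reid: bpbreid          # which re-identification model
  - modules/track: oc_sort         # which tracker
  - _self_

# Order in which the selected modules are run on each video.
pipeline: [bbox_detector, reid, track]
```

Swapping one module for another is then a one-line change (or a command-line override), which is what makes the framework modular.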
Here's what makes TrackLab different from other existing tracking frameworks:
- Fully modular framework to quickly integrate any detection/reid/tracking method or develop your own.
- It allows supervised training of the ReID model on the tracking training set.
- It provides a fully configurable visualization tool with the possibility to display any dev/debug information.
- It supports online and offline tracking methods (unlike `MMTracking`, `AlphaPose`, `LightTrack`, and other libraries that only support online tracking).
- It supports many tracking-related tasks:
- Multi-object detection.
- Multi-object (bbox) tracking.
- Multi-person pose tracking.
- Multi-person pose estimation.
- Person re-identification.
Documentation
You can find the documentation at https://trackinglaboratory.github.io/tracklab/ or in the docs/ folder.
After installing, you can run `make html` inside the docs/ folder to build an HTML version of the documentation.
Installation Guide
[Recommended] Using uv
uv is a fast Python package and virtual environment manager that simplifies project setup and dependency management. Follow the official instructions to install uv.
If you just want to use TrackLab directly:

```bash
uv venv --python 3.12
uv pip install tracklab
uv run tracklab
```
If you're integrating TrackLab into a project:

```bash
uv init
uv add tracklab
uv run tracklab
```
To update and run:

```bash
uv run -U tracklab
```
Using conda
Follow the instructions to install conda.
Create a conda environment with the required dependencies and install TrackLab:

```bash
conda create -n tracklab pip python=3.12 pytorch==2.6 torchvision==0.21 pytorch-cuda=12.4 -c pytorch -c nvidia -y
conda activate tracklab
pip install tracklab
```
> [!NOTE]
> Make sure your system's GPU and CUDA drivers are compatible with `pytorch-cuda=12.4`. Refer to the PyTorch compatibility matrix and adjust if needed.
To update later:

```bash
pip install -U tracklab
```
Manual Installation
You can install TrackLab directly from source using uv:

```bash
git clone https://github.com/TrackingLaboratory/tracklab.git
cd tracklab
uv run tracklab
```
Since uv is used under the hood, it will automatically create a virtual environment for you and update the dependencies as you change them. Alternatively, you can install with conda; in that case, run the following from inside an activated environment:

```bash
pip install -e .
```
External Dependencies
Some optional advanced modules and datasets require additional setup:
- For MMDet, MMPose, and OpenPifPaf: please refer to their respective documentation for installation instructions.
- For BPBReID and KPReID: install using `[uv] pip install "torchreid@git+https://github.com/victorjoos/keypoint_promptable_reidentification"`.
- Get the SoccerNet Tracking dataset here, rename the root folder to `SoccerNetMOT`, and put it under the global dataset directory (specified by the `data_dir` config as explained below). Otherwise, you can modify the `dataset_path` config in soccernet_mot.yaml to point to your custom SoccerNet dataset directory.
Setup
You will need to set up some variables before running the code:
- In configs/config.yaml:
  - `data_dir`: the directory where you will store the different datasets (must be an absolute path!)
  - All the parameters under the "Machine configuration" header
- In the corresponding module configs (tracklab/configs/modules/.../....yaml):
  - The `batch_size`
  - Optionally, the model hyperparameters
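For instance, the first of those edits might look like this (the path below is a placeholder; use your own absolute path):

```yaml
# configs/config.yaml (illustrative value only)
data_dir: /home/user/datasets   # must be an absolute path
```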
To launch TrackLab with the default configuration defined in configs/config.yaml, simply run:

```bash
tracklab
```
This command will create a directory called `outputs` with a `${experiment_name}/yyyy-mm-dd/hh-mm-ss/` structure. All the output files (logs, models, visualizations, ...) from a run will be put inside this directory.
If you want to override some configuration parameters, e.g. to use another detection module or dataset, you can do so by modifying the corresponding parameters directly in the .yaml files under configs/.
All parameters are also configurable from the command line (more info on Hydra's override grammar here), e.g.:

```bash
tracklab 'data_dir=${project_dir}/data' 'model_dir=${project_dir}/models' modules/reid=bpbreid pipeline=[bbox_detector,reid,track]
```
`${project_dir}` is a variable that resolves to the root of the project you're running the code in. When using it in a command, make sure to wrap the argument in single quotes ('), as the variable would otherwise be expanded by your shell as an environment variable.
Configuration Options
To find all the (many) configuration options you have, use:

```bash
tracklab --help
```
The first section contains the configuration groups with all the available models/datasets/visualizations/..., while the second section
shows all the possible options you can modify.
Framework Overview
Hydra Configuration
TODO: Describe the TrackLab + Hydra configuration system.
Architecture
Here is an overview of the important TrackLab classes:
- TrackingDataset: Abstract class to be instantiated when adding a new dataset. The `TrackingDataset` contains one `TrackingSet` for each split of the dataset (train, val, test, etc.).
  - Example: SoccerNetMOT, the SoccerNet Tracking dataset.
- TrackingSet: A tracking set contains three Pandas dataframes:
  - `video_metadatas`: one row of information per video (e.g. fps, width, height, etc.).
  - `image_metadatas`: one row of information per image (e.g. frame_id, video_id, etc.).
  - `detections_gt`: one row of information per ground-truth detection (e.g. frame_id, video_id, bbox_ltwh, track_id, etc.).
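To make this layout concrete, here is a minimal sketch of what a `TrackingSet`'s dataframes might contain. The column names follow the list above; the sample values (and the `image_id` join key) are invented for illustration and are not taken from a real dataset:

```python
import pandas as pd

# Invented sample data illustrating the three TrackingSet dataframes.
video_metadatas = pd.DataFrame(
    {"video_id": [1], "fps": [25], "width": [1920], "height": [1080]}
)
image_metadatas = pd.DataFrame(
    {"image_id": [10, 11], "video_id": [1, 1], "frame_id": [0, 1]}
)
detections_gt = pd.DataFrame(
    {
        "image_id": [10, 10, 11],
        "video_id": [1, 1, 1],
        "track_id": [7, 8, 7],
        # Bounding boxes in (left, top, width, height) format.
        "bbox_ltwh": [[100, 50, 30, 80], [400, 60, 28, 75], [103, 51, 30, 80]],
    }
)

# Because everything is plain pandas, joining metadata onto detections
# is a one-liner, e.g. attaching each detection's video fps:
detections_with_fps = detections_gt.merge(video_metadatas, on="video_id")
print(detections_with_fps["fps"].tolist())  # -> [25, 25, 25]
```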
- TrackerState: Core class that contains all the information about the current state of the tracker. All modules in the tracking pipeline update the `tracker_state` sequentially. The `tracker_state` contains one key dataframe:
  - `detections_pred`: one row of information per predicted detection (e.g. frame_id, video_id, bbox_ltwh, track_id, reid embedding, etc.).
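Because `detections_pred` is a flat dataframe, per-track trajectories can be recovered with a plain pandas groupby. A small illustrative sketch (column names as listed above, sample values invented):

```python
import pandas as pd

# Invented sample of a detections_pred-style dataframe.
detections_pred = pd.DataFrame(
    {
        "frame_id": [0, 0, 1, 1, 2],
        "video_id": [1, 1, 1, 1, 1],
        "track_id": [7, 8, 7, 8, 7],
        "bbox_ltwh": [
            [100, 50, 30, 80], [400, 60, 28, 75],
            [103, 51, 30, 80], [398, 62, 28, 75],
            [106, 52, 30, 80],
        ],
    }
)

# Group detections by identity to get one frame-ordered trajectory per track.
trajectories = {
    track_id: group.sort_values("frame_id")["bbox_ltwh"].tolist()
    for track_id, group in detections_pred.groupby("track_id")
}
print(len(trajectories[7]))  # track 7 appears in 3 frames -> 3
```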
- TrackingEngine: This class is responsible for executing the entire tracking pipeline on the dataset. It loops over all videos