TrackLab

A Modular End-to-End Tracking Framework for Research and Development πŸŽ―πŸ”¬

<div align="center">

[CAMELTrack] [Soccernet-Gamestate] [MOT-Taxonomy]

</div>

TrackLab is an easy-to-use modular framework for Multi-Object pose/bbox Tracking that supports many methods, datasets and evaluation metrics.

<p align="center"> <img src="docs/assets/gifs/dancetrack0080.gif" width="30%" style="margin:1%;" alt="DanceTrack"> <img src="docs/assets/gifs/SportsMOT-v_gQNyhv8y0QY_c003.gif" width="30%" style="margin:1%;" alt="SportsMOT"> <img src="docs/assets/gifs/MOT17-09.gif" width="30%" style="margin:1%;" alt="MOT17"> <br> <img src="docs/assets/gifs/SportsMOT-v_4LXTUim5anY_c002.gif" width="30%" style="margin:1%;" alt="SportsMOT"> <img src="docs/assets/gifs/BEE24-13.gif" width="30%" style="margin:1%;" alt="BEE24"> <img src="docs/assets/gifs/SportsMOT-v_CW0mQbgYIF4_c004.gif" width="30%" style="margin:1%;" alt="SportsMOT"> </p>

πŸ—žοΈ News

πŸš€ Upcoming

  • [x] Public release of the codebase.
  • [x] Add support for more datasets (DanceTrack, MOTChallenge, SportsMOT, SoccerNet, ...).
  • [x] Add many more object detectors and pose estimators.
  • [ ] Improve documentation and add more tutorials.

🀝 How You Can Help

The TrackLab library is in its early stages, and we're eager to evolve it into a robust, mature tracking framework that can benefit the wider community. If you're interested in contributing, feel free to open a pull request or reach out to us!

Introduction

Welcome to the official repository of TrackLab, a modular framework for multi-object tracking. TrackLab is designed for research purposes and supports many types of detectors (bounding boxes, pose, segmentation), datasets, and evaluation metrics. Every component of TrackLab, such as the detector, tracker, or re-identifier, is configurable via standard YAML files through the Hydra configuration framework, and TrackLab is designed to be easily extended to support new methods.

TrackLab is composed of multiple modules:

  1. Detectors (YOLO, YOLOX, RTMDet, RTDETR, ...).
  2. Pose Estimators (RTMPose, RTMO, VITPose, YOLOPose, ...).
  3. Re-identification models (KPReID, BPBReID, ...).
  4. Trackers (DeepSORT, StrongSORT, OC-SORT, ...).
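As a rough illustration of how these modules compose, the sketch below chains detector, re-identification, and tracker stages over a shared state; all class and method names here are invented for the example and are not TrackLab's actual API.

```python
# Illustrative sketch of a modular tracking pipeline (hypothetical names,
# not TrackLab's real API): each stage consumes and enriches a shared
# per-frame state dict, so any stage can be swapped out independently.

class Detector:
    def process(self, frame, state):
        # A real detector would run a model; here we emit one dummy box.
        state["detections"] = [{"bbox": (10, 20, 50, 80)}]
        return state

class ReIdentifier:
    def process(self, frame, state):
        for det in state["detections"]:
            det["embedding"] = [0.0] * 4  # placeholder appearance feature
        return state

class Tracker:
    def process(self, frame, state):
        for i, det in enumerate(state["detections"]):
            det["track_id"] = i  # a real tracker matches across frames
        return state

def run_pipeline(stages, frame):
    state = {}
    for stage in stages:
        state = stage.process(frame, state)
    return state

result = run_pipeline([Detector(), ReIdentifier(), Tracker()], frame=None)
print(result["detections"][0]["track_id"])  # 0
```

Because every stage shares the same `process` interface, swapping one detector or tracker for another never touches the rest of the pipeline, which is the property the list above describes.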

Here's what makes TrackLab different from other existing tracking frameworks:

  • Fully modular framework to quickly integrate any detection/reid/tracking method or develop your own.
  • It allows supervised training of the ReID model on the tracking training set.
  • It provides a fully configurable visualization tool with the possibility to display any dev/debug information.
  • It supports online and offline tracking methods (compared to MMTracking, AlphaPose, LightTrack and other libs who only support online tracking).
  • It supports many tracking-related tasks:
    • Multi-object detection.
    • Multi-object (bbox) tracking.
    • Multi-person pose tracking.
    • Multi-person pose estimation.
    • Person re-identification.

πŸ“– Documentation

You can find the documentation at https://trackinglaboratory.github.io/tracklab/ or in the docs/ folder. After installing, you can run make html inside the docs/ folder to build an HTML version of the documentation.

βš™οΈ Installation Guide

πŸ› οΈ [Recommended] Using uv

Follow the instructions to install uv. uv is a fast Python package and virtual environment manager that simplifies project setup and dependency management.

If you just want to use TrackLab directly:

uv venv --python 3.12
uv pip install tracklab
uv run tracklab

If you’re integrating TrackLab into a project:

uv init
uv add tracklab
uv run tracklab

To update and run:

uv run -U tracklab

🐍 Using conda

Follow the instructions to install conda.

Create a conda environment with the required dependencies and install TrackLab:

conda create -n tracklab pip python=3.12 pytorch==2.6 torchvision==0.21 pytorch-cuda=12.4 -c pytorch -c nvidia -y
conda activate tracklab
pip install tracklab

[!NOTE] Make sure your system’s GPU and CUDA drivers are compatible with pytorch-cuda=12.4. Refer to the PyTorch compatibility matrix and change if needed.

To update later:

pip install -U tracklab

🧩 Manual Installation

You can install TrackLab directly from source using uv:

git clone https://github.com/TrackingLaboratory/tracklab.git
cd tracklab
uv run tracklab

Since we're using uv under the hood, it will automatically create a virtual environment for you and update the dependencies as they change. You can also install with conda; in that case, run the following from inside an activated environment:

pip install -e .

πŸ“š External Dependencies

Some optional advanced modules and datasets require additional setup:

  • For MMDet, MMPose, OpenPifPaf: please refer to their respective documentation for installation instructions.
  • For BPBReID and KPReID: install using [uv] pip install "torchreid@git+https://github.com/victorjoos/keypoint_promptable_reidentification".
  • Get the SoccerNet Tracking dataset here, rename the root folder as SoccerNetMOT and put it under the global dataset directory (specified under the data_dir config as explained below). Otherwise, you can modify the dataset_path config in soccernet_mot.yaml with your custom SoccerNet dataset directory.

πŸ”¨ Setup

You will need to set up some variables before running the code:

  1. In configs/config.yaml:
    • data_dir: the directory where you will store the different datasets (must be an absolute path!)
    • All the parameters under the "Machine configuration" header
  2. In the corresponding modules (tracklab/configs/modules/.../....yaml):
    • The batch_size
    • You might want to change the model hyperparameters

To launch TrackLab with the default configuration defined in configs/config.yaml, simply run:

tracklab

This command will create a directory called outputs which will have a ${experiment_name}/yyyy-mm-dd/hh-mm-ss/ structure. All the output files (logs, models, visualization, ...) from a run will be put inside this directory.
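The layout of that outputs directory can be sketched with a hypothetical helper (the real path construction lives inside TrackLab/Hydra; this is only an illustration of the ${experiment_name}/yyyy-mm-dd/hh-mm-ss/ structure):

```python
from datetime import datetime
from pathlib import Path

def run_output_dir(root, experiment_name, now=None):
    """Sketch of the outputs layout described above (hypothetical helper):
    <root>/<experiment_name>/yyyy-mm-dd/hh-mm-ss/"""
    now = now or datetime.now()
    return (Path(root) / experiment_name
            / now.strftime("%Y-%m-%d") / now.strftime("%H-%M-%S"))

d = run_output_dir("outputs", "my_exp", datetime(2025, 1, 2, 3, 4, 5))
print(d)  # outputs/my_exp/2025-01-02/03-04-05
```

Timestamped subdirectories mean repeated runs of the same experiment never overwrite each other's logs, models, or visualizations.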

If you want to override some configuration parameters, e.g. to use another detection module or dataset, you can do so by modifying the corresponding parameters directly in the .yaml files under configs/.

All parameters are also configurable from the command-line, e.g.: (more info on Hydra's override grammar here)

tracklab 'data_dir=${project_dir}/data' 'model_dir=${project_dir}/models' modules/reid=bpbreid pipeline=[bbox_detector,reid,track]

${project_dir} is a variable that resolves to the root of the project you're running the code in. When using it in a command, wrap the argument in single quotes (') so that the shell doesn't expand ${...} as an environment variable before TrackLab sees it.
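The quoting matters because substitution should happen inside the config resolver, not in the shell. A toy standard-library version of that ${var} substitution (greatly simplified and illustrative only; Hydra/OmegaConf's real resolver is far more capable):

```python
import re

def interpolate(value, variables):
    """Minimal sketch of Hydra/OmegaConf-style ${var} interpolation
    (illustrative only, not the real resolver)."""
    return re.sub(r"\$\{(\w+)\}",
                  lambda m: str(variables[m.group(1)]), value)

out = interpolate("${project_dir}/data", {"project_dir": "/home/user/tracklab"})
print(out)  # /home/user/tracklab/data
```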

πŸ“ Configuration Options

To find all the (many) configuration options you have, use:

tracklab --help

The first section contains the configuration groups with all the available models/datasets/visualizations/..., while the second section shows all the possible options you can modify.

πŸ” Framework Overview

Hydra Configuration

πŸ‘·β€β™‚οΈ TODO: Describe TrackLab + Hydra configuration system.

Architecture

Here is an overview of the important TrackLab classes:

  • TrackingDataset: Abstract class to be instantiated when adding a new dataset. The TrackingDataset contains one TrackingSet for each split of the dataset (train, val, test, etc).
  • TrackingSet: A tracking set contains three Pandas dataframes:
    1. video_metadatas: contains one row of information per video (e.g. fps, width, height, etc).
    2. image_metadatas: contains one row of information per image (e.g. frame_id, video_id, etc).
    3. detections_gt: contains one row of information per ground truth detection (e.g. frame_id, video_id, bbox_ltwh, track_id, etc).
  • TrackerState: Core class that contains all the information about the current state of the tracker. All modules in the tracking pipeline update the tracker_state sequentially. The tracker_state contains one key dataframe:
    1. detections_pred: contains one row of information per predicted detection (e.g. frame_id, video_id, bbox_ltwh, track_id, reid embedding, etc).
  • TrackingEngine: This class is responsible for executing the entire tracking pipeline on the dataset. It loops over all videos