BoxMOT
BoxMOT: Pluggable SOTA multi-object tracking modules with support for axis-aligned and oriented bounding boxes
<img width="640" src="https://github.com/mikel-brostrom/boxmot/releases/download/v12.0.0/output_640.gif" alt="BoxMOT demo"> <br>
<a href="https://trendshift.io/repositories/13239" target="_blank"><img src="https://trendshift.io/api/badge/repositories/13239" alt="mikel-brostrom%2Fboxmot | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"></a>
</div>

BoxMOT gives you one CLI and one Python API for running, evaluating, tuning, and exporting modern multi-object tracking pipelines. Swap trackers without rewriting your detector stack, reuse cached detections and embeddings across experiments, and benchmark locally on MOT-style datasets.
<div align="center" markdown="1">

Installation • Metrics • CLI • Python API • Detection Layouts • Examples • Contributing

</div>

Why BoxMOT
- One interface for `track`, `generate`, `eval`, `tune`, and `export`.
- Works with detection, segmentation, and pose models as long as they emit boxes.
- Supports both motion-only trackers and motion + appearance trackers.
- Reuses saved detections and embeddings to speed up repeated evaluation and tuning.
- Handles both AABB and OBB detection layouts natively.
- Includes local benchmarking workflows for MOT17, MOT20, and DanceTrack ablation splits.
Installation
BoxMOT supports Python 3.9 through 3.12.
```bash
pip install boxmot
boxmot --help
```
Benchmark Results (MOT17 ablation split)
<div align="center" markdown="1">

<!-- START TRACKER TABLE -->
| Tracker | Status | OBB | HOTA↑ | MOTA↑ | IDF1↑ | FPS |
| :-----: | :----: | :-: | :---: | :---: | :---: | :---: |
| botsort | ✅ | ✅ | 69.418 | 78.232 | 81.812 | 12 |
| boosttrack | ✅ | ❌ | 69.253 | 75.914 | 83.206 | 13 |
| strongsort | ✅ | ❌ | 68.05 | 76.185 | 80.763 | 11 |
| deepocsort | ✅ | ❌ | 67.796 | 75.868 | 80.514 | 12 |
| bytetrack | ✅ | ✅ | 67.68 | 78.039 | 79.157 | 720 |
| hybridsort | ✅ | ❌ | 67.39 | 74.127 | 79.105 | 25 |
| ocsort | ✅ | ✅ | 66.441 | 74.548 | 77.899 | 890 |
| sfsort | ✅ | ✅ | 62.653 | 76.87 | 69.184 | 6000 |
<!-- END TRACKER TABLE -->

<sub>Evaluation was run on the second half of the MOT17 training set because the validation split is not public and the ablation detector was trained on the first half. Results used pre-generated detections and embeddings, with each tracker configured from its default repository settings.</sub>
</div>

CLI
BoxMOT provides a unified CLI with a simple syntax:
```bash
boxmot MODE [OPTIONS] [DETECTOR] [REID] [TRACKER]
```
Modes:

- `track`: run detector + tracker on webcam, images, videos, directories, or streams
- `generate`: precompute detections and embeddings for later reuse
- `eval`: benchmark on MOT-style datasets and apply optional postprocessing
- `tune`: optimize tracker hyperparameters with multi-objective search
- `export`: export ReID models to deployment formats
Use `boxmot MODE --help` for mode-specific flags.
Quick examples:
```bash
# Track a webcam feed
boxmot track yolov8n osnet_x0_25_msmt17 deepocsort --source 0 --show

# Track a video, draw trajectories, and save the result
boxmot track yolov8n osnet_x0_25_msmt17 botsort --source video.mp4 --show-trajectories --save

# Evaluate on the MOT17 ablation split with GBRC postprocessing
boxmot eval yolox_x_MOT17_ablation lmbn_n_duke boosttrack --source MOT17-ablation --postprocessing gbrc --verbose

# Generate reusable detections and embeddings
boxmot generate yolov8n osnet_x0_25_msmt17 --source ./assets/MOT17-mini/train

# Tune tracker hyperparameters on a MOT-style dataset
boxmot tune yolov8n osnet_x0_25_msmt17 ocsort --source ./assets/MOT17-mini/train --n-trials 10

# Export a ReID model to ONNX and TensorRT with dynamic input
boxmot export --weights osnet_x0_25_msmt17.pt --include onnx --include engine --dynamic
```
Common `--source` values include `0` (webcam), `img.jpg`, `video.mp4`, `path/`, `path/*.jpg`, YouTube URLs, and RTSP / RTMP / HTTP streams.
If you want to track only selected classes, pass a comma-separated list:

```bash
boxmot track yolov8s --source 0 --classes 16,17
```
Python API
If you already have detections from your own model, call `tracker.update(...)` once per frame inside your video loop:
```python
from pathlib import Path

import cv2
import numpy as np

from boxmot import BotSort

tracker = BotSort(
    reid_weights=Path("osnet_x0_25_msmt17.pt"),
    device="cpu",
    half=False,
)

cap = cv2.VideoCapture("video.mp4")
while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Replace this with your detector output for the current frame.
    # Expected AABB shape: (N, 6) = (x1, y1, x2, y2, conf, cls)
    detections = np.empty((0, 6), dtype=np.float32)
    # detections = your_detector(frame)

    tracks = tracker.update(detections, frame)
    tracker.plot_results(frame, show_trajectories=True)

    # AABB output: (N, 8) = (x1, y1, x2, y2, id, conf, cls, det_ind)
    print(tracks)

    cv2.imshow("BoxMOT", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```
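Each row of the `tracks` array bundles a box, a track id, a confidence score, a class, and the index of the matched input detection. A small helper like the following (an illustrative sketch, not part of the BoxMOT API) can unpack AABB rows into dicts for downstream logging:

```python
def tracks_to_records(tracks):
    """Convert an (N, 8) AABB track array into a list of dicts.

    Column layout: (x1, y1, x2, y2, id, conf, cls, det_ind).
    Works row-wise on NumPy arrays or plain nested lists.
    """
    records = []
    for x1, y1, x2, y2, track_id, conf, cls, det_ind in tracks:
        records.append({
            "box": (float(x1), float(y1), float(x2), float(y2)),
            "id": int(track_id),
            "conf": float(conf),
            "cls": int(cls),
            "det_ind": int(det_ind),
        })
    return records

rows = [[10.0, 20.0, 50.0, 80.0, 1, 0.9, 0, 3]]
print(tracks_to_records(rows)[0]["id"])  # -> 1
```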
For end-to-end detector integrations, see the notebooks in examples.
Detection Layouts
BoxMOT infers the tracking mode from the shape of the detection tensor:
| Geometry | Input detections | Output tracks |
| --- | --- | --- |
| AABB | (N, 6) = (x1, y1, x2, y2, conf, cls) | (N, 8) = (x1, y1, x2, y2, id, conf, cls, det_ind) |
| OBB | (N, 7) = (cx, cy, w, h, angle, conf, cls) | (N, 9) = (cx, cy, w, h, angle, id, conf, cls, det_ind) |
OBB-specific tracking paths are enabled automatically when OBB detections are provided. Current OBB-capable trackers: `bytetrack`, `botsort`, `ocsort`, and `sfsort`.
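The two layouts differ only in how the box is parameterized: an axis-aligned box can always be rewritten in the OBB layout with a zero angle. A minimal sketch of that conversion (illustrative only, not a BoxMOT utility; the angle unit should match whatever your detector emits):

```python
def aabb_to_obb_row(x1, y1, x2, y2, conf, cls):
    """Rewrite one AABB detection (x1, y1, x2, y2, conf, cls)
    in the OBB layout (cx, cy, w, h, angle, conf, cls), angle = 0."""
    w = x2 - x1
    h = y2 - y1
    cx = x1 + w / 2.0
    cy = y1 + h / 2.0
    return (cx, cy, w, h, 0.0, conf, cls)

print(aabb_to_obb_row(10.0, 20.0, 50.0, 80.0, 0.9, 0))
# -> (30.0, 50.0, 40.0, 60.0, 0.0, 0.9, 0)
```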
Examples
The short commands above are enough to get started. The sections below keep the longer recipe list available without turning the README into a wall of commands.
<details>
<summary><strong>Tracking recipes</strong></summary>

Track from common sources:

```bash
# Webcam
boxmot track yolov8n osnet_x0_25_msmt17 deepocsort --source 0 --show

# Video file
boxmot track yolov8n osnet_x0_25_msmt17 botsort --source video.mp4 --save

# Image directory
boxmot track yolov8n osnet_x0_25_msmt17 bytetrack --source path/to/images --save-txt

# Stream or URL
boxmot track yolov8n osnet_x0_25_msmt17 ocsort --source 'rtsp://example.com/media.mp4'

# YouTube
boxmot track yolov8n osnet_x0_25_msmt17 boosttrack --source 'https://youtu.be/Zgi9g1ksQHc'
```
</details>
<details>
<summary><strong>Detector backends</strong></summary>
Swap detectors without changing the overall CLI:

```bash
# Ultralytics detection
boxmot track yolov8n
boxmot track yolo11n

# Segmentation and pose variants
boxmot track yolov8n-seg
boxmot track yolov8n-pose

# YOLOX
boxmot track yolox_s

# RF-DETR
boxmot track rf-detr-base
```
</details>
<details>
<summary><strong>Tracker swaps</strong></summary>
Use the same detector and ReID model while changing only the tracker:

```bash
# Motion + appearance trackers
boxmot track yolov8n osnet_x0_25_msmt17 deepocsort
boxmot track yolov8n osnet_x0_25_msmt17 strongsort
boxmot track yolov8n osnet_x0_25_msmt17 botsort
boxmot track yolov8n osnet_x0_25_msmt17 boosttrack
boxmot track yolov8n osnet_x0_25_msmt17 hybridsort

# Motion-only trackers
boxmot track yolov8n osnet_x0_25_msmt17 bytetrack
boxmot track yolov8n osnet_x0_25_msmt17 ocsort
boxmot track yolov8n osnet_x0_25_msmt17 sfsort
```
</details>
<details>
<summary><strong>Filtering and visualization</strong></summary>
Useful flags for inspection and debugging:

```bash
# Draw trajectories and show lost tracks
boxmot track yolov8n osnet_x0_25_msmt17 botsort --source video.mp4 --show-trajectories --show-lost --save

# Track only selected classes
boxmot track yolov8s --source 0 --classes 16,17

# Track each class independently
boxmot track yolov8n --source video.mp4 --per-class --save-txt

# Highlight one target ID
boxmot track yolov8n osnet_x0_25_msmt17 deepocsort --source video.mp4 --target-id 7 --show
```
</details>
<details>
<summary><strong>Evaluation and tuning</strong></summary>
Benchmark on built-in MOT-style dataset shortcuts or your own data:

```bash
# Reproduce README-style MOT17 results
boxmot eval yolox_x_MOT17_ablation lmbn_n_duke boosttrack --source MOT17-ablation --verbose
```

</details>