Unicorn :unicorn: : Towards Grand Unification of Object Tracking
This repository is the project page for the paper *Towards Grand Unification of Object Tracking*.
Highlight
- Unicorn is accepted to ECCV 2022 as an oral presentation!
- Unicorn first demonstrates grand unification for four object-tracking tasks.
- Unicorn achieves strong performance on eight tracking benchmarks.
Introduction
- The object tracking field mainly consists of four sub-tasks: Single Object Tracking (SOT), Multiple Object Tracking (MOT), Video Object Segmentation (VOS), and Multi-Object Tracking and Segmentation (MOTS). Most previous approaches address only one or a subset of these sub-tasks.
- For the first time, Unicorn unifies the network architecture and the learning paradigm across all four tracking tasks. Moreover, Unicorn sets new state-of-the-art results on many challenging tracking benchmarks using the same model parameters.
This repository supports the following tasks:
Image-level
- Object Detection
- Instance Segmentation
Video-level
- Single Object Tracking (SOT)
- Multiple Object Tracking (MOT)
- Video Object Segmentation (VOS)
- Multi-Object Tracking and Segmentation (MOTS)
Demo
Unicorn conquers four tracking tasks (SOT, MOT, VOS, MOTS) using the same network with the same parameters.
https://user-images.githubusercontent.com/6366788/180479685-c2f4bf3e-3faf-4abe-b401-80150877348d.mp4
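The core idea above — one network, one set of parameters, four tasks — can be pictured with a small, purely illustrative sketch. Note that this is **not** the repository's actual API; the class, method, and field names below are hypothetical stand-ins for the unified design:

```python
# Conceptual sketch only: illustrates "same network, same parameters"
# across four tracking tasks. Names here are hypothetical, not Unicorn's API.

class UnifiedTracker:
    """A single model whose shared parameters serve every tracking task."""

    TASKS = ("SOT", "MOT", "VOS", "MOTS")

    def __init__(self):
        # One shared parameter set is reused by all four tasks
        # (backbone name and embedding size are illustrative values).
        self.shared_params = {"backbone": "ConvNeXt", "embed_dim": 256}

    def track(self, task, frame):
        if task not in self.TASKS:
            raise ValueError(f"unsupported task: {task}")
        # All tasks reuse the same backbone features; only the output
        # format differs: segmentation tasks emit masks, box tasks emit boxes.
        wants_mask = task in ("VOS", "MOTS")
        return {"task": task, "output": "masks" if wants_mask else "boxes"}

tracker = UnifiedTracker()
for task in UnifiedTracker.TASKS:
    print(task, tracker.track(task, frame=None)["output"])
    # SOT boxes / MOT boxes / VOS masks / MOTS masks
```

The point of the sketch is the dispatch: no per-task weights are loaded, only the output head interpretation changes.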
Results
SOT
<div align="center"> <img src="assets/SOT.png" width="600px"/> </div>

MOT (MOT17)
<div align="center"> <img src="assets/MOT.png" width="600px"/> </div>

MOT (BDD100K)
<div align="center"> <img src="assets/MOT-BDD.png" width="600px"/> </div>

VOS
<div align="center"> <img src="assets/VOS.png" width="600px"/> </div>

MOTS (MOTS Challenge)
<div align="center"> <img src="assets/MOTS.png" width="600px"/> </div>

MOTS (BDD100K MOTS)
<div align="center"> <img src="assets/MOTS-BDD.png" width="600px"/> </div>

Getting started
- Installation: Please refer to install.md for more details.
- Data preparation: Please refer to data.md for more details.
- Training: Please refer to train.md for more details.
- Testing: Please refer to test.md for more details.
- Model zoo: Please refer to model_zoo.md for more details.
Citing Unicorn
If you find Unicorn useful in your research, please consider citing:
@inproceedings{unicorn,
  title={Towards Grand Unification of Object Tracking},
  author={Yan, Bin and Jiang, Yi and Sun, Peize and Wang, Dong and Yuan, Zehuan and Luo, Ping and Lu, Huchuan},
  booktitle={ECCV},
  year={2022}
}
Acknowledgments
- Thanks to YOLOX and CondInst for providing strong baselines for object detection and instance segmentation.
- Thanks to STARK and PyTracking for providing useful inference and evaluation toolkits for SOT and VOS.
- Thanks to ByteTrack, QDTrack, and PCAN for providing useful data-processing scripts and evaluation code for MOT and MOTS.