# OCSampler
This repo is the implementation of OCSampler: Compressing Videos to One Clip with Single-step Sampling (CVPR 2022).
## Dependencies
- GPU: TITAN Xp
- GCC: 5.4.0
- Python: 3.6.13
- PyTorch: 1.5.1+cu102
- TorchVision: 0.6.1+cu102
- MMCV: 1.5.3
- MMAction2: 0.12.0
## Installation
a. Create a conda virtual environment and activate it.

```shell
conda create -n open-mmlab python=3.6.13 -y
conda activate open-mmlab
```
b. Install PyTorch and TorchVision following the official instructions, e.g.,

```shell
conda install pytorch==1.5.1 torchvision==0.6.1 cudatoolkit=10.2 -c pytorch
```
Note: Make sure that your compilation CUDA version and runtime CUDA version match. You can check the supported CUDA version for precompiled packages on the PyTorch website.
c. Install MMCV.

```shell
pip install mmcv==1.5.3  # version from the dependency list above
```
d. Clone the OCSampler repository and enter it.

```shell
git clone https://github.com/MCG-NJU/OCSampler
cd OCSampler
```
e. Install build requirements and then install MMAction2.

```shell
pip install -r requirements/build.txt
pip install -v -e .  # or "python setup.py develop"
```
## Data Preparation
Please refer to the default MMAction2 dataset setup to prepare the datasets correctly.
Specifically, for the ActivityNet dataset we adopt a training annotation file with a single label per video, since only 6 out of 10024 videos have more than one label, and those labels are similar.
Owing to the different label mappings between MMAction2 and FrameExit on ActivityNet, we provide two kinds of annotation files.
You can find them in data/ActivityNet/ and configs/activitynet_*.py.
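The single-label collapse described above can be sketched as follows. This is a minimal illustration, not the repo's actual preprocessing script, and the annotation line format (`video_id num_frames label [label ...]`) is an assumption:

```python
# Hypothetical sketch: collapse multi-label ActivityNet annotation lines
# to one label per video, keeping only the first label.
# Assumed line format: "<video_id> <num_frames> <label1> [<label2> ...]"

def to_single_label(lines):
    """Return annotation lines with only the first label kept."""
    out = []
    for line in lines:
        parts = line.split()
        video_id, num_frames, labels = parts[0], parts[1], parts[2:]
        out.append(f"{video_id} {num_frames} {labels[0]}")
    return out

# Example: the second (hypothetical) video carries two similar labels.
anns = ["v_abc 300 37", "v_def 250 12 14"]
print(to_single_label(anns))  # ['v_abc 300 37', 'v_def 250 12']
```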
For Mini-Kinetics, please download Kinetics 400 and use the train/val split files from AR-Net.
## Pretrained Models
The pretrained models are provided on Google Drive.
## Training
Here we take training OCSampler on the ActivityNet dataset as an example.
```shell
# bash tools/dist_train.sh {CONFIG_FILE} {GPUS} {--validate}
bash tools/dist_train.sh configs/activitynet_10to6_resnet50.py 8 --validate
```
Note that we directly port the weights of the classification models provided by FrameExit.
## Inference
Here we take evaluating OCSampler on the ActivityNet dataset as an example.
```shell
# bash tools/dist_test.sh {CONFIG_FILE} {CHECKPOINT} {GPUS} {--eval mean_average_precision / top_k_accuracy}
bash tools/dist_test.sh configs/activitynet_10to6_resnet50.py modelzoo/anet_10to6_checkpoint.pth 8 --eval mean_average_precision
```
If you want to directly evaluate OCSampler with another classifier, you can add the `again_load` param to the config file and run, e.g.:
```shell
bash tools/dist_test.sh configs/activitynet_slowonly_inference_with_ocsampler.py modelzoo/anet_10to6_checkpoint.pth 8 --eval mean_average_precision
```
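Since MMAction2 configs are plain Python files, adding `again_load` amounts to one extra line in the config. The key name comes from the text above, but the value format shown here is an assumption; check the provided `configs/activitynet_*.py` files for the authoritative schema:

```python
# Hypothetical addition at the bottom of an inference config such as
# configs/activitynet_slowonly_inference_with_ocsampler.py.
# `again_load` is named in this README; the path value is illustrative.
again_load = 'modelzoo/anet_10to6_checkpoint.pth'
```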
## Citation
If you find OCSampler useful in your research, please cite us using the following entry:
```BibTeX
@inproceedings{lin2022ocsampler,
  title={OCSampler: Compressing Videos to One Clip with Single-step Sampling},
  author={Lin, Jintao and Duan, Haodong and Chen, Kai and Lin, Dahua and Wang, Limin},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={13894--13903},
  year={2022}
}
```
## Acknowledgements
In addition to the MMAction2 codebase, this repo contains modified code from:
- FrameExit: for the implementation of its classifier.
