# TVQA+: Spatio-Temporal Grounding for Video Question Answering

[ACL 2020] PyTorch code for TVQA+: Spatio-Temporal Grounding for Video Question Answering.

We present the task of Spatio-Temporal Video Question Answering, which requires intelligent systems to simultaneously retrieve relevant moments and detect referenced visual concepts (people and objects) to answer natural language questions about videos. We first augment the TVQA dataset with 310.8k bounding boxes, linking depicted objects to visual concepts in questions and answers. We name this augmented version TVQA+. We then propose Spatio-Temporal Answerer with Grounded Evidence (STAGE), a unified framework that grounds evidence in both the spatial and temporal domains to answer questions about videos. Comprehensive experiments and analyses demonstrate the effectiveness of our framework and how the rich annotations in our TVQA+ dataset can contribute to the question answering task. As a byproduct of performing this joint task, our model also produces more insightful intermediate results.
In this repository, we provide a PyTorch implementation of the STAGE model, along with basic preprocessing and evaluation code for the TVQA+ dataset.
TVQA+: Spatio-Temporal Grounding for Video Question Answering<br> Jie Lei, Licheng Yu, Tamara L. Berg, Mohit Bansal. [PDF]
## Resources
- Data: TVQA+ dataset, please use this new link
- Website: http://tvqa.cs.unc.edu
- Submission: codalab evaluation server
- Related works: TVR (Moment Retrieval), TVC (Video Captioning), TVQA (Localized VideoQA)
## Model
- STAGE Overview. Spatio-Temporal Answerer with Grounded Evidence (STAGE) is a unified framework that grounds evidence in both the spatial and temporal domains to answer questions about videos.
- Prediction Examples
## Requirements
- Python 2.7
- PyTorch 1.1.0 (should work for 0.4.0 - 1.2.0)
- tensorboardX
- tqdm
- h5py
- numpy
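Before downloading the large feature files, it can save time to confirm the environment matches the list above. The sketch below is not part of the TVQA+ codebase; it parses a `torch.__version__`-style string and flags anything outside the tested 0.4.0 - 1.2.0 range:

```python
# Environment sanity check for the requirements above
# (illustrative sketch, not part of the TVQA+ repo).
import importlib

REQUIRED = ("torch", "tensorboardX", "tqdm", "h5py", "numpy")

def parse_version(v):
    """Parse '1.1.0' or '1.1.0+cu90' into a tuple of ints, e.g. (1, 1, 0)."""
    return tuple(int(p) for p in v.split("+")[0].split(".")[:3])

def torch_version_supported(version_str, low=(0, 4, 0), high=(1, 2, 0)):
    """True if the torch version is inside the README's tested range."""
    return low <= parse_version(version_str) <= high

def missing_packages(names=REQUIRED):
    """Return the subset of required packages that fail to import."""
    missing = []
    for name in names:
        try:
            importlib.import_module(name)
        except ImportError:
            missing.append(name)
    return missing
```

For example, `torch_version_supported("1.3.0")` returns `False`, matching the note that versions past 1.2.0 are untested.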
## Training and Evaluation
1. Download the preprocessed features from Google Drive and uncompress the archive into the project root directory. You should get a directory `tvqa_plus_stage_features` containing all the required feature files.

    ```bash
    cd $PROJECT_ROOT; tar -xf tvqa_plus_stage_features_new.tar.gz
    ```

    gdrive is a handy tool for downloading the file. Note that the features have changed: if you have our previous version, you have to re-download them.
2. Run in debug mode to test your environment and path settings:

    ```bash
    bash run_main.sh debug
    ```
3. Train the full STAGE model:

    ```bash
    bash run_main.sh --add_local
    ```

    Note that you will need around 30 GB of memory to load the data. Alternatively, you can add the `--no_core_driver` flag to stop loading all the features into memory. After training, you should get ~72.00% QA accuracy, which is comparable to the reported number. The trained model and config file are stored at `${PROJECT_ROOT}/results/${MODEL_DIR}`.
4. Inference:

    ```bash
    bash run_inference.sh --model_dir ${MODEL_DIR} --mode ${MODE}
    ```

    `${MODE}` can be `valid` or `test`. After inference, you will get a `${MODE}_inference_predictions.json` file in `${MODEL_DIR}`, similar to the sample prediction file `eval/data/val_sample_prediction.json`.
5. Evaluation:

    ```bash
    cd eval; python eval_tvqa_plus.py --pred_path ../results/${MODEL_DIR}/valid_inference_predictions.json --gt_path data/tvqa_plus_val.json
    ```

    Note that you can only evaluate `valid` predictions here. To evaluate the test set, please follow the instructions here.
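At its core, the QA accuracy behind the ~72.00% figure is the fraction of questions whose predicted answer matches the ground truth. The sketch below illustrates that idea only; the JSON layout (a dict mapping question id to answer index) is an assumption, and the authoritative metric is computed by `eval/eval_tvqa_plus.py`:

```python
# Illustrative scoring sketch; the real metric lives in eval/eval_tvqa_plus.py.
# The assumed JSON layout (question id -> predicted answer index) is NOT the
# repo's documented schema -- see eval/data/val_sample_prediction.json for that.
import json

def load_predictions(path):
    """Load a *_inference_predictions.json-style file."""
    with open(path) as f:
        return json.load(f)

def qa_accuracy(pred_answers, gt_answers):
    """Fraction of questions whose predicted answer index matches ground truth."""
    if not gt_answers:
        return 0.0
    correct = sum(1 for qid, ans in gt_answers.items()
                  if pred_answers.get(qid) == ans)
    return float(correct) / len(gt_answers)
```

For instance, `qa_accuracy({"q1": 0, "q2": 1}, {"q1": 0, "q2": 2})` gives `0.5`.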
## Citation
```
@inproceedings{lei2020tvqa,
  title={TVQA+: Spatio-Temporal Grounding for Video Question Answering},
  author={Lei, Jie and Yu, Licheng and Berg, Tamara L and Bansal, Mohit},
  booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
  year={2020}
}
```
## TODO
- [x] Add data preprocessing scripts (provided preprocessed features)
- [x] Add model and training scripts
- [x] Add inference and evaluation scripts
## Contact
- Dataset: faq-tvqa-unc [at] googlegroups.com
- Model: Jie Lei, jielei [at] cs.unc.edu