What Would You Expect? Anticipating Egocentric Actions with Rolling-Unrolling LSTMs and Modality Attention
This repository hosts the code related to the following papers:
Antonino Furnari and Giovanni Maria Farinella, Rolling-Unrolling LSTMs for Action Anticipation from First-Person Video. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI). 2020. Download
Antonino Furnari and Giovanni Maria Farinella, What Would You Expect? Anticipating Egocentric Actions with Rolling-Unrolling LSTMs and Modality Attention. International Conference on Computer Vision, 2019. Download
Please also see the project web page at http://iplab.dmi.unict.it/rulstm.
If you use the code/models hosted in this repository, please cite the following papers:
@article{furnari2020rulstm,
author = {Antonino Furnari and Giovanni Maria Farinella},
journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI)},
title = {Rolling-Unrolling LSTMs for Action Anticipation from First-Person Video},
year = {2020}
}
@inproceedings{furnari2019rulstm,
title = {What Would You Expect? Anticipating Egocentric Actions with Rolling-Unrolling LSTMs and Modality Attention},
author = {Antonino Furnari and Giovanni Maria Farinella},
year = {2019},
booktitle = {International Conference on Computer Vision (ICCV)},
}
Updates:
- 23/08/2023 A quickstart notebook is available here. You can also open it directly in Colab by clicking on the badge;
- 28/06/2021 We are now providing object detections on all frames of EPIC-KITCHENS-100. Please see this README (below) for more information;
- 11/01/2021 We have updated the archive providing the EGTEA Gaze+ pre-extracted features. Please see this README (below) for more information;
- 01/10/2020 We are now sharing the rgb/flow/obj EPIC-KITCHENS-100 features and pre-trained models used to report baseline results in the Rescaling Egocentric Vision paper;
- 04/05/2020 We have now published an extended version of this work on PAMI. Please check the text above for the updated references;
- 23/03/2020 We are now providing pre-extracted features for EGTEA Gaze+. See README for more information;
- 11/10/2019 We are now also providing TSN and object-based features extracted for each frame of EPIC-KITCHENS. They can be downloaded using the download_data_full.sh script rather than download_data.sh;
- 23/10/2019 Added some scripts to show how to extract features from videos. The scripts can be found under FEATEXT and are documented in this README.
Overview
This repository provides the following components:
- The official PyTorch implementation of the proposed Rolling-Unrolling LSTM approach, including Sequence-Completion Pre-Training and Modality ATTention (MATT);
- A program to train, validate and test the proposed method on the EPIC-KITCHENS-55 and EPIC-KITCHENS-100 datasets;
- Pre-extracted features for EPIC-KITCHENS-55 and EPIC-KITCHENS-100. Specifically, we include:
- RGB features: extracted from RGB images using a BNInception CNN trained for the task of egocentric action recognition using Temporal Segment Networks;
- Flow features: similar to RGB features, but extracted with a BNInception CNN trained on optical flow;
- OBJ features: object-based features obtained by running a Faster R-CNN object detector trained on EPIC-KITCHENS-55;
- The checkpoints of the RGB/Flow/OBJ/Fusion models trained for both tasks: egocentric action anticipation and early action recognition;
- The checkpoints of the TSN models (to be used with the official PyTorch implementation of TSN);
- The checkpoint of the Faster R-CNN object detector trained on EPIC-KITCHENS-55;
- The training/validation split used for the experiments. Note that the TSN and Faster R-CNN models have been trained on the training set of this split.
Please refer to the paper for more technical details. The following sections document the released material.
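To make the Modality ATTention (MATT) component listed above more concrete, here is a minimal PyTorch sketch of the idea: a small MLP scores each modality from the concatenated modality representations, the scores are softmax-normalized, and per-modality predictions are fused as a weighted sum. All dimensions and layer sizes below are illustrative assumptions, not the ones used in the paper or in the released code.

```python
import torch
import torch.nn as nn

class ModalityAttention(nn.Module):
    """Illustrative MATT-style fusion: weigh per-modality predictions
    with attention scores computed from the concatenated representations."""

    def __init__(self, feat_dim, num_modalities):
        super().__init__()
        # Small MLP mapping concatenated features to one score per modality
        # (hidden size is arbitrary here).
        self.attention = nn.Sequential(
            nn.Linear(feat_dim * num_modalities, 128),
            nn.ReLU(),
            nn.Linear(128, num_modalities),
        )

    def forward(self, feats, preds):
        # feats: list of (batch, feat_dim) modality representations
        # preds: list of (batch, num_classes) per-modality class scores
        weights = torch.softmax(self.attention(torch.cat(feats, dim=1)), dim=1)
        stacked = torch.stack(preds, dim=1)            # (batch, M, num_classes)
        return (weights.unsqueeze(-1) * stacked).sum(dim=1)

# Toy usage with three modalities (e.g. rgb/flow/obj) and random inputs.
matt = ModalityAttention(feat_dim=1024, num_modalities=3)
feats = [torch.randn(2, 1024) for _ in range(3)]
preds = [torch.randn(2, 2513) for _ in range(3)]
fused = matt(feats, preds)  # shape: (2, 2513)
```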
RU-LSTM Implementation and main training/validation/test program
The provided implementation and training/validation/test program can be found in the RULSTM directory. In order to proceed to training, it is necessary to retrieve the pre-extracted features from our website. To save space and bandwidth, we provide features extracted only on the subset of frames used for the experiments (we sampled frames at about 4fps - please see the paper). These features are sufficient to train/validate/test the methods on the whole EPIC-KITCHENS-55 dataset following the settings reported in the paper.
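As a rough illustration of the subsampling mentioned above, sampling at about 4fps from 30fps video means keeping roughly one frame every 7.5. The exact indices used to produce the released features may differ; this is only a sketch of the arithmetic:

```python
def sampled_frames(num_frames, src_fps=30.0, target_fps=4.0):
    """Indices of frames kept when subsampling a src_fps video at target_fps."""
    step = src_fps / target_fps  # 7.5 frames between samples at 30 -> 4 fps
    return [int(round(i * step)) for i in range(int(num_frames / step))]

# Frames kept from a 2-second (60-frame) clip at 30fps:
print(sampled_frames(60))
```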
Requirements
To run the code, you will need a Python3 interpreter and some libraries (including PyTorch).
Anaconda
An Anaconda environment file with a minimal set of requirements is provided in environment.yml. If you are using Anaconda, you can create a suitable environment with:
conda env create -f environment.yml
To activate the environment, type:
conda activate rulstm
Pip
If you are not using Anaconda, we provide a list of libraries in requirements.txt. You can install these libraries with:
pip install -r requirements.txt
Dataset, training/validation splits, and features
We provide CSVs for training/validation/and testing on EPIC-KITCHENS-55 in the data/ek55 directory. A brief description of each csv follows:
- actions.csv: maps action ids to (verb, noun) pairs;
- EPIC_many_shot_nouns.csv: contains the list of many shot nouns for class-aware metrics (please refer to the EPIC-KITCHENS-55 paper for more details);
- EPIC_many_shot_verbs.csv: similar to the previous one, but related to verbs;
- test_seen.csv: contains the timestamps (expressed in number of frames) of the "seen" test set (S1);
- test_unseen.csv: contains the timestamps (expressed in number of frames) of the "unseen" test set (S2);
- training.csv: contains annotations for the training set in our training/validation split;
- validation.csv: contains annotations for the validation set in our training/validation split;
- training_videos.csv: contains the list of training videos in our training/validation split;
- validation_videos.csv: contains the list of validation videos in our training/validation split.

We also provide CSVs for training/validation/testing on EPIC-KITCHENS-100 in the data/ek100 directory.
Training and validation CSVs report the following columns:
- Annotation ID;
- Video name (without extension);
- Start frame;
- End frame;
- Verb ID;
- Noun ID;
- Action ID.
The test CSVs do not report the last three columns, since test annotations are not public. These CSVs are provided to allow producing predictions in JSON format to be submitted to the leaderboard.
Please note that timestamps are reported in terms of frame numbers in the CSVs. This has been done by assuming a fixed framerate of 30fps. Since the original videos have been collected at different framerates, we first converted all videos to 30fps using ffmpeg.
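The timestamp-to-frame conversion described above can be sketched as follows. This is a hypothetical helper for illustration; the exact rounding convention used to produce the CSVs is an assumption:

```python
def timestamp_to_frame(timestamp, fps=30.0):
    """Convert an 'HH:MM:SS.ss' timestamp to a frame number at a fixed fps.

    The rounding convention here is an assumption for illustration.
    """
    hours, minutes, seconds = timestamp.split(":")
    total_seconds = int(hours) * 3600 + int(minutes) * 60 + float(seconds)
    return int(round(total_seconds * fps))

# 62.5 seconds at 30fps corresponds to frame 1875.
print(timestamp_to_frame("00:01:02.50"))
```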
We provide pre-extracted features. The features are stored in LMDB datasets. To download them, run the following commands:
- EPIC-KITCHENS-55:
./scripts/download_data_ek55.sh;
Alternatively, you can download features extracted from each frame by using the following scripts:
- EPIC-KITCHENS-55:
./scripts/download_data_ek55_full.sh
- EPIC-KITCHENS-100:
./scripts/download_data_ek100_full.sh
Please note that this download is significantly heavier and that it is not required to run the training with default parameters on EPIC-KITCHENS-55.
This should populate three directories data/ek{55|100}/rgb, data/ek{55|100}/flow, data/ek{55|100}/obj with the LMDB datasets.
Training
Models can be trained using the main.py program. For instance, to train the RGB branch for the action anticipation task, use the following commands:
EPIC-KITCHENS-55
mkdir models/
python main.py train data/ek55 models/ek55 --modality rgb --task anticipation --sequence_completion
python main.py train data/ek55 models/ek55 --modality rgb --task anticipation
EPIC-KITCHENS-100
mkdir models/
python main.py train data/ek100 models/ek100 --modality rgb --task anticipation --sequence_completion --num_class 3806 --mt5r
python main.py train data/ek100 models/ek100 --modality rgb --task anticipation --num_class 3806 --mt5r
This will first pre-train using sequence completion, then fine-tune to the main anticipation task. All models will be stored in the models/ek{55|100} directory.
Optionally, a --visdom flag can be passed to the training program in order to enable logging using visdom. To allow this, it is necessary to install visdom with:
pip install visdom
And run it with:
python -m visdom.server
Similar commands can be used to train all models. The following scripts contain all commands required to train the models for egocentric action anticipation and early action recognition:
- scripts/train_anticipation_ek{55|100}.sh;
- scripts/train_recognition_ek55.sh.
Validation
The anticipation models can be validated using the following commands:
Action Anticipation
EPIC-KITCHENS-55
- RGB branch: `python main.py validate data/ek55 models/ek55 --modality rgb -
