
EvTexture & EvTexture++ (ICML 2024 & TPAMI 2026)

Official PyTorch implementation of "EvTexture: Event-driven Texture Enhancement for Video Super-Resolution" (ICML 2024) and its journal extension, "EvTexture++: Event-Driven Texture Enhancement for Video Super-Resolution" (IEEE TPAMI 2026).

<p align="center"> <b>EvTexture (ICML 2024)</b>: 🌐 <a href="https://dachunkai.github.io/evtexture.github.io/" target="_blank">Project</a> | 📃 <a href="https://arxiv.org/abs/2406.13457" target="_blank">Paper</a> | 🖼️ <a href="https://docs.google.com/presentation/d/1nbDb39TFb374DzBwdz5v20kIREUA0nBH/edit?usp=sharing" target="_blank">Poster</a> <br> <b>EvTexture++ (TPAMI 2026)</b>: 📃 <a href="https://ieeexplore.ieee.org/document/11369964" target="_blank">IEEE Xplore</a> </p>

Authors: Dachun Kai<sup>:email:️</sup>, Jiayao Lu, Yueyi Zhang<sup>:email:️</sup>, Xiaoyan Sun, University of Science and Technology of China

Feel free to ask questions. If our work helps, please don't hesitate to give us a :star:!

[News] The extended journal version, EvTexture++, has been accepted by IEEE TPAMI 2026. The source code and pre-trained models for EvTexture++ are currently under preparation and will be released in this repository in due course.

:rocket: News

<!-- - [ ] Provide a script for inference on the user's own video -->
  • [x] 2026/02/02: :tada: :tada: The journal extension EvTexture++ was accepted by IEEE TPAMI
  • [x] 2024/07/02: Released the Colab file for a quick test
  • [x] 2024/06/28: Released details on preparing the datasets
  • [x] 2024/06/08: Published the Docker image
  • [x] 2024/06/08: Released pretrained models and test sets for quick testing
  • [x] 2024/06/07: Released video demos
  • [x] 2024/05/25: Initialized the repository
  • [x] 2024/05/02: :tada: :tada: Our paper was accepted by ICML 2024

:bookmark: Table of Contents

  1. Video Demos
  2. Code
  3. Citation
  4. Contact
  5. License and Acknowledgement

:fire: Video Demos

$4\times$ upsampling results on the Vid4 and REDS4 test sets.

https://github.com/DachunKai/EvTexture/assets/66354783/fcf48952-ea48-491c-a4fb-002bb2d04ad3

https://github.com/DachunKai/EvTexture/assets/66354783/ea3dd475-ba8f-411f-883d-385a5fdf7ff6

https://github.com/DachunKai/EvTexture/assets/66354783/e1e6b340-64b3-4d94-90ee-54f025f255fb

https://github.com/DachunKai/EvTexture/assets/66354783/01880c40-147b-4c02-8789-ced0c1bff9c4

Code

Installation

  • Dependencies: Miniconda, CUDA Toolkit 11.1.1, torch 1.10.2+cu111, and torchvision 0.11.3+cu111.

  • Run in Conda

    conda create -y -n evtexture python=3.7
    conda activate evtexture
    # Download the two cu111 wheels first (e.g. from https://download.pytorch.org/whl/torch_stable.html)
    pip install torch-1.10.2+cu111-cp37-cp37m-linux_x86_64.whl
    pip install torchvision-0.11.3+cu111-cp37-cp37m-linux_x86_64.whl
    git clone https://github.com/DachunKai/EvTexture.git
    cd EvTexture && pip install -r requirements.txt && python setup.py develop
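
    To verify that the environment sees the pinned versions and a CUDA device, a quick check using plain PyTorch/torchvision calls:

    python -c "import torch, torchvision; print(torch.__version__, torchvision.__version__, torch.cuda.is_available())"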
    
  • Run in Docker :clap:

    Note: before running the Docker image, make sure to install nvidia-docker by following the official instructions.

    [Option 1] Directly pull the Docker image we have published on Alibaba Cloud.

    docker pull registry.cn-hangzhou.aliyuncs.com/dachunkai/evtexture:latest
    

    [Option 2] We also provide a Dockerfile that you can use to build the image yourself.

    cd EvTexture && docker build -t evtexture ./docker
    

    The pulled or self-built Docker image contains a complete conda environment named evtexture. After running the image, you can mount your data and operate within this environment:

    source activate evtexture && cd EvTexture && python setup.py develop
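
    For example, a typical way to start a container with GPU access and your data mounted (the image tag is from Option 1; the mount paths are assumptions to adapt to your setup):

    docker run --gpus all -it -v /path/to/your/data:/data registry.cn-hangzhou.aliyuncs.com/dachunkai/evtexture:latest /bin/bash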
    

Test

  1. Download the pretrained models from (Releases / OneDrive / Google Drive / Baidu Cloud(n8hg)) and place them in experiments/pretrained_models/EvTexture/. The network architecture code is in evtexture_arch.py.

    • EvTexture_REDS_BIx4.pth: trained on REDS dataset with BI degradation for $4\times$ SR scale.
    • EvTexture_Vimeo90K_BIx4.pth: trained on Vimeo-90K dataset with BI degradation for $4\times$ SR scale.
  2. Download the preprocessed test sets (including events) for REDS4 and Vid4 from (Releases / OneDrive / Google Drive / Baidu Cloud(n8hg)), and place them in datasets/.

    • Vid4_h5: HDF5 files containing preprocessed test datasets for Vid4.

    • REDS4_h5: HDF5 files containing preprocessed test datasets for REDS4.

  3. Run the following command:

    • Test on Vid4 for 4x VSR:
      ./scripts/dist_test.sh [num_gpus] options/test/EvTexture/test_EvTexture_Vid4_BIx4.yml
      
    • Test on REDS4 for 4x VSR:
      ./scripts/dist_test.sh [num_gpus] options/test/EvTexture/test_EvTexture_REDS4_BIx4.yml
      
      This will generate the inference results in results/. The output results on REDS4 and Vid4 can be downloaded from (Releases / OneDrive / Google Drive / Baidu Cloud(n8hg)).
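
For example, with a single GPU: ./scripts/dist_test.sh 1 options/test/EvTexture/test_EvTexture_Vid4_BIx4.yml. To sanity-check a downloaded checkpoint before testing, here is a minimal inspection sketch in plain PyTorch (the "params" key follows the BasicSR checkpoint convention and is an assumption here; the sketch falls back to the raw dict if it is absent):

    import torch

    # Load the checkpoint on CPU; the path matches step 1 above.
    ckpt = torch.load(
        "experiments/pretrained_models/EvTexture/EvTexture_REDS_BIx4.pth",
        map_location="cpu",
    )
    # BasicSR-style checkpoints usually nest the weights under "params" (assumption).
    state = ckpt.get("params", ckpt) if isinstance(ckpt, dict) else ckpt
    print(f"{len(state)} tensors")
    for name, tensor in list(state.items())[:5]:
        print(name, tuple(tensor.shape))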

Data Preparation

  • Both video and event data are required as input. We package each video and its event data into a single HDF5 file; a minimal reading sketch follows this list.

  • Example: the structure of the calendar.h5 file from the Vid4 dataset is shown below.

    calendar.h5
    ├── images
    │   ├── 000000 # frame, ndarray, [H, W, C]
    │   ├── ...
    ├── voxels_f
    │   ├── 000000 # forward event voxel, ndarray, [Bins, H, W]
    │   ├── ...
    ├── voxels_b
    │   ├── 000000 # backward event voxel, ndarray, [Bins, H, W]
    │   ├── ...
    
  • To simulate and generate the event voxels, refer to the dataset preparation details in DataPreparation.md.
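
A minimal sketch of reading one of these HDF5 files with h5py, assuming the calendar.h5 layout shown above (the dataset names and the one-voxel-per-frame pairing are assumptions; adjust to your files):

    import h5py

    # Open a packaged clip; the path assumes the Vid4_h5 test set from the Test section.
    with h5py.File("datasets/Vid4_h5/calendar.h5", "r") as f:
        frame_keys = sorted(f["images"].keys())      # "000000", "000001", ...
        frame = f["images"][frame_keys[0]][()]       # frame, ndarray, [H, W, C]
        voxel_f = f["voxels_f"][frame_keys[0]][()]   # forward event voxel, [Bins, H, W]
        voxel_b = f["voxels_b"][frame_keys[0]][()]   # backward event voxel, [Bins, H, W]
        print(len(frame_keys), frame.shape, voxel_f.shape, voxel_b.shape)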

Inference on your own video

:hammer_and_wrench: We are developing a convenient script that will let users quickly upscale their own videos with our EvTexture model. Our spare time is limited, however, so please stay tuned!

:blush: Citation

If the code and pre-trained models facilitate your research, please consider citing the corresponding papers. :smiley:

@article{kai2026evtexture++,
  title={{E}v{T}exture++: {E}vent-{D}riven {T}exture {E}nhancement for {V}ideo {S}uper-{R}esolution},
  author={Kai, Dachun and Lu, Jiayao and Zhang, Yueyi and Sun, Xiaoyan},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2026},
  doi={10.1109/TPAMI.2026.3660020}
}

@inproceedings{kai2024evtexture,
  title={{E}v{T}exture: {E}vent-driven {T}exture {E}nhancement for {V}ideo {S}uper-{R}esolution},
  author={Kai, Dachun and Lu, Jiayao and Zhang, Yueyi and Sun, Xiaoyan},
  booktitle={Proceedings of the 41st International Conference on Machine Learning},
  pages={22817--22839},
  year={2024},
  volume={235},
  publisher={PMLR}
}

Contact

If you run into any problems, please describe them in an issue or contact:

License and Acknowledgement

This project is released under the Apache-2.0 license. Our work is built upon BasicSR, an open-source toolbox for image/video restoration tasks. Thanks for the inspiration and code from RAFT, event_utils, and EvTexture-jupyter.
