OpenSTL: A Comprehensive Benchmark of Spatio-Temporal Predictive Learning
<p align="left"> <a href="https://arxiv.org/abs/2306.11249" alt="arXiv"> <img src="https://img.shields.io/badge/arXiv-2306.11249-b31b1b.svg?style=flat" /></a> <a href="https://github.com/chengtan9907/OpenSTL/blob/master/LICENSE" alt="license"> <img src="https://img.shields.io/badge/license-Apache--2.0-%23002FA7" /></a> <!-- <a href="https://huggingface.co/OpenSTL" alt="Huggingface"> <img src="https://img.shields.io/badge/huggingface-OpenSTL-blueviolet" /></a> --> <a href="https://openstl.readthedocs.io/en/latest/" alt="docs"> <img src="https://readthedocs.org/projects/openstl/badge/?version=latest" /></a> <a href="https://github.com/chengtan9907/OpenSTL/issues" alt="docs"> <img src="https://img.shields.io/github/issues-raw/chengtan9907/SimVPv2?color=%23FF9600" /></a> <a href="https://github.com/chengtan9907/OpenSTL/issues" alt="resolution"> <img src="https://img.shields.io/badge/issue%20resolution-1%20d-%23B7A800" /></a> <a href="https://img.shields.io/github/stars/chengtan9907/OpenSTL" alt="arXiv"> <img src="https://img.shields.io/github/stars/chengtan9907/OpenSTL" /></a> </p>📘Documentation | 🛠️Installation | 🚀Model Zoo | 🤗Huggingface | 👀Visualization | 🆕News
Introduction
OpenSTL is a comprehensive benchmark for spatio-temporal predictive learning, encompassing a broad spectrum of methods and diverse tasks, ranging from synthetic moving object trajectories to real-world scenarios such as human motion, driving scenes, traffic flow, and weather forecasting. OpenSTL offers a modular and extensible framework, excelling in user-friendliness, organization, and comprehensiveness. The codebase is organized into three abstracted layers, namely the core layer, algorithm layer, and user interface layer, arranged from the bottom to the top. We support PyTorch Lightning implementation OpenSTL-Lightning (recommended) and naive PyTorch version OpenSTL.
<p align="center" width="100%"> <img src='https://github.com/chengtan9907/OpenSTL/assets/34480960/4f466441-a78a-405c-beb6-00a37e3d3827' width="90%"> </p> <p align="right">(<a href="#top">back to top</a>)</p>

Overview
<details open> <summary>Major Features and Plans</summary>

- Flexible Code Design. OpenSTL decomposes STL algorithms into `methods` (training and prediction), `models` (network architectures), and `modules`, while providing a unified experiment API. Users can develop their own STL algorithms with flexible training strategies and networks for different STL tasks.

- Standard Benchmarks. OpenSTL supports standard benchmarks of STL algorithms with training and evaluation, as many open-source projects (e.g., MMDetection and USB) do. We are working on training benchmarks and will update the results synchronously.

- Plans. We plan to provide benchmarks of various STL methods and MetaFormer architectures based on SimVP for various STL application tasks, e.g., video prediction, weather prediction, and traffic prediction. We encourage researchers interested in STL to contribute to OpenSTL or provide valuable advice!

</details>
- `openstl/api` contains an experiment runner.
- `openstl/core` contains core training plugins and metrics.
- `openstl/datasets` contains datasets and dataloaders.
- `openstl/methods/` contains training methods for various video prediction methods.
- `openstl/models/` contains the main network architectures of various video prediction methods.
- `openstl/modules/` contains network modules and layers.
- `tools/` contains the executable python files `tools/train.py` and `tools/test.py` with possible arguments for training, validating, and testing pipelines.
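The module/model/method layering above can be pictured with a toy sketch. The classes below are purely illustrative stand-ins (they are not the actual OpenSTL API): a reusable layer plays the role of `openstl/modules`, a network composed of such layers plays the role of `openstl/models`, and a wrapper holding the prediction logic plays the role of `openstl/methods`.

```python
# Schematic illustration of the three-level decomposition used by OpenSTL
# (module -> model -> method). Class names are hypothetical, not the real API.

class ConvModule:                      # role of openstl/modules: a reusable layer
    def forward(self, x):
        return [v * 0.5 for v in x]    # stand-in for a real convolution

class SimpleModel:                     # role of openstl/models: a network of modules
    def __init__(self):
        self.block = ConvModule()

    def forward(self, frames):
        return self.block.forward(frames)

class PredictionMethod:                # role of openstl/methods: training/prediction logic
    def __init__(self, model):
        self.model = model

    def predict(self, frames):
        return self.model.forward(frames)

method = PredictionMethod(SimpleModel())
print(method.predict([2.0, 4.0]))      # [1.0, 2.0]
```

Because each level only depends on the one below it, a user can swap in a new network under an existing training method, or reuse modules across networks, which is the flexibility the design aims for.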
News and Updates
[2023-12-15] OpenSTL-Lightning (OpenSTL v1.0.0) is released.
[2023-09-23] The OpenSTL paper has been accepted by the NeurIPS 2023 Datasets and Benchmarks Track! arXiv / Zhihu.
[2023-06-19] OpenSTL v0.3.0 is released and will be enhanced in #25.
Installation
This project provides a conda environment file; users can easily reproduce the environment with the following commands:
```shell
git clone https://github.com/chengtan9907/OpenSTL
cd OpenSTL
conda env create -f environment.yml
conda activate OpenSTL
python setup.py develop
```
<details close>
<summary>Dependencies</summary>
- argparse
- dask
- decord
- fvcore
- hickle
- lpips
- matplotlib
- netcdf4
- numpy
- opencv-python
- packaging
- pandas
- python<=3.10.8
- scikit-image
- scikit-learn
- torch
- timm
- tqdm
- xarray==0.19.0

</details>
Please refer to install.md for more detailed instructions.
Getting Started
Please see get_started.md for the basic usage. Here is an example of single-GPU, non-distributed training of SimVP+gSTA on the Moving MNIST dataset.
```shell
bash tools/prepare_data/download_mmnist.sh
python tools/train.py -d mmnist --lr 1e-3 -c configs/mmnist/simvp/SimVP_gSTA.py --ex_name mmnist_simvp_gsta
```
Tutorial on using Custom Data
For the convenience of users, we provide a tutorial on how to train, evaluate, and visualize with OpenSTL on custom data. This tutorial enables users to quickly build their own projects using OpenSTL. For more details, please refer to the tutorial.ipynb in the examples/ directory.
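Before following the tutorial, it helps to know the data layout that spatio-temporal models generally expect. As a rough, self-contained sketch (the array shapes follow common STL conventions, but the helper name and split are illustrative, not the OpenSTL API), custom videos can be packed into an array of shape (N, T, C, H, W) and split into input frames and target frames:

```python
import numpy as np

# Illustrative only: arrange a custom video dataset in the (N, T, C, H, W)
# layout commonly used for spatio-temporal prediction, then split each
# sequence into context frames and frames to predict.
def split_sequences(videos, n_input):
    """videos: array of shape (N, T, C, H, W); returns (inputs, targets)."""
    inputs = videos[:, :n_input]       # first n_input frames as context
    targets = videos[:, n_input:]      # remaining frames to predict
    return inputs, targets

# Toy stand-in for real data: 8 videos, 20 frames, 1 channel, 64x64 pixels.
videos = np.random.rand(8, 20, 1, 64, 64).astype(np.float32)
inputs, targets = split_sequences(videos, n_input=10)
print(inputs.shape, targets.shape)     # (8, 10, 1, 64, 64) (8, 10, 1, 64, 64)
```

Once custom data is in this sequence layout, the tutorial shows how to plug it into OpenSTL's dataloaders and experiment runner.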
We also provide a Colab demo of this tutorial:
<a href="https://colab.research.google.com/drive/19uShc-1uCcySrjrRP3peXf2RUNVzCjHh?usp=sharing" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<p align="right">(<a href="#top">back to top</a>)</p>

Overview of Model Zoo and Datasets
We support various spatiotemporal prediction methods and provide benchmarks on various STL datasets. We are working on adding new methods and collecting experiment results.
- Spatiotemporal Prediction Methods.

<details open> <summary>Currently supported methods</summary>

- [x] ConvLSTM (NeurIPS'2015)
- [x] PredRNN (NeurIPS'2017)
- [x] PredRNN++ (ICML'2018)
- [x] E3D-LSTM (ICLR'2019)
- [x] MIM (CVPR'2019)
- [x] PhyDNet (CVPR'2020)
- [x] MAU (NeurIPS'2021)
- [x] PredRNN.V2 (TPAMI'2022)
- [x] SimVP (CVPR'2022)
- [x] SimVP.V2 (ArXiv'2022)
- [x] TAU (CVPR'2023)
- [x] MMVP (ICCV'2023)
- [x] SwinLSTM (ICCV'2023)
- [x] WaST (AAAI'2024)
</details>

<details open> <summary>Currently supported MetaFormer models for SimVP</summary>
- [x] ViT (Vision Transformer) (ICLR'2021)
- [x] Swin-Transformer (ICCV'2021)
- [x] MLP-Mixer (NeurIPS'2021)
- [x] ConvMixer (Openreview'2021)
- [x] UniFormer (ICLR'2022)
- [x] PoolFormer (CVPR'2022)
- [x] ConvNeXt (CVPR'2022)
- [x] VAN (ArXiv'2022)
- [x] IncepU (SimVP.V1) (CVPR'2022)
- [x] gSTA (SimVP.V2) (ArXiv'2022)
- [x] HorNet (NeurIPS'2022)
- [x] MogaNet (ArXiv'2022)
</details>

- Spatiotemporal Predictive Learning Benchmarks (prepare_data or Baidu Cloud).

<details open> <summary>Currently supported datasets</summary>

- [x] BAIR Robot Pushing (CoRL'2017) [download] [config]
- [x] Human3.6M (TPAMI'2014) [download] [config]
- [x] KTH Action (ICPR'2004) [download] [config]
- [x] KittiCaltech Pedestrian (IJRR'2013) [download] [config]

</details>