# DASH

**[ICCV 2025] DASH: Self-Supervised Decomposition and 4D Hash Encoding for Real-Time Dynamic Scene Rendering**
<p align="center"> <a href="">Jie Chen</a>, <a href="">Zhangchi Hu</a>, <a href="">Peixi Wu</a>, <a href="">Huyue Zhu</a>, <br> <a href="">Hebei Li</a>, <a href="">Xiaoyan Sun</a> <br> University of Science and Technology of China <br> <b>ICCV 2025</b> </p> <div align="center"> <a href='https://arxiv.org/abs/2507.19141'><img src='https://img.shields.io/badge/Paper-arXiv-red'></a> <a href='https://github.com/chenj02/DASH/blob/main/LICENSE'><img src='https://img.shields.io/badge/License-MIT-green'></a> <br> <br> </div> <p align="center"> <img src="assets/framework.png" width="100%"/> </p>

## Quick Start
### Dataset Preparation

To train DASH, download the following datasets:

- Neural 3D Video Dataset
- Technicolor dataset

We follow 4D-GS for preprocessing the Neural 3D Video dataset and STGS for the Technicolor dataset; many thanks for their excellent work.
### Installation

```shell
git clone https://github.com/chenj02/DASH.git
cd DASH
conda env create -f environment.yaml
conda activate DASH
pip install -e ./submodules/diff-gaussian-rasterization
pip install -e ./submodules/simple-knn
```
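After installation, the editable packages should be importable from the active environment. As a quick sanity check (not part of the repo; the module names below are assumptions based on the submodule directory names), a few lines of Python can confirm the interpreter finds them:

```python
# Hypothetical sanity check (not shipped with DASH): confirm that the
# packages installed above can be located by the current interpreter.
# Module names are assumed from the submodule directory names.
import importlib.util

def check_env(modules=("torch", "simple_knn", "diff_gaussian_rasterization")):
    """Map each module name to whether the interpreter can locate it."""
    return {m: importlib.util.find_spec(m) is not None for m in modules}

if __name__ == "__main__":
    for name, ok in check_env().items():
        print(f"{name}: {'ok' if ok else 'MISSING'}")
```

Any `MISSING` entry usually means the corresponding `pip install -e` step failed or a different environment is active.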
### Training

```shell
bash train.sh
```

or

```shell
CUDA_VISIBLE_DEVICES=0 python train.py -s <input path> \
    --model_path <output path> \
    --conf <config path> \
    --resolution 1  # for the Technicolor dataset
```
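To sweep several scenes, the flags above can be assembled programmatically. A minimal launcher sketch, where the scene names, data/output paths, and config file are placeholders to adapt to your local layout:

```python
# Hypothetical batch launcher (scene names, data/output paths, and the
# config file are placeholders): rebuilds the train.py command shown
# above for each scene. Swap print() for subprocess.run() to execute.
import shlex

def build_train_cmd(scene_dir, out_dir, conf, resolution=None):
    """Assemble the train.py argument list used in the snippet above."""
    cmd = ["python", "train.py", "-s", scene_dir,
           "--model_path", out_dir, "--conf", conf]
    if resolution is not None:  # e.g. 1 for the Technicolor dataset
        cmd += ["--resolution", str(resolution)]
    return cmd

for scene in ["coffee_martini", "flame_salmon"]:  # placeholder scene names
    print(shlex.join(build_train_cmd(f"data/{scene}", f"output/{scene}",
                                     "configs/example.yaml")))
```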
### Render

```shell
bash render.sh
```

or

```shell
CUDA_VISIBLE_DEVICES=0 python render.py -s <input path> \
    --skip_train \
    --model_path <output path> \
    --conf <config path> \
    --resolution 1  # for the Technicolor dataset
```
### Evaluation

```shell
python metrics.py -m <output path>
```
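The evaluation compares rendered frames against ground truth; PSNR is the usual headline number for these benchmarks (the exact set of metrics reported is repo-specific and not restated here). For reference, PSNR follows directly from the mean squared error:

```python
# Standard PSNR formula, shown only to illustrate the metric; this is
# not DASH's metrics.py implementation.
import math

def psnr(mse, peak=1.0):
    """Peak signal-to-noise ratio in dB for pixel values in [0, peak]."""
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10((peak * peak) / mse)

print(psnr(0.01))  # MSE of 0.01 on [0, 1] images -> 20.0 dB
```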
## Citation

If you find our work useful, please cite:

```bibtex
@inproceedings{chen2025dash,
  title     = {DASH: Self-Supervised Decomposition and 4D Hash Encoding for Real-Time Dynamic Scene Rendering},
  author    = {Chen, Jie and Hu, Zhangchi and Wu, Peixi and Zhu, Huyue and Li, Hebei and Sun, Xiaoyan},
  booktitle = {International Conference on Computer Vision (ICCV)},
  year      = {2025}
}
```
## Acknowledgements

Our code is based on 4D-GS and Grid4D. We thank the authors for their excellent work!
