
Periodic Vibration Gaussian: Dynamic Urban Scene Reconstruction and Real-time Rendering

[Project] [Paper]

Periodic Vibration Gaussian: Dynamic Urban Scene Reconstruction and Real-time Rendering,
Yurui Chen, Chun Gu, Junzhe Jiang, Xiatian Zhu, Li Zhang
IJCV 2026

Official implementation of "Periodic Vibration Gaussian: Dynamic Urban Scene Reconstruction and Real-time Rendering".

🛠️ Pipeline

<div align="center"> <img src="assets/pipeline.png"/> </div><br/>

Get started

Environment

# Clone the repo.
git clone https://github.com/fudan-zvg/PVG.git
cd PVG

# Make a conda environment.
conda create --name pvg python=3.9
conda activate pvg

# Install requirements.
pip install -r requirements.txt

# Install simple-knn
git clone https://gitlab.inria.fr/bkerbl/simple-knn.git
pip install ./simple-knn

# Install a modified Gaussian splatting rasterizer (for feature rendering)
git clone --recursive https://github.com/SuLvXiangXin/diff-gaussian-rasterization
pip install ./diff-gaussian-rasterization

# Install nvdiffrast (for Envlight)
git clone https://github.com/NVlabs/nvdiffrast
pip install ./nvdiffrast
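After installing, you can sanity-check that the compiled extensions are importable. The module names below (`simple_knn`, `diff_gaussian_rasterization`, `nvdiffrast`) are assumptions based on the package names above; adjust them if your installs expose different names.

```python
import importlib.util

def check_modules(names):
    """Return a dict mapping each module name to whether it can be found."""
    return {name: importlib.util.find_spec(name) is not None for name in names}

# Module names assumed from the packages installed above; adjust if needed.
status = check_modules(["torch", "simple_knn", "diff_gaussian_rasterization", "nvdiffrast"])
for name, ok in status.items():
    print(f"{name}: {'OK' if ok else 'MISSING'}")
```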

Data preparation

Create a directory for the data: mkdir data.

Waymo dataset

The 4 preprocessed Waymo scenes used for the results in Table 1 of our paper can be downloaded here (optional: the corresponding labels). Please unzip them into the data directory.

First, prepare the KITTI-format Waymo dataset:

# Given the following dataset, we convert it to kitti-format
# data
# └── waymo
#     └── waymo_format
#         └── training
#             └── segment-xxxxxx

# Install some optional data-processing packages
pip install -r requirements-data.txt 

# Convert the waymo dataset to kitti-format
python scripts/waymo_converter.py waymo --root-path ./data/waymo/ --out-dir ./data/waymo/ --workers 128 --extra-tag waymo
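Before launching the converter (which spawns many workers), it can help to confirm the expected input layout exists. This is a small illustrative helper, not part of the repo; the default path mirrors the directory sketch above.

```python
from pathlib import Path

def list_waymo_segments(root="data/waymo/waymo_format/training"):
    """List the segment-* directories the converter expects as input."""
    root = Path(root)
    if not root.is_dir():
        raise FileNotFoundError(f"expected Waymo training directory at {root}")
    return sorted(p.name for p in root.iterdir() if p.name.startswith("segment-"))
```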

Then use the example script scripts/extract_scenes_waymo.py to extract the scenes listed in StreetSurf from the KITTI-format Waymo dataset.

Following StreetSurf, we use Segformer to extract sky masks and arrange the data as follows:

data
└── waymo_scenes
    └── sequence_id
        ├── calib
        │   └── frame_id.txt
        ├── image_0{0, 1, 2, 3, 4}
        │   └── frame_id.png
        ├── sky_0{0, 1, 2, 3, 4}
        │   └── frame_id.png
        ├── pose
        │   └── frame_id.txt
        └── velodyne
            └── frame_id.bin
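A quick way to catch a misplaced scene before training is to check each sequence directory against the layout above. This checker is an illustrative sketch, not a repo script; the camera indices 0-4 follow the image_0{0, 1, 2, 3, 4} / sky_0{0, 1, 2, 3, 4} directories shown.

```python
from pathlib import Path

# Cameras 0-4, matching the image_0{0..4} / sky_0{0..4} directories above.
CAMERAS = range(5)

def missing_waymo_dirs(sequence_dir):
    """Return the names of expected subdirectories that are absent."""
    seq = Path(sequence_dir)
    expected = ["calib", "pose", "velodyne"]
    expected += [f"image_0{c}" for c in CAMERAS]
    expected += [f"sky_0{c}" for c in CAMERAS]
    return [name for name in expected if not (seq / name).is_dir()]
```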

We provide an example script scripts/extract_mask_waymo.py to extract the sky masks from the extracted Waymo dataset; follow the instructions here to set up the Segformer environment.
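For intuition, a sky mask is just a binarized semantic segmentation: pixels predicted as the sky class become white, everything else black. The sketch below assumes the Cityscapes trainId convention, where sky is class 10; verify the class id for your Segformer checkpoint, and see scripts/extract_mask_waymo.py for the actual pipeline.

```python
import numpy as np

# Sky label under the Cityscapes trainId convention; verify for your checkpoint.
SKY_CLASS_ID = 10

def sky_mask(semantic_map):
    """Turn a per-pixel class-id map into a 0/255 sky mask (uint8)."""
    return (np.asarray(semantic_map) == SKY_CLASS_ID).astype(np.uint8) * 255
```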

KITTI dataset

The 3 preprocessed KITTI scenes used for the results in Table 1 of our paper can be downloaded here. Please unzip them into the data directory.

Put the KITTI-MOT dataset in the data directory. Following StreetSurf, we use Segformer to extract sky masks and arrange the data as follows:

data
└── kitti_mot
    └── training
        ├── calib
        │   └── sequence_id.txt
        ├── image_0{2, 3}
        │   └── sequence_id
        │       └── frame_id.png
        ├── sky_0{2, 3}
        │   └── sequence_id
        │       └── frame_id.png
        ├── oxts
        │   └── sequence_id.txt
        └── velodyne
            └── sequence_id
                └── frame_id.bin

We also provide an example script scripts/extract_mask_kitti.py to extract the sky masks from the KITTI dataset.

Training

# Waymo image reconstruction
CUDA_VISIBLE_DEVICES=0 python train.py \
--config configs/waymo_reconstruction.yaml \
source_path=data/waymo_scenes/0145050 \
model_path=eval_output/waymo_reconstruction/0145050

# Waymo novel view synthesis
CUDA_VISIBLE_DEVICES=0 python train.py \
--config configs/waymo_nvs.yaml \
source_path=data/waymo_scenes/0145050 \
model_path=eval_output/waymo_nvs/0145050

# KITTI image reconstruction
CUDA_VISIBLE_DEVICES=0 python train.py \
--config configs/kitti_reconstruction.yaml \
source_path=data/kitti_mot/training/image_02/0001 \
model_path=eval_output/kitti_reconstruction/0001 \
start_frame=380 end_frame=431

# KITTI novel view synthesis
CUDA_VISIBLE_DEVICES=0 python train.py \
--config configs/kitti_nvs.yaml \
source_path=data/kitti_mot/training/image_02/0001 \
model_path=eval_output/kitti_nvs/0001 \
start_frame=380 end_frame=431
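The train.py commands above combine a YAML config with trailing key=value overrides such as start_frame=380. As a minimal sketch of that pattern (the actual parser in train.py may differ, e.g. in type handling), overrides can be merged over a base config like this:

```python
def apply_overrides(config, overrides):
    """Merge CLI-style key=value strings into a config dict; later entries win."""
    merged = dict(config)
    for item in overrides:
        key, _, value = item.partition("=")
        # Keep integer-looking values numeric, everything else as a string.
        merged[key] = int(value) if value.isdigit() else value
    return merged

base = {"source_path": None, "start_frame": 0}
cfg = apply_overrides(base, ["source_path=data/kitti_mot/training/image_02/0001",
                             "start_frame=380", "end_frame=431"])
```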

After training, evaluation results can be found in the {EXPERIMENT_DIR}/eval directory.

Evaluating

You can also use the following command to evaluate.

CUDA_VISIBLE_DEVICES=0 python evaluate.py \
--config configs/kitti_reconstruction.yaml \
source_path=data/kitti_mot/training/image_02/0001 \
model_path=eval_output/kitti_reconstruction/0001 \
start_frame=380 end_frame=431
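Image-reconstruction evaluations of this kind typically report PSNR between rendered and ground-truth frames. The snippet below computes PSNR from scratch as an illustration; evaluate.py may compute additional metrics (e.g. SSIM, LPIPS) and is the authoritative implementation.

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    """Peak signal-to-noise ratio between two images with values in [0, max_val]."""
    pred = np.asarray(pred, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)
    mse = np.mean((pred - gt) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```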

Automatically removing the dynamics

You can use the following command to automatically remove the dynamic objects; the rendered results will be saved in the {EXPERIMENT_DIR}/separation directory.

CUDA_VISIBLE_DEVICES=1 python separate.py \
--config configs/waymo_reconstruction.yaml \
source_path=data/waymo_scenes/0158150 \
model_path=eval_output/waymo_reconstruction/0158150
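As a toy illustration of the idea behind the separation (not the repo's actual criterion, which lives in separate.py): PVG's Gaussians carry temporal attributes, so short-lived primitives can be treated as dynamic and long-lived ones as static background. The `lifespans` array and the threshold below are hypothetical names for this sketch.

```python
import numpy as np

def split_static_dynamic(positions, lifespans, threshold=0.5):
    """Partition Gaussian centers into static/dynamic sets by temporal extent.

    Gaussians whose (illustrative) lifespan exceeds `threshold` are kept as
    static background; short-lived ones are treated as dynamic objects.
    """
    positions = np.asarray(positions)
    keep = np.asarray(lifespans) > threshold
    return positions[keep], positions[~keep]
```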

🎥 Videos

🎞️ Demo

Demo Video

🎞️ Rendered RGB, Depth and Semantic

https://github.com/fudan-zvg/PVG/assets/83005605/60337a98-f92c-4465-ab45-2ee121413114

https://github.com/fudan-zvg/PVG/assets/83005605/f45c0a91-26b6-46d9-895c-bf13786f94d2

https://github.com/fudan-zvg/PVG/assets/83005605/0ed679d6-5e62-4923-b2cb-02c587ed468c

https://github.com/fudan-zvg/PVG/assets/83005605/3ffda292-1b73-43d3-916a-b524f143f0c9

🎞️ Image Reconstruction on Waymo

Comparison with static methods

https://github.com/fudan-zvg/PVG/assets/83005605/93e32945-7e9a-454a-8c31-5563125de95b

https://github.com/fudan-zvg/PVG/assets/83005605/f3c02e43-bb86-428d-b27b-73c4a7857bc7

Comparison with dynamic methods

https://github.com/fudan-zvg/PVG/assets/83005605/73a82171-9e78-416f-a770-f6f4239d80ca

https://github.com/fudan-zvg/PVG/assets/83005605/e579f8b8-d31e-456b-a943-b39d56073b94

🎞️ Novel View Synthesis on Waymo

https://github.com/fudan-zvg/PVG/assets/83005605/37393332-5d34-4bd0-8285-40bf938b849f

📜 BibTeX

@article{chen2026periodic,
  title={Periodic vibration gaussian: Dynamic urban scene reconstruction and real-time rendering},
  author={Chen, Yurui and Gu, Chun and Jiang, Junzhe and Zhu, Xiatian and Zhang, Li},
  journal={International Journal of Computer Vision},
  year={2026},
}