# Quadratic Gaussian Splatting: High Quality Surface Reconstruction with Second-order Geometric Primitives
$$ \color{Pink}{\Huge{\textsf{ICCV\ 2025}}} $$
Project page | Paper | Quadric Surfel Rasterizer (python)
<div style="text-align: center;"> <figure style="margin: 0;"> <img src="assets/teaser2.png" alt="Image 1" width="1000"> </figure> </div>

This repo contains the official implementation for the paper "Quadratic Gaussian Splatting for Efficient and Detailed Surface Reconstruction." Following 2DGS, we also provide a Python demo that demonstrates the differentiable rasterization process for quadratic surfaces:
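The second-order primitive behind the demos can be pictured as a local quadric patch over the splat's tangent plane. The sketch below is purely conceptual (it is not the repo's CUDA rasterizer): the signs of the two principal curvatures decide whether the patch is a convex bowl or a saddle, matching the convex and saddle demo GIFs.

```python
def quadric_height(x, y, k1, k2):
    """Height of a second-order (quadric) patch above its tangent plane.

    k1, k2 play the role of principal curvatures along the two local axes.
    Illustrative sketch only; names and scaling are not taken from the repo.
    """
    return 0.5 * (k1 * x**2 + k2 * y**2)

def patch_type(k1, k2):
    """Classify the patch by the sign of the Gaussian curvature k1 * k2."""
    if k1 * k2 > 0:
        return "convex"  # elliptic patch (bowl), as in the convex demo
    if k1 * k2 < 0:
        return "saddle"  # hyperbolic patch, as in the saddle demos
    return "flat"        # degenerates toward a planar, 2DGS-like disk
```

With both curvatures zero the primitive reduces to a flat disk, which is why 2DGS can be seen as the first-order special case.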
<div style="display: flex; gap: 10px;"> <figure style="margin: 0;"> <img src="assets/QGS_demo_convex.gif" alt="Image 1" width="250"> </figure> <figure style="margin: 0;"> <img src="assets/QGS_demo_saddle1.gif" alt="Image 2" width="250"> </figure> <figure style="margin: 0;"> <img src="assets/QGS_demo_saddle2.gif" alt="Image 3" width="250"> </figure> </div>

## New Features
- 2025/08/11: We replaced the original rectangular bounding box with a more compact truncated cone-shaped bounding box, which significantly reduces the invalid rendering area and achieves a two-fold speedup.
|           | Mip-NeRF 360  |       | TNT           |       |
|:----------|:-------------:|:-----:|:-------------:|:-----:|
|           | Training time | FPS   | Training time | FPS   |
| 2DGS      | 1h5min        | 15.34 | 34min         | 31.66 |
| QGS       | 1h48min       | 7.61  | 2h            | 13.27 |
| QGS w/ TB | 1h13min       | 14.15 | 43min         | 25.36 |
## Installation
```bash
# download
git clone https://github.com/will-zzy/QGS.git
cd QGS
conda env create -f environment.yml
```
## Training
To train a scene, simply use
```bash
python train.py --conf_path ./config/base.yaml # or DTU.yaml / TNT.yaml
```
In base.yaml, you can adjust all configurable parameters; most remain consistent with 2DGS. Furthermore, we briefly experimented with curvature-related losses, such as a curvature distortion loss and a curvature flattening loss, but their performance was not satisfactory.
You need to modify `root_dir` in the `xxx.yaml` file to point to your dataset directory, for example:

```
xxx.yaml
└── root_dir
    ├── images
    └── sparse/0
```
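Before editing `root_dir`, it can be handy to verify the dataset actually follows this COLMAP-style layout. The helper below is hypothetical (QGS does not ship it); it just checks for the `images/` and `sparse/0/` directories shown above.

```python
from pathlib import Path

def check_colmap_layout(root_dir):
    """Return True if root_dir has the images/ + sparse/0 layout QGS expects.

    Hypothetical convenience helper, not part of the QGS codebase.
    """
    root = Path(root_dir)
    return (root / "images").is_dir() and (root / "sparse" / "0").is_dir()
```

If this returns False, run COLMAP (or reorganize the export) before pointing `root_dir` at the folder.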
Tips for adjusting the parameters on your own dataset:
- We observed that setting `pipeline.depth_ratio=1` enhances rendering quality. Additionally, by employing per-pixel reordering, we effectively eliminate the "disk-aliasing" artifacts present in 2DGS when using `depth_ratio=1`. Therefore, we recommend using `pipeline.depth_ratio=1` when aiming to improve rendering quality.
- In most scenarios, we recommend adjusting the `optimizer.densify_grad_threshold` and `optimizer.lambda_dist` parameters to achieve better reconstruction. The former controls the number of Gaussian primitives, while the latter controls the compactness of the primitives.
- For large scenes, especially aerial or street views, we suggest adjusting the number of training iterations based on the number of images. We provide `TNT_Courthouse.yaml` as an example.
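One lightweight way to try these tips without editing the YAML by hand is to apply dotted-key overrides to the loaded config dict. This is a hypothetical utility: the key names come from the tips above, but the exact nesting inside `base.yaml` (and the example values) are assumptions; in practice `conf` would come from `yaml.safe_load(open("config/base.yaml"))`.

```python
def apply_overrides(conf, overrides):
    """Apply dotted-key overrides (e.g. 'optimizer.lambda_dist') to a nested dict.

    Hypothetical helper; the nesting of QGS's real config may differ.
    """
    for dotted, value in overrides.items():
        node = conf
        keys = dotted.split(".")
        for k in keys[:-1]:
            node = node.setdefault(k, {})  # create missing sections
        node[keys[-1]] = value
    return conf

# Assumed baseline values, for illustration only.
conf = {"optimizer": {"densify_grad_threshold": 0.0002, "lambda_dist": 100.0}}
conf = apply_overrides(conf, {"optimizer.lambda_dist": 1000.0,
                              "pipeline.depth_ratio": 1})
```

Dumping the result back out with `yaml.safe_dump` gives a per-scene config you can pass to `--conf_path`.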
## Testing
To extract scene geometry, simply use:
```bash
python render.py --conf_path ./config/base.yaml
```
In the pipeline section of the base.yaml configuration file, you can set various parameters for mesh extraction, maintaining the same meanings as those in 2DGS.
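Mesh extraction fuses the rendered depth maps into a TSDF volume via Open3D. As a minimal sketch of the fusion rule applied per voxel (a running weighted average of truncated signed distances), assuming illustrative truncation and weight-cap values rather than the repo's defaults:

```python
import numpy as np

def tsdf_update(tsdf, weight, sdf_obs, trunc=0.04, max_weight=64.0):
    """One weighted-average TSDF fusion step for an array of voxels.

    Conceptual sketch of the update Open3D performs internally when
    integrating a depth map; parameter values are illustrative.
    """
    d = np.clip(sdf_obs / trunc, -1.0, 1.0)   # truncate and normalize the SDF
    mask = sdf_obs > -trunc                   # skip voxels far behind the surface
    upd = (tsdf * weight + d) / (weight + 1.0)
    new_tsdf = np.where(mask, upd, tsdf)
    new_w = np.where(mask, np.minimum(weight + 1.0, max_weight), weight)
    return new_tsdf, new_w
```

After all depth maps are integrated, the mesh is extracted as the zero level set of the fused TSDF (marching cubes in Open3D).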
## Evaluation
For geometry reconstruction on the DTU dataset, please download the preprocessed data from Drive or Hugging Face. You also need to download the ground truth DTU point cloud.
Next, modify the DTU.yaml configuration file by setting the load_model_path to the path of your trained model and dataset_GT_path to the path of the ground truth dataset. After making these changes, simply execute the following commands to perform the evaluation:
```bash
python scripts/eval_dtu/eval.py --conf_path ./config/DTU.yaml
```
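The DTU numbers reported below are Chamfer distances. As a brute-force sketch of the metric (the official DTUeval-python script additionally masks, crops, and downsamples the clouds before measuring):

```python
import numpy as np

def chamfer_distance(pred, gt):
    """Mean of accuracy and completeness between two point sets (N,3), (M,3).

    Accuracy: mean distance from each predicted point to its nearest GT point;
    completeness: the reverse. Brute-force O(N*M) sketch for small clouds.
    """
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)  # (N, M)
    accuracy = d.min(axis=1).mean()
    completeness = d.min(axis=0).mean()
    return 0.5 * (accuracy + completeness)
```

For real scans, a KD-tree (e.g. `scipy.spatial.cKDTree`) replaces the dense distance matrix.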
For the TNT dataset, please download the preprocessed TNT_data. Additionally, you need to download the ground truth TNT_GT.
Next, modify the TNT.yaml configuration file by setting load_model_path to the path of your trained model and dataset_GT_path to the path of the ground truth dataset. After making these changes, simply execute the following commands to perform the evaluation:
```bash
python scripts/eval_tnt/run.py --conf_path ./config/TNT.yaml -m <path to pre-trained model>
```
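TNT results are F1 scores at a per-scene distance threshold. A brute-force sketch of the metric (the official TanksAndTemples script also aligns and crops the clouds first, so this is conceptual, not a drop-in replacement):

```python
import numpy as np

def f_score(pred, gt, tau):
    """F1 at distance threshold tau between two point sets (N,3), (M,3).

    Precision: fraction of predicted points within tau of the GT cloud;
    recall: fraction of GT points within tau of the prediction.
    """
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)  # (N, M)
    precision = (d.min(axis=1) <= tau).mean()
    recall = (d.min(axis=0) <= tau).mean()
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```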
We also provide DTU and TnT evaluation results:
<details> <summary><span style="font-weight: bold;">Table Results</span></summary>

Chamfer distance on the DTU dataset (lower is better)

|           | 24   | 37   | 40   | 55   | 63   | 65   | 69   | 83   | 97   | 105  | 106  | 110  | 114  | 118  | 122  | Mean  |
|-----------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|-------|
| Paper     | 0.38 | 0.62 | 0.37 | 0.38 | 0.75 | 0.55 | 0.51 | 1.12 | 0.68 | 0.61 | 0.46 | 0.58 | 0.35 | 0.41 | 0.40 | 0.545 |
| Reproduce | 0.38 | 0.64 | 0.35 | 0.34 | 0.77 | 0.55 | 0.52 | 1.11 | 0.68 | 0.60 | 0.43 | 0.58 | 0.35 | 0.41 | 0.37 | 0.539 |
</details>

<details> <summary><span style="font-weight: bold;">Table Results</span></summary>

F1 scores on the TnT dataset (higher is better)

|           | Barn | Caterpillar | Ignatius | Truck | Meetingroom | Courthouse | Mean |
|-----------|------|-------------|----------|-------|-------------|------------|------|
| Paper     | 0.55 | 0.40        | 0.81     | 0.64  | 0.31        | 0.28       | 0.50 |
| Reproduce | 0.56 | 0.41        | 0.80     | 0.67  | 0.39        | 0.27       | 0.52 |
</details>

## Acknowledgements
This project is built upon 2DGS. The TSDF fusion for extracting meshes is based on Open3D. The rendering script for Mip-NeRF 360 is adopted from Multinerf, while the evaluation scripts for the DTU and Tanks and Temples datasets are taken from DTUeval-python and TanksAndTemples, respectively. We thank all the authors for their great repos.
## Citation
If you find our code or paper helpful, please consider citing:
```bibtex
@inproceedings{zhang2025quadraticgaussiansplattinghigh,
  author    = {Ziyu Zhang and Binbin Huang and Hanqing Jiang and Liyang Zhou and Xiaojun Xiang and Shunhan Shen},
  title     = {{Quadratic Gaussian Splatting: High Quality Surface Reconstruction with Second-order Geometric Primitives}},
  booktitle = {IEEE International Conference on Computer Vision (ICCV)},
  year      = {2025},
}
```
