# 3dgrut

Ray tracing and hybrid rasterization of Gaussian particles
<p align="center"> <img width="100%" src="assets/nvidia-hq-playground.gif"> </p>
This repository provides the official implementations of 3D Gaussian Ray Tracing (3DGRT) and 3D Gaussian Unscented Transform (3DGUT). Unlike traditional methods that rely on splatting, 3DGRT performs ray tracing of volumetric Gaussian particles instead. This enables support for distorted cameras with complex, time-dependent effects such as rolling shutters, while also efficiently simulating secondary rays required for rendering phenomena like reflection, refraction, and shadows. However, 3DGRT requires dedicated ray-tracing hardware and remains slower than 3DGS.
To mitigate this limitation, we also propose 3DGUT, which enables support for distorted cameras with complex, time-dependent effects within a rasterization framework, maintaining the efficiency of rasterization methods. By aligning the rendering formulations of 3DGRT and 3DGUT, we introduce a hybrid approach called 3DGRUT. This technique allows for rendering primary rays via rasterization and secondary rays via ray tracing, combining the strengths of both methods for improved performance and flexibility.
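For intuition, both renderers evaluate the response of a volumetric Gaussian particle along a ray. The sketch below illustrates the standard closed-form computation in the isotropic (identity-covariance) case; for a general covariance Σ the point of maximal response is t* = ((μ−o)ᵀΣ⁻¹d)/(dᵀΣ⁻¹d). This is a hedged illustration only — the function names are ours and do not correspond to this codebase's API:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def max_response_t(mu, o, d):
    """Ray parameter t* of maximal particle response along x(t) = o + t*d,
    assuming an isotropic (identity-covariance) Gaussian for simplicity."""
    diff = [m - p for m, p in zip(mu, o)]
    return dot(diff, d) / dot(d, d)

def response(mu, opacity, x):
    """Unnormalized Gaussian density: opacity * exp(-0.5 * |x - mu|^2)."""
    delta = [a - b for a, b in zip(x, mu)]
    return opacity * math.exp(-0.5 * dot(delta, delta))

mu = (0.0, 0.0, 0.0)   # particle mean
o = (-2.0, 1.0, 0.0)   # ray origin
d = (1.0, 0.0, 0.0)    # unit ray direction

t_star = max_response_t(mu, o, d)                      # closest approach along the ray
x_star = tuple(p + t_star * v for p, v in zip(o, d))   # point of maximal response
peak = response(mu, 1.0, x_star)
```

Ray tracing (3DGRT) evaluates this response by intersecting rays with per-particle proxy geometry, while rasterization (3DGUT) approximates the projection of the same particles; aligning these two formulations is what makes the hybrid 3DGRUT rendering consistent.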
For projects that require a fast, modular, and production-ready Gaussian Splatting framework, we recommend using gsplat, which also provides support for 3DGUT.
**3D Gaussian Ray Tracing: Fast Tracing of Particle Scenes**
Nicolas Moenne-Loccoz*, Ashkan Mirzaei*, Or Perel, Riccardo De Lutio, Janick Martinez Esturo, Gavriel State, Sanja Fidler, Nicholas Sharp^, Zan Gojcic^ (*,^ indicates equal contribution)
SIGGRAPH Asia 2024 (Journal Track)
Project page / Paper / Video / BibTeX

**3DGUT: Enabling Distorted Cameras and Secondary Rays in Gaussian Splatting**
Qi Wu*, Janick Martinez Esturo*, Ashkan Mirzaei, Nicolas Moenne-Loccoz, Zan Gojcic (* indicates equal contribution)
CVPR 2025 (Oral)
Project page / Paper / Video / BibTeX
## 🔥 News
- ✅[2026/01] Physically-Plausible ISP support.
- ✅[2025/08] Support for the 3DGRT and 3DGS/3DGRT pipelines is now available with Vulkan API as part of the Vulkan Gaussian Splatting Project. 3DGUT will also be available soon.
- ✅[2025/07] Support for datasets with multiple sensors (only for COLMAP-style datasets).
- ✅[2025/07] Support for Windows has been added.
- ✅[2025/06] Playground supports PBR meshes and environment maps.
- ✅[2025/04] Support for image masks.
- ✅[2025/04] SparseAdam support.
- ✅[2025/04] MCMC densification strategy support.
- ✅[2025/04] Stable release v1.0.0 tagged.
- ✅[2025/03] Initial code release!
- ✅[2025/02] 3DGUT was accepted to CVPR 2025!
- ✅[2024/08] 3DGRT was accepted to SIGGRAPH Asia 2024!
## Contents
- 🔥 News
- Contents
- 🔧 1. Dependencies and Installation
- 💻 2. Train 3DGRT or 3DGUT scenes
- 🎥 3. Rendering from Checkpoints
- 📋 4. Evaluations
- 🛝 5. Interactive Playground GUI
- 🎓 6. Citations
- 🙏 7. Acknowledgements
## 🔧 1. Dependencies and Installation
- CUDA 11.8+ compatible system
- For good performance with 3DGRT, we recommend using an NVIDIA GPU with Ray Tracing (RT) cores.
- Linux is supported via the included install_env.sh script; Windows is supported via install_env.ps1 (see below).
<details>
<summary> NOTE: Using gcc-11</summary>
</br>
First, install gcc 11:

```bash
sudo apt-get install gcc-11 g++-11
```

Then run the install script with the optional WITH_GCC11 flag, which additionally configures the conda environment to use gcc-11:

```bash
./install_env.sh 3dgrut WITH_GCC11
```
</details>
<details>
<summary> NOTE: Blackwell GPU support</summary>
</br>
The current codebase uses CUDA 11.8, which is not compatible with the new Blackwell GPUs (e.g., RTX 5090) or GPUs with compute capability 10.0+.
An experimental build script has been kindly implemented by <a href="https://www.github.com/johnnynunez">@johnnynunez</a> to support Blackwell GPUs. To enable it, run the install script with both the WITH_GCC11 flag and a CUDA version. Currently, only CUDA 12.8.1 is supported:

```bash
CUDA_VERSION=12.8.1 ./install_env.sh 3dgrut_cuda12 WITH_GCC11
```

To build the docker image instead, you can use:

```bash
docker build --build-arg CUDA_VERSION=12.8.1 -t 3dgrut:cuda128 .
```
</details>
</br>
To set up the environment using conda, first clone the repository and run the ./install_env.sh script:

```bash
git clone --recursive https://github.com/nv-tlabs/3dgrut.git
cd 3dgrut
# You can install each component step by step following install_env.sh
chmod +x install_env.sh
./install_env.sh 3dgrut
conda activate 3dgrut
```
On Windows, you can use the install_env.ps1 script to install the environment.
### Running with Docker
Build the docker image:

```bash
git clone --recursive https://github.com/nv-tlabs/3dgrut.git
cd 3dgrut
docker build . -t 3dgrut
```

Run it:

```bash
xhost +local:root
docker run --rm -it --gpus=all --net=host --ipc=host -v $PWD:/workspace --runtime=nvidia -e DISPLAY 3dgrut
```
> [!NOTE]
> Remember to set the DISPLAY environment variable if you are running on a remote server from the command line.
## 💻 2. Train 3DGRT or 3DGUT scenes
We provide different configurations for training 3DGRT and 3DGUT models on common benchmark datasets. For example, you can download the NeRF Synthetic, MipNeRF360, or ScanNet++ datasets and then run one of the following commands:
```bash
# Train Lego with 3DGRT & 3DGUT
python train.py --config-name apps/nerf_synthetic_3dgrt.yaml path=data/nerf_synthetic/lego out_dir=runs experiment_name=lego_3dgrt
python train.py --config-name apps/nerf_synthetic_3dgut.yaml path=data/nerf_synthetic/lego out_dir=runs experiment_name=lego_3dgut

# Train Bonsai
python train.py --config-name apps/colmap_3dgrt.yaml path=data/mipnerf360/bonsai out_dir=runs experiment_name=bonsai_3dgrt dataset.downsample_factor=2
python train.py --config-name apps/colmap_3dgut.yaml path=data/mipnerf360/bonsai out_dir=runs experiment_name=bonsai_3dgut dataset.downsample_factor=2

# Train Scannet++
python train.py --config-name apps/scannetpp_3dgrt.yaml path=data/scannetpp/0a5c013435/dslr out_dir=runs experiment_name=0a5c013435_3dgrt
python train.py --config-name apps/scannetpp_3dgut.yaml path=data/scannetpp/0a5c013435/dslr out_dir=runs experiment_name=0a5c013435_3dgut
```
We also support the MCMC densification strategy and the selective Adam optimizer for both 3DGRT and 3DGUT.

To enable MCMC, use:

```bash
python train.py --config-name apps/colmap_3dgrt_mcmc.yaml path=data/mipnerf360/bonsai out_dir=runs experiment_name=bonsai_3dgrt dataset.downsample_factor=2
python train.py --config-name apps/colmap_3dgut_mcmc.yaml path=data/mipnerf360/bonsai out_dir=runs experiment_name=bonsai_3dgut dataset.downsample_factor=2
```

To enable selective Adam, use:

```bash
python train.py --config-name apps/colmap_3dgrt.yaml path=data/mipnerf360/bonsai out_dir=runs experiment_name=bonsai_3dgrt dataset.downsample_factor=2 optimizer.type=selective_adam
python train.py --config-name apps/colmap_3dgut.yaml path=data/mipnerf360/bonsai out_dir=runs experiment_name=bonsai_3dgut dataset.downsample_factor=2 optimizer.type=selective_adam
```
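The key=value arguments in these commands are Hydra-style dotted overrides applied on top of the selected YAML config. As a rough illustration of how such overrides compose — a simplified stand-in for intuition, not the actual Hydra/OmegaConf machinery used by train.py — the merge behaves roughly like:

```python
def apply_override(cfg: dict, override: str) -> None:
    """Apply a single 'a.b.c=value' style override to a nested dict in place."""
    dotted, raw = override.split("=", 1)
    keys = dotted.split(".")
    node = cfg
    for key in keys[:-1]:
        node = node.setdefault(key, {})
    # Crude literal parsing: try int, then float, then fall back to string.
    for cast in (int, float):
        try:
            node[keys[-1]] = cast(raw)
            return
        except ValueError:
            pass
    node[keys[-1]] = raw

# Hypothetical defaults standing in for values loaded from the YAML config
cfg = {"dataset": {"downsample_factor": 1}, "optimizer": {"type": "adam"}}
for ov in ["dataset.downsample_factor=2",
           "optimizer.type=selective_adam",
           "experiment_name=bonsai_3dgut"]:
    apply_override(cfg, ov)
```

After the loop, cfg reflects the command line: dataset.downsample_factor is 2 and optimizer.type is "selective_adam", which is why the same base config (e.g., apps/colmap_3dgut.yaml) can serve many experiments.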
If you use MCMC or selective Adam in your research, please cite 3dgs-mcmc, taming-3dgs, and the gsplat library, from which the code was adapted (links to the code are provided in the source files).
[!Note] For ScanNet++, we expect th
