MILo
[SIGGRAPH Asia 2025 - TOG] Official implementation of MILo: Mesh-In-the-Loop Gaussian Splatting for Detailed and Efficient Surface Reconstruction
| <a href="https://anttwo.github.io/milo">Webpage</a> | <a href="https://arxiv.org/abs/2506.24096">arXiv</a> | <a href="https://www.youtube.com/watch?v=rOBs2yyYaJM">Presentation video</a> | <a href="https://drive.google.com/drive/folders/1Bf7DM2DFtQe4J63bEFLceEycNf4qTcqm?usp=sharing">Data</a> |

Abstract
Our method introduces a novel differentiable mesh extraction framework that operates during the optimization of 3D Gaussian Splatting representations. At every training iteration, we differentiably extract a mesh—including both vertex locations and connectivity—only from Gaussian parameters. This enables gradient flow from the mesh to Gaussians, allowing us to promote bidirectional consistency between volumetric (Gaussians) and surface (extracted mesh) representations. This approach guides Gaussians toward configurations better suited for surface reconstruction, resulting in higher quality meshes with significantly fewer vertices. Our framework can be plugged into any Gaussian splatting representation, increasing performance while generating an order of magnitude fewer mesh vertices. MILo makes the reconstructions more practical for downstream applications like physics simulations and animation.
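The gradient flow from mesh to Gaussians described above can be illustrated with a toy sketch. This is not MILo's actual extraction (which builds mesh connectivity via Delaunay triangulation); the function names and the linear vertex model below are illustrative assumptions. The point is only that when mesh vertices are a differentiable function of Gaussian parameters, a loss defined on the mesh yields gradients on the Gaussians.

```python
import numpy as np

# Toy illustration (NOT MILo's actual extraction): each mesh vertex is a
# convex combination of Gaussian centers, so a loss on the vertices
# differentiates back to the centers via the chain rule.
def extract_vertices(centers, weights):
    # centers: (G, 3) Gaussian centers; weights: (V, G), rows sum to 1
    return weights @ centers                      # (V, 3) vertex positions

def mesh_loss(centers, weights, target):
    v = extract_vertices(centers, weights)
    return 0.5 * np.sum((v - target) ** 2)

def mesh_loss_grad(centers, weights, target):
    # chain rule: dL/d(centers) = W^T (V - target)
    v = extract_vertices(centers, weights)
    return weights.T @ (v - target)               # (G, 3)
```

A gradient step on `mesh_loss_grad` nudges the Gaussian centers toward configurations whose extracted vertices match the target surface, which is the bidirectional-consistency idea in miniature.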
To-do List
- ⬛ Implement a simple training viewer using the <a href="https://github.com/graphdeco-inria/graphdecoviewer">GraphDeco viewer</a>.
- ⬛ Add the mesh-based rendering evaluation scripts in ./milo/eval/mesh_nvs.
- ✅ Add DTU training and evaluation scripts.
- ✅ Add low-res and very-low-res training for light output meshes (under 50MB and under 20MB).
- ✅ Add T&T evaluation scripts in ./milo/eval/tnt/.
- ✅ Add Blender add-on (for mesh-based editing and animation) to the repo.
- ✅ Clean code.
- ✅ Basic refactoring.
License
<details> <summary>Click here to see content.</summary><br>This project builds on existing open-source implementations of various projects cited in the Acknowledgements section.
Specifically, it builds on the original implementation of 3D Gaussian Splatting; as a result, parts of this code are licensed under the Gaussian-Splatting License (see ./LICENSE.md).
This codebase also builds on various other repositories, such as Nvdiffrast; please refer to the license files of the submodules for more details.
</details>

0. Quickstart
<details> <summary>Click here to see content.</summary>Please start by creating or downloading a COLMAP dataset, such as <a href="https://drive.google.com/drive/folders/1Bf7DM2DFtQe4J63bEFLceEycNf4qTcqm?usp=sharing">our COLMAP run for the Ignatius scene from the Tanks&Temples dataset</a>. You can move the Ignatius directory to ./milo/data.
After installing MILo as described in the next section, you can reconstruct a surface mesh from images by going to the ./milo/ directory and running the following commands:
# Training for an outdoor scene
python train.py -s ./data/Ignatius -m ./output/Ignatius --imp_metric outdoor --rasterizer radegs
# Saves mesh as PLY with vertex colors after training
python mesh_extract_sdf.py -s ./data/Ignatius -m ./output/Ignatius --rasterizer radegs
Please change --imp_metric outdoor to --imp_metric indoor if your scene is indoor.
These commands use the lightest version of our approach, resulting in a small number of Gaussians and a light mesh. You can increase the number of Gaussians by adding --dense_gaussians, and improve the robustness to exposure variations with --decoupled_appearance as follows:
# Training with dense gaussians and better appearance model
python train.py -s ./data/Ignatius -m ./output/Ignatius --imp_metric outdoor --rasterizer radegs --dense_gaussians --decoupled_appearance
# Saves mesh as PLY with vertex colors after training
python mesh_extract_sdf.py -s ./data/Ignatius -m ./output/Ignatius --rasterizer radegs
Please refer to the following sections for additional details on our training and mesh extraction scripts, including:
- How to use other rasterizers
- How to train MILo with high-resolution meshes
- Various mesh extraction methods
- How to easily integrate MILo's differentiable GS-to-mesh pipeline into your own GS project
1. Installation
<details> <summary>Click here to see content.</summary>Clone this repository.
git clone https://github.com/Anttwo/MILo.git --recursive
Install dependencies.
Please start by creating an environment:
conda create -n milo python=3.9
conda activate milo
Then, specify your own CUDA paths depending on your CUDA version:
# You can specify your own cuda path (depending on your CUDA version)
export CPATH=/usr/local/cuda-11.8/targets/x86_64-linux/include:$CPATH
export LD_LIBRARY_PATH=/usr/local/cuda-11.8/targets/x86_64-linux/lib:$LD_LIBRARY_PATH
export PATH=/usr/local/cuda-11.8/bin:$PATH
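If your CUDA toolkit is installed elsewhere, the three exports above differ only in the toolkit root, so a convenient variant (an assumption on our part, relying on the standard CUDA toolkit directory layout) is to set the root once and derive the paths:

```shell
# Set the toolkit root once; the three paths are derived from it.
# Adjust CUDA_ROOT to match your installed CUDA version and location.
CUDA_ROOT="${CUDA_ROOT:-/usr/local/cuda-11.8}"
export CPATH="$CUDA_ROOT/targets/x86_64-linux/include${CPATH:+:$CPATH}"
export LD_LIBRARY_PATH="$CUDA_ROOT/targets/x86_64-linux/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export PATH="$CUDA_ROOT/bin:$PATH"
```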
Finally, you can run the following script to install all dependencies, including PyTorch and Gaussian Splatting submodules:
python install.py
By default, the environment will be installed for CUDA 11.8. If using CUDA 12.1, you can provide the argument --cuda_version 12.1 to install.py. Please note that only CUDA 11.8 has been tested.
If you encounter problems or if the installation script does not work, please follow the detailed installation steps below.
<details> <summary>Click here for detailed installation instructions</summary># For CUDA 11.8
conda install pytorch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1 pytorch-cuda=11.8 mkl=2023.1.0 -c pytorch -c nvidia
# For CUDA 12.1 (The code has only been tested on CUDA 11.8)
conda install pytorch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1 pytorch-cuda=12.1 mkl=2023.1.0 -c pytorch -c nvidia
pip install -r requirements.txt
# Install submodules for Gaussian Splatting, including different rasterizers, aggressive densification, simplification, and utilities
pip install submodules/diff-gaussian-rasterization_ms
pip install submodules/diff-gaussian-rasterization
pip install submodules/diff-gaussian-rasterization_gof
pip install submodules/simple-knn
pip install submodules/fused-ssim
# Delaunay Triangulation from Tetra-Nerf
cd submodules/tetra_triangulation
conda install cmake
conda install conda-forge::gmp
conda install conda-forge::cgal
# You can specify your own cuda path (depending on your CUDA version)
export CPATH=/usr/local/cuda-11.8/targets/x86_64-linux/include:$CPATH
export LD_LIBRARY_PATH=/usr/local/cuda-11.8/targets/x86_64-linux/lib:$LD_LIBRARY_PATH
export PATH=/usr/local/cuda-11.8/bin:$PATH
cmake .
make
pip install -e .
cd ../../
# Nvdiffrast for efficient mesh rasterization
cd ./submodules/nvdiffrast
pip install .
cd ../../
</details>
</details>
2. Training with MILo
<details> <summary>Click here to see content.</summary>First, go to the MILo folder:
cd milo
Then, to optimize a Gaussian Splatting representation with MILo using a COLMAP dataset, you can run the following command:
python train.py \
-s <PATH TO COLMAP DATASET> \
-m <OUTPUT_DIR> \
--imp_metric <"indoor" OR "outdoor"> \
--rasterizer <"radegs" OR "gof">
The main arguments are the following:
| Argument | Values | Default | Description |
|----------|--------|---------|-------------|
| --imp_metric | "indoor" or "outdoor" | Required | Type of scene to optimize. Modifies the importance sampling to better handle indoor or outdoor scenes. |
| --rasterizer | "radegs" or "gof" | "radegs" | Rasterization technique used during training. |
| --dense_gaussians | flag | disabled | Use more Gaussians during training. When active, only a subset of Gaussians will generate pivots for Delaunay triangulation. When inactive, all Gaussians generate pivots.|
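To make the pivot-subset idea concrete, here is a minimal hypothetical sketch; the function name and the top-fraction rule are illustrative assumptions, not MILo's actual selection logic. The idea is that when the Gaussian set is dense, only the highest-importance Gaussians act as generators of Delaunay pivots:

```python
import numpy as np

# Hypothetical sketch (illustrative only): given a per-Gaussian importance
# score, keep the top fraction as pivot generators for the triangulation.
def select_pivot_gaussians(importance, fraction=0.25):
    importance = np.asarray(importance)
    k = max(1, int(len(importance) * fraction))
    # indices of the k Gaussians with the highest importance score
    return np.argsort(importance)[-k:]
```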
You can use a dense set of Gaussians by adding the argument --dense_gaussians:
python train.py \
-s <PATH TO COLMAP DATASET> \
-m <OUTPUT_DIR> \
--imp_metric <"indoor" OR "outdoor"> \
--rasterizer <"radegs" OR "gof"> \
--dense_gaussians \
--data_device cpu
The list of optional arguments is provided below:
| Category | Argument | Values | Default | Description |
|----------|----------|---------|---------|-------------|
| Performance & Logging | --data_device | "cpu" or "cuda" | "cuda" | Forces data to be loaded on CPU (less GPU memory usage, slightly slower training) |
| | --log_interval | integer | - | Log images every N training iterations (e.g., 200) |
| Mesh Configuration | --mesh_config | "default", "highres", "veryhighres", "lowres", "verylowres" | "default" | Config file for mesh resolution and quality |
| Evaluation & Appearance | --eval | flag | disabled | Performs the usual train/test split for evaluation |
| | --decoupled_appearance | flag | disabled | Enables a decoupled appearance model to improve robustness to exposure variations across images |
