Diffusion Posterior Illumination for Ambiguity-aware Inverse Rendering

PyTorch implementation of Diffusion Posterior Illumination for Ambiguity-aware Inverse Rendering.

ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia), 2023

Linjie Lyu<sup>1</sup>, Ayush Tewari<sup>2</sup>, Marc Habermann<sup>1</sup>, Shunsuke Saito<sup>3</sup>, Michael Zollhöfer<sup>3</sup>, Thomas Leimkühler<sup>1</sup>, and Christian Theobalt<sup>1</sup>

<sup>1</sup>Max Planck Institute for Informatics, Saarland Informatics Campus, <sup>2</sup>MIT CSAIL, <sup>3</sup>Reality Labs Research


Installation

```shell
pip install -r requirements.txt
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113
pip install mitsuba
```

Data Preparation

See the hotdog scene for a reference example.

```
image/
    0.exr (or 0.png)
    1.exr (or 1.png)
    ...
scene.xml
camera.xml
```
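As an illustration of this layout, here is a small helper (hypothetical, not part of the repository) that collects the numbered view images from `image/` in index order, accepting either `.exr` or `.png`:

```python
import os
import re

def list_view_images(image_dir):
    """Return paths named 0.exr/0.png, 1.exr/1.png, ... sorted by index."""
    pattern = re.compile(r"^(\d+)\.(exr|png)$")
    matches = []
    for name in os.listdir(image_dir):
        m = pattern.match(name)
        if m:
            matches.append((int(m.group(1)), os.path.join(image_dir, name)))
    # Sort numerically, not lexicographically, so 10.png comes after 2.exr.
    return [path for _, path in sorted(matches)]
```

Note the numeric sort: a plain alphabetical listing would put `10.png` before `2.exr`.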

How to prepare your Mitsuba file?

Geometry

For real-world scenes, you can use a neural SDF reconstruction method to extract the mesh referenced by the Mitsuba XML file.

Camera

Some camera-reader code is provided in camera. You can also load cameras into Blender and then export a camera.xml with the handy Mitsuba Blender Add-on.

For more details, take a look at the Mitsuba 3 documentation.
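Camera conventions are a common stumbling block when authoring camera.xml by hand. Below is a rough numpy sketch of a look-at camera-to-world matrix in what we believe is Mitsuba's convention (the camera looks along +z with the x axis pointing left); verify against the Mitsuba 3 documentation before relying on it:

```python
import numpy as np

# Sketch only: build a look-at camera-to-world matrix. The column layout
# (left, up, forward, origin) reflects our understanding of Mitsuba's
# convention and is an assumption, not code from this repository.
def look_at(origin, target, up):
    origin, target, up = map(np.asarray, (origin, target, up))
    forward = target - origin
    forward = forward / np.linalg.norm(forward)
    left = np.cross(up, forward)
    left = left / np.linalg.norm(left)
    new_up = np.cross(forward, left)
    to_world = np.eye(4)
    to_world[:3, 0] = left      # camera x axis
    to_world[:3, 1] = new_up    # camera y axis
    to_world[:3, 2] = forward   # camera z axis (viewing direction)
    to_world[:3, 3] = origin    # camera position
    return to_world
```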

Training DDPM Model

```shell
mkdir models
```

We use Laval and Streetlearn as environment map datasets. Refer to guided-diffusion for training details or download pre-trained checkpoints to ./models/.

Run Optimization

Here is an example that samples realistic outdoor environment maps, using the hotdog scene as input.

Environment Map Sampling:

```shell
python sample_condition.py \
    --model_config=configs/model_config_outdoor.yaml \
    --diffusion_config=configs/diffusion_config.yaml \
    --task_config=configs/raytracing_config_outdoor.yaml
```

Material Refinement:

```shell
python material_optimization.py --task_config=configs/raytracing_config_outdoor.yaml
```


For indoor scenes, use the indoor configs. The illumi_scale hyperparameter for the indoor config usually lies in the range 1.0 to 10.0.
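The README does not spell out what illumi_scale does; as a rough illustration only, assuming it applies a global intensity scale to the environment-map radiance before rendering (this interpretation and the helper below are our assumptions, not repository code):

```python
import numpy as np

# Assumed semantics: illumi_scale globally scales environment-map radiance.
def scale_environment_map(env_map, illumi_scale):
    return env_map * illumi_scale

env = np.ones((16, 32, 3))                  # toy HxWx3 environment map
indoor = scale_environment_map(env, 5.0)    # indoor configs: roughly 1.0-10.0
```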

Differentiable Renderer Plug-in

If you want to generate natural environment maps with a differentiable rendering method other than Mitsuba 3, simply replace the rendering (forward) and update_material functions in ./guided_diffusion/measurements.py.
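As a schematic of that plug-in point: the names rendering (forward) and update_material come from the text above, but the class, signatures, and toy renderer below are hypothetical illustrations, not the repository's actual interface:

```python
import numpy as np

# Hypothetical sketch of swapping in another differentiable renderer.
# A real implementation would call your renderer here instead of the
# toy albedo-times-envmap model.
class ToyRenderOperator:
    def __init__(self, albedo):
        self.albedo = float(albedo)  # single scalar material parameter

    def rendering(self, env_map):
        """Forward pass: render the scene under the given environment map.
        Toy stand-in: modulate the environment map by a scalar albedo."""
        return self.albedo * env_map

    def update_material(self, env_map, target, lr=0.1):
        """One gradient step refining the material to match the target view."""
        rendered = self.rendering(env_map)
        grad = 2.0 * np.mean((rendered - target) * env_map)  # d(MSE)/d(albedo)
        self.albedo -= lr * grad
        return self.albedo
```

Repeated calls to update_material drive the toy albedo toward the value that reproduces the target image, mirroring the alternation between illumination sampling and material refinement described above.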

Citation

```bibtex
@article{lyu2023dpi,
  title={Diffusion Posterior Illumination for Ambiguity-aware Inverse Rendering},
  author={Lyu, Linjie and Tewari, Ayush and Habermann, Marc and Saito, Shunsuke and Zollh{\"o}fer, Michael and Leimk{\"u}hler, Thomas and Theobalt, Christian},
  journal={ACM Transactions on Graphics},
  volume={42},
  number={6},
  year={2023}
}
```

Acknowledgments

This code is based on the DPS and guided-diffusion codebases.
