WildLight: In-the-wild Inverse Rendering with a Flashlight (CVPR 2023)

Official implementation of our CVPR 2023 paper "In-the-wild Inverse Rendering with a Flashlight".

[Project page] | [arXiv]
Dependencies
Conda is recommended for installing all dependencies:

```
conda env create -f environ.yaml
conda activate wildlight
```
Data convention
Input data is organized in a single folder. Images are saved as exr/png files following the NeuS convention, or packed into a single npy file:

```
<case_name>
|-- cameras_sphere.npz    # camera & lighting parameters
|-- images.npy            # all images packed in a single file
Or
|-- image
    |-- 000.exr           # target image for each view, in exr or png format
    |-- 001.exr
    ...
Or
    |-- 000.png
    |-- 001.png
    ...
|-- mask [optional]
    |-- 000.png           # target mask for each view, if available
    |-- 001.png
    ...
```
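As an illustration of the layout above, a small script can sanity-check a case folder before training. The helper below is hypothetical and not part of the WildLight repo; it only encodes the rule that a case folder needs `cameras_sphere.npz` plus either `images.npy` or an `image/` directory of exr/png files.

```python
import os
import tempfile

def check_case_folder(root):
    # Hypothetical layout check (not part of the WildLight repo):
    # require cameras_sphere.npz, plus either images.npy or an
    # image/ directory containing exr/png files.
    has_cams = os.path.isfile(os.path.join(root, "cameras_sphere.npz"))
    has_npy = os.path.isfile(os.path.join(root, "images.npy"))
    img_dir = os.path.join(root, "image")
    imgs = os.listdir(img_dir) if os.path.isdir(img_dir) else []
    has_imgs = any(f.endswith((".exr", ".png")) for f in imgs)
    return has_cams and (has_npy or has_imgs)

# Demo on a synthetic folder with empty placeholder files:
case = tempfile.mkdtemp()
open(os.path.join(case, "cameras_sphere.npz"), "wb").close()
os.mkdir(os.path.join(case, "image"))
open(os.path.join(case, "image", "000.png"), "wb").close()
print(check_case_folder(case))
```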
Camera and lighting parameters are stored in cameras_sphere.npz with the following keys:

- `world_mat_x`: $K_x[R_x|T_x]$ projection matrix from world coordinates to image coordinates.
- `scale_mat_x`: Sim(3) transformation matrix from object coordinates to world coordinates; we only recover shape & material inside a unit-sphere ROI in object coordinates. This matrix is usually static across all views.
- `light_energy_x`: an RGB vector for the flashlight intensity of each view. With a fixed-power flashlight, this is set to $(1,1,1)$ for images taken under the flashlight, and $(0,0,0)$ for images without it.
- `max_intensity`: [optional] a scalar giving the maximum pixel intensity (e.g. 255 for 8-bit images); defaults to inf.
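A minimal sketch of writing such an archive for a two-view capture follows. The key names are taken from the convention above; the intrinsics/extrinsics are illustrative placeholders, not real calibration data.

```python
import numpy as np

# Placeholder calibration (not real data): one intrinsic matrix K and an
# identity extrinsic [R|T] shared by both views for illustration only.
K = np.array([[800.0, 0.0, 256.0],
              [0.0, 800.0, 256.0],
              [0.0, 0.0, 1.0]])
Rt = np.hstack([np.eye(3), np.zeros((3, 1))])   # [R|T], world -> camera
world_mat = np.eye(4)
world_mat[:3, :4] = K @ Rt                       # K[R|T] projection
scale_mat = np.eye(4)                            # object -> world (Sim(3))

np.savez(
    "cameras_sphere.npz",
    world_mat_0=world_mat, scale_mat_0=scale_mat,
    light_energy_0=np.array([1.0, 1.0, 1.0]),    # view 0: flashlight on
    world_mat_1=world_mat, scale_mat_1=scale_mat,
    light_energy_1=np.array([0.0, 0.0, 0.0]),    # view 1: flashlight off
    max_intensity=np.array(255.0),               # optional, 8-bit images
)

# Reading it back:
params = np.load("cameras_sphere.npz")
print(sorted(params.files))
```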
Config
Model and training parameters are written into config files under confs/*.conf. We provide three configurations for our datasets: confs/synthetic.conf and confs/synthetic_maskless.conf for our synthetic data, and confs/real.conf for real data.
Running
- Train. Run the following line to download and train on the synthetic `legocar` object dataset. We provide a total of 7 objects: `bunny`, `armadillo`, `legocar`, `plant` (synthetic, with ground truth) and `bulldozer`, `cokecan` and `face` (real scenes, images only).

```
python exp_runner.py --case legocar --conf confs/synthetic.conf --mode train --download_dataset
```

Intermediate results can be found under the `exp/legocar/masked/` folder.

- Mesh and texture export. This exports a UV-unwrapped OBJ file along with PBR texture maps from the last checkpoint, under `exp/legocar/masked/meshes/XXXXXXXX_export` (this might take a few minutes).

```
python exp_runner.py --case legocar --conf confs/synthetic.conf --mode validate_mesh --is_continue
```

- Validate novel view rendering. A `dataset_val` must be provided in the config. Results will be saved to `exp/legocar/masked/novel_view/`.

```
python exp_runner.py --case legocar --conf confs/synthetic.conf --mode validate_image --is_continue
```
Results (rendered in Blender)
https://user-images.githubusercontent.com/57708879/232410072-43d74df8-9438-4fc8-b302-0cd2c7f659ed.mp4
Acknowledgement
This repo is heavily built upon NeuS. We would like to thank the authors for open-sourcing their code. Special thanks go to @wei-mao-2019, a friend and fellow researcher who agreed to appear in our dataset.
BibTeX

```
@article{cheng2023wildlight,
  title={WildLight: In-the-wild Inverse Rendering with a Flashlight},
  author={Cheng, Ziang and Li, Junxuan and Li, Hongdong},
  journal={arXiv preprint arXiv:2303.14190},
  year={2023}
}
```