# Material Palette: Extraction of Materials from a Single Image (CVPR 2024)
<div> <a href="https://wonjunior.github.io/">Ivan Lopes</a><sup>1</sup> <a href="https://fabvio.github.io/">Fabio Pizzati</a><sup>2</sup> <a href="https://team.inria.fr/rits/membres/raoul-de-charette/">Raoul de Charette</a><sup>1</sup> <br> <sup>1</sup> Inria, <sup>2</sup> Oxford Uni. </div> <br> <!--[](https://arxiv.org/abs/2311.17060)--><b>TL;DR,</b> Material Palette extracts a palette of PBR materials - <br>albedo, normals, and roughness - from a single real-world image.
https://github.com/astra-vision/MaterialPalette/assets/30524163/44e45e58-7c7d-49a3-8b6e-ec6b99cf9c62
- Overview
- 1. Installation
- 2. Quick Start
- 3. Project Structure
- 4. (optional) Retraining
- Acknowledgments
- Licence
## Overview
This is the official repository of Material Palette. In a nutshell, the method works in three stages: first, concepts are extracted from an input image based on a user-provided mask; then, those concepts are used to generate texture images; finally, the generations are decomposed into SVBRDF maps (albedo, normals, and roughness). Visit our project page or consult our paper for more details!
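The three stages above can be sketched as a simple data flow. The function names below are hypothetical placeholders for illustration only, not the repository's API:

```python
# Illustrative sketch of the Material Palette pipeline (hypothetical names,
# not the repository's actual API): image + mask -> concept -> textures -> maps.

def extract_concept(image, mask):
    """Stage 1: learn a texture concept (e.g. a LoRA) from the masked region."""
    return {"token": "S*", "source": (image, mask)}

def generate_textures(concept, resolution=1024):
    """Stage 2: synthesize texture images from the learned concept."""
    return [f"texture_{concept['token']}_{resolution}px"]

def decompose(texture):
    """Stage 3: decompose a generated texture into SVBRDF maps."""
    return {"albedo": texture, "normals": texture, "roughness": texture}

# Chain the three stages on one (image, mask) pair.
maps = [decompose(t)
        for t in generate_textures(extract_concept("photo.png", "mask.png"))]
```

Each generated texture yields one set of albedo, normals, and roughness maps.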
Content: This repository allows the extraction of texture concepts from images and their associated region masks. It also supports generation at different resolutions. Finally, it provides a decomposition step using our decomposition model, for which we share the trained weights.
> [!TIP]
> We provide a "Quick Start" section: before diving straight into the full pipeline, we share four pretrained concepts ⚡ so you can go ahead and experiment with the texture generation step of the method: see "§ Generation". Then you can try out the full method on your own image and masks (concept learning + generation + decomposition): see "§ Complete Pipeline".
## 1. Installation
1. Download the source code with git:

   ```sh
   git clone https://github.com/astra-vision/MaterialPalette.git
   ```

   The repo can also be downloaded as a zip here.

2. Create a conda environment with the dependencies:

   ```sh
   conda env create --verbose -f deps.yml
   ```

   This repo was tested with Python 3.10.8, PyTorch 1.13, diffusers 0.19.3, peft 0.5, and PyTorch Lightning 1.8.3.

3. Load the conda environment:

   ```sh
   conda activate matpal
   ```

4. If you are looking to perform decomposition, download our pre-trained model and untar the archive:

   ```sh
   wget https://github.com/astra-vision/MaterialPalette/releases/download/weights/model.tar.gz
   tar -xzf model.tar.gz
   ```

   <sup>This is not required if you are only looking to perform texture extraction.</sup>
## 2. Quick start
Here are instructions to get you started using Material Palette. First, we provide some optimized concepts so you can experiment with the generation pipeline. We then show how to run the method on user-selected images and masks (concept learning + generation + decomposition).
### § Generation
| Input image | 1K | 2K | 4K | 8K | ⬇️ LoRA ~8Kb |
| :-: | :-: | :-: | :-: | :-: | :-: |
| <img src="https://github.com/astra-vision/MaterialPalette/assets/30524163/ba3126d7-ce54-4895-8d59-93f1fd22e7d6" alt="J" width="100"/> | <img src="https://github.com/astra-vision/MaterialPalette/assets/30524163/e1ec9c9e-d618-4314-82a3-2ac2432af668" alt="J" width="100"/> | <img src="https://github.com/astra-vision/MaterialPalette/assets/30524163/d960a216-5558-4375-9bf2-5a648221aa55" alt="J" width="100"/> | <img src="https://github.com/astra-vision/MaterialPalette/assets/30524163/45ad2ca9-8be7-48ba-b368-5528ae021627" alt="J" width="100"/> | <img src="https://github.com/astra-vision/MaterialPalette/assets/30524163/c9140b16-a59f-4898-b49f-5c3635a3ea85" alt="J" width="100"/> |
| <img src="https://github.com/astra-vision/MaterialPalette/assets/30524163/f5838959-aeeb-417a-8030-0fab5e39443b" alt="J" width="100"/> | <img src="https://github.com/astra-vision/MaterialPalette/assets/30524163/4b756fae-3ea6-4d40-b4e6-0a8c50674e14" alt="J" width="100"/> | <img src="https://github.com/astra-vision/MaterialPalette/assets/30524163/91aefd19-0985-4b84-81a2-152eb16b87e0" alt="J" width="100"/> | <img src="https://github.com/astra-vision/MaterialPalette/assets/30524163/c9547e54-7bac-4f3d-8d94-acafd61847d9" alt="J" width="100"/> | <img src="https://github.com/astra-vision/MaterialPalette/assets/30524163/069d639b-71bc-4f67-a735-a3b44d7bc683" alt="J" width="100"/> |
| <img src="https://github.com/astra-vision/MaterialPalette/assets/30524163/b16bc25f-e5c5-45ad-bf3b-ef28cb57ed30" alt="J" width="100"/> | <img src="https://github.com/astra-vision/MaterialPalette/assets/30524163/0ae31915-7bc5-4177-8b84-6988cccc2c24" alt="J" width="100"/> | <img src="https://github.com/astra-vision/MaterialPalette/assets/30524163/e501c66d-a5b7-42e4-9ec2-0a12898280ed" alt="J" width="100"/> | <img src="https://github.com/astra-vision/MaterialPalette/assets/30524163/290b685a-554c-4c62-ab0d-9d66a2945f09" alt="J" width="100"/> | <img src="https://github.com/astra-vision/MaterialPalette/assets/30524163/378be48d-61e5-4a8a-b2cd-1002aec541bf" alt="J" width="100"/> |
| <img src="https://github.com/astra-vision/MaterialPalette/assets/30524163/3c69d0c0-d91a-4d19-b0c0-b9dceb4477cf" alt="J" width="100"/> | <img src="https://github.com/astra-vision/MaterialPalette/assets/30524163/ec6c62ea-00f7-4284-8cc3-6604159a3b5f" alt="J" width="100"/> | <img src="https://github.com/astra-vision/MaterialPalette/assets/30524163/26c6ad3d-2306-4ad3-97a7-6713d5f4e5ee" alt="J" width="100"/> | <img src="https://github.com/astra-vision/MaterialPalette/assets/30524163/94f7caa1-3ade-4b62-b0c6-b758a3a05d3f" alt="J" width="100"/> | <img src="https://github.com/astra-vision/MaterialPalette/assets/30524163/36630e65-9a2f-4a77-bb1b-0214d5f1b6f9" alt="J" width="100"/> |
<sup>All generations were downscaled for memory constraints.</sup>
Go ahead and download one of the above LoRA concept checkpoints, for example "blue_tiles":
```sh
wget https://github.com/astra-vision/MaterialPalette/files/14601640/blue_tiles.zip
unzip blue_tiles.zip
```
To generate from a checkpoint, use the concept module, either via the command-line interface:

```sh
python concept/infer.py path/to/LoRA/checkpoint
```

or the functional interface in Python:

```python
import concept
concept.infer(path_to_LoRA_checkpoint)
```
Results will be placed in an `outputs` folder, relative to the checkpoint directory.
You have control over the following parameters:
- `stitch_mode`: concatenation, average, or weighted average (default);
- `resolution`: the output resolution of the generated texture;
- `prompt`: one of the four prompt templates:
  - `"p1"`: `"top view realistic texture of S*"`,
  - `"p2"`: `"top view realistic S* texture"`,
  - `"p3"`: `"high resolution realistic S* texture in top view"`,
  - `"p4"`: `"realistic S* texture in top view"`;
- `seed`: inference seed when sampling noise;
- `renorm`: whether or not to renormalize the generated samples based on the input image (this option can only be used when called from inside the pipeline, i.e. when the input image is available);
- `num_inference_steps`: number of denoising steps.
<sup>A complete list of parameters can be viewed with `python concept/infer.py --help`.</sup>
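To illustrate what the `stitch_mode` options refer to, here is a self-contained 1-D sketch of blending overlapping generated tiles with a weighted average. This is an illustration of the general technique, not the repository's code:

```python
# Illustrative 1-D weighted-average stitching of overlapping tiles
# (not the repository's implementation).

def stitch_weighted(patches, step):
    """Blend overlapping fixed-size patches placed `step` apart.
    Weights fall off toward patch borders, hiding seams in the overlaps."""
    size = len(patches[0])
    length = step * (len(patches) - 1) + size
    acc = [0.0] * length   # weighted sum of contributions per pixel
    wsum = [0.0] * length  # total weight per pixel
    for i, patch in enumerate(patches):
        for j, value in enumerate(patch):
            # triangular weight: largest at the patch center, smallest at edges
            w = min(j + 1, size - j)
            acc[i * step + j] += w * value
            wsum[i * step + j] += w
    return [a / w for a, w in zip(acc, wsum)]

# Two overlapping 4-pixel tiles, offset by 2: values ramp smoothly from 1 to 3
# across the overlap instead of jumping at a seam (as concatenation would).
out = stitch_weighted([[1, 1, 1, 1], [3, 3, 3, 3]], step=2)
```

Plain averaging would weight every overlapping pixel equally; concatenation keeps only one tile per pixel, which can leave visible seams.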
