Marigold
[CVPR 2024 - Oral, Best Paper Award Candidate] Marigold: Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation
Marigold Computer Vision
This project implements Marigold, a Computer Vision method for estimating image characteristics. Initially proposed for extracting high-resolution depth maps in our CVPR 2024 paper "Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation", we extended the method to other modalities as described in our follow-up paper "Marigold: Affordable Adaptation of Diffusion-Based Image Generators for Image Analysis".
Marigold: Affordable Adaptation of Diffusion-Based Image Generators for Image Analysis
Team: Bingxin Ke, Kevin Qu, Tianfu Wang, Nando Metzger, Shengyu Huang, Bo Li, Anton Obukhov, Konrad Schindler
We present Marigold, a family of conditional generative models and a fine-tuning protocol that extracts the knowledge from pretrained latent diffusion models like Stable Diffusion and adapts them for dense image analysis tasks, including monocular depth estimation, surface normal prediction, and intrinsic decomposition. Marigold requires minimal modification of the pre-trained latent diffusion model's architecture, trains with small synthetic datasets on a single GPU over a few days, and demonstrates state-of-the-art zero-shot generalization.
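Among the modalities above, surface normals are usually stored as per-pixel unit vectors. A common way to inspect such predictions is to map components from [-1, 1] to RGB in [0, 255]; the sketch below uses that generic convention and is an illustration, not Marigold's exact colorization code:

```python
import numpy as np

def normals_to_rgb(normals: np.ndarray) -> np.ndarray:
    """Map per-pixel unit normals in [-1, 1] to uint8 RGB in [0, 255]."""
    # Re-normalize defensively in case of small numeric drift.
    norm = np.clip(np.linalg.norm(normals, axis=-1, keepdims=True), 1e-8, None)
    n = normals / norm
    return ((n + 1.0) * 0.5 * 255.0).round().astype(np.uint8)

# Example: a flat surface facing the camera (normal = (0, 0, 1)),
# which maps to the familiar blue-ish normal-map color.
flat = np.zeros((2, 2, 3))
flat[..., 2] = 1.0
rgb = normals_to_rgb(flat)  # each pixel → [128, 128, 255]
```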

Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation
In CVPR 2024 (Oral, Best Paper Award Candidate)<br> Team: Bingxin Ke, Anton Obukhov, Shengyu Huang, Nando Metzger, Rodrigo Caye Daudt, Konrad Schindler
We present Marigold, a diffusion model, and an associated fine-tuning protocol for monocular depth estimation. Its core principle is to leverage the rich visual knowledge stored in modern generative image models. Our model, derived from Stable Diffusion and fine-tuned with synthetic data, can zero-shot transfer to unseen data, offering state-of-the-art monocular depth estimation results.
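The depth maps produced this way are affine-invariant, i.e. valid up to an unknown global scale and shift, so comparing them against metric ground truth requires a least-squares alignment first. A minimal sketch of such an alignment (an illustration of the idea, not the paper's evaluation code):

```python
import numpy as np

def align_scale_shift(pred: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """Fit s * pred + t ≈ gt in the least-squares sense; return the aligned prediction."""
    A = np.stack([pred.ravel(), np.ones(pred.size)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, gt.ravel(), rcond=None)
    return s * pred + t

# Example: an affine-transformed copy of the ground truth is recovered exactly.
gt = np.linspace(1.0, 10.0, 100)
pred = (gt - 0.5) / 2.0          # "prediction" off by scale 2 and shift 0.5
aligned = align_scale_shift(pred, gt)
```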

📢 News
2025-05-15: Released code and a checkpoint of Marigold Intrinsic Image Decomposition predicting Albedo, diffuse Shading, and non-diffuse Residual (Marigold-IID-Lighting v1.1).<br>
2025-05-15: Released code and a checkpoint of Marigold Intrinsic Image Decomposition predicting Albedo, Roughness, and Metallicity (Marigold-IID-Appearance v1.1).<br>
2025-05-15: Released code and a checkpoint of Marigold Surface Normals Estimation (v1.1).<br>
2025-05-15: Released an updated checkpoint of Marigold Depth (v1.1), trained with updated noise scheduler settings (zero-SNR and trailing timesteps) and augmentations.<br>
2024-05-28: Training code is released.<br>
2024-05-27: Marigold pipelines are merged into the diffusers core starting with the v0.28.0 release!<br>
2024-03-23: Added a Latent Consistency Model (LCM) checkpoint.<br>
2024-03-04: The paper is accepted at CVPR 2024.<br>
2023-12-22: Contributed to Diffusers community pipeline.<br>
2023-12-19: Updated license to Apache License, Version 2.0.<br>
2023-12-08: Added the first interactive Hugging Face Space Demo of depth estimation.<br>
2023-12-05: Added a Google Colab demo.<br>
2023-12-04: Added an arXiv paper and inference code (this repository).
🚀 Usage
We offer several ways to interact with Marigold:
- A family of free online interactive demos: <a href="https://huggingface.co/spaces/prs-eth/marigold"><img src="https://img.shields.io/badge/🤗%20Depth-Demo-yellow" height="16"></a> <a href="https://huggingface.co/spaces/prs-eth/marigold-normals"><img src="https://img.shields.io/badge/🤗%20Normals-Demo-yellow" height="16"></a> <a href="https://huggingface.co/spaces/prs-eth/marigold-iid"><img src="https://img.shields.io/badge/🤗%20Image%20Intrinsics-Demo-yellow" height="16"></a> (kudos to the HF team for the GPU grants)
- Marigold pipelines are part of <a href="https://huggingface.co/docs/diffusers/using-diffusers/marigold_usage"><img src="doc/badges/badge-hfdiffusers.svg" height="16"></a> - a one-stop shop for diffusion 🧨!
- Run the demo locally (requires a GPU and nvidia-docker2; see the Installation Guide):
docker run -it -p 7860:7860 --platform=linux/amd64 --gpus all registry.hf.space/prs-eth-marigold:latest python app.py
- Extended demo on a Google Colab: <a href="https://colab.research.google.com/drive/12G8reD13DdpMie5ZQlaFNo2WCGeNUH-u?usp=sharing"><img src="doc/badges/badge-colab.svg" height="16"></a>
- If you just want to see the examples, visit our gallery: <a href="https://marigoldcomputervision.github.io"><img src="doc/badges/badge-website.svg" height="16"></a>
- Finally, local development instructions for this codebase are given below.
🛠️ Setup
The inference code was tested on:
- Ubuntu 22.04 LTS, Python 3.10.12, CUDA 11.7, GeForce RTX 3090 (pip)
🪧 A Note for Windows users
We recommend running the code in WSL2:
- Install WSL following the installation guide.
- Install CUDA support for WSL following the installation guide.
- Find your drives in /mnt/<drive letter>/; check the WSL FAQ for more details. Navigate to the working directory of your choice.
📦 Repository
Clone the repository (requires git):
git clone https://github.com/prs-eth/Marigold.git
cd Marigold
💻 Dependencies
Install the dependencies:
python -m venv venv/marigold
source venv/marigold/bin/activate
pip install -r requirements.txt
Keep the environment activated while running the inference scripts. Reactivate the environment after restarting the terminal session.
🏃 Testing on your images
📷 Prepare images
Use selected images from our paper:
bash script/download_sample_data.sh
Or place your images in a directory, for example, under input/in-the-wild_example, and run the following inference command.
🚀 Run inference (for practical usage)
# Depth
python script/depth/run.py \
--checkpoint prs-eth/marigold-depth-v1-1 \
--input_rgb_dir input/in-the-wild_example \
--output_dir output/in-the-wild_example \
--fp16
# Normals
python script/normals/run.py \
--checkpoint prs-eth/marigold-normals-v1-1 \
--input_rgb_dir input/in-the-wild_example \
--output_dir output/in-the-wild_example \
--fp16
# IID (appearance model)
python script/iid/run.py \
--checkpoint prs-eth/marigold-iid-appearance-v1-1 \
--input_rgb_dir input/in-the-wild_example \
--output_dir output/in-the-wild_example \
--fp16
# IID (lighting model)
python script/iid/run.py \
--checkpoint prs-eth/marigold-iid-lighting-v1-1 \
--input_rgb_dir input/in-the-wild_example \
--output_dir output/in-the-wild_example \
--fp16
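Assuming the scripts export each prediction as a floating-point array with values in [0, 1] alongside the colorized previews (the exact output layout is not shown in this section), one lossless way to archive a depth map as a standard image is to quantize it to 16 bits. A minimal sketch:

```python
import numpy as np

def depth_to_uint16(depth: np.ndarray) -> np.ndarray:
    """Quantize a depth map in [0, 1] to 16-bit integers for lossless PNG export."""
    d = np.clip(depth, 0.0, 1.0)          # guard against out-of-range values
    return (d * 65535.0).round().astype(np.uint16)

# Example on a synthetic prediction (a real one would be loaded from disk).
depth = np.random.default_rng(0).random((4, 4))
d16 = depth_to_uint16(depth)
```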
⚙️ Inference settings
The default settings are optimized for the best results.
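Because the predictor is generative, the paper aggregates several stochastic predictions into a single ensemble estimate. The sketch below is a deliberate simplification — per-map min-max normalization followed by a pixelwise median — standing in for the paper's joint scale-and-shift optimization:

```python
import numpy as np

def ensemble_depth(preds: np.ndarray) -> np.ndarray:
    """Fuse N affine-invariant depth maps of shape (N, H, W) into one (H, W) estimate."""
    lo = preds.min(axis=(1, 2), keepdims=True)
    hi = preds.max(axis=(1, 2), keepdims=True)
    normed = (preds - lo) / np.clip(hi - lo, 1e-8, None)  # each map to [0, 1]
    return np.median(normed, axis=0)

# Example: three affine-transformed copies of the same underlying depth
# normalize to the same map, so the median recovers it exactly.
rng = np.random.default_rng(0)
base = rng.random((8, 8))
preds = np.stack([2.0 * base + 1.0, 0.5 * base - 3.0, base])
fused = ensemble_depth(preds)
```

Larger ensembles trade inference time for accuracy, which is why the ensemble size is typically exposed as an inference-time setting.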