# DepthCrafter: Generating Consistent Long Depth Sequences for Open-world Videos
<div align="center"> <img src='https://depthcrafter.github.io/img/logo.png' style="height:140px"></img>
<a href='https://arxiv.org/abs/2409.02095'><img src='https://img.shields.io/badge/arXiv-2409.02095-b31b1b.svg'></a>
<a href='https://depthcrafter.github.io'><img src='https://img.shields.io/badge/Project-Page-Green'></a>
<a href='https://huggingface.co/spaces/tencent/DepthCrafter'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Demo-blue'></a>
Wenbo Hu<sup>1* †</sup>, Xiangjun Gao<sup>2*</sup>, Xiaoyu Li<sup>1* †</sup>, Sijie Zhao<sup>1</sup>, Xiaodong Cun<sup>1</sup>, <br> Yong Zhang<sup>1</sup>, Long Quan<sup>2</sup>, Ying Shan<sup>3, 1</sup> <br><br> <sup>1</sup>Tencent AI Lab <sup>2</sup>The Hong Kong University of Science and Technology <sup>3</sup>ARC Lab, Tencent PCG
CVPR 2025, Highlight
</div>

## 🔆 Notice
DepthCrafter is still under active development!
We recommend that everyone use English to communicate on issues, as this helps developers from around the world discuss, share experiences, and answer questions together.
For business licensing and other related inquiries, don't hesitate to contact wbhu@tencent.com.
## 🔆 Introduction
🤗 If you find DepthCrafter useful, please help ⭐ this repo, which is important to Open-Source projects. Thanks!
🔥 DepthCrafter can generate temporally consistent long-depth sequences with fine-grained details for open-world videos, without requiring additional information such as camera poses or optical flow.
- [25-12-01] Refactored the codebase for better usability and extensibility.
- [25-04-05] 🔥🔥🔥 Its upgraded work, GeometryCrafter, is released now, for video to point cloud!
- [25-04-05] 🎉🎉🎉 DepthCrafter is selected as a Highlight in CVPR'25.
- [24-12-10] 🌟🌟🌟 EXR output format is supported now, with the `--save_exr` option.
- [24-11-26] 🚀🚀🚀 DepthCrafter v1.0.1 is released now, with improved quality and speed.
- [24-10-19] 🤗🤗🤗 DepthCrafter has been integrated into ComfyUI!
- [24-10-08] 🤗🤗🤗 DepthCrafter has been integrated into Nuke, have a try!
- [24-09-28] Added full dataset inference and evaluation scripts for easier comparison. :-)
- [24-09-25] 🤗🤗🤗 Added the Hugging Face online demo DepthCrafter.
- [24-09-19] Added scripts for preparing benchmark datasets.
- [24-09-18] Added point cloud sequence visualization.
- [24-09-14] 🔥🔥🔥 DepthCrafter is released now, have fun!
## 📦 Release Notes
- DepthCrafter v1.0.1:
- Quality and speed improvement <table> <thead> <tr> <th>Method</th> <th>ms/frame↓ @1024×576 </th> <th colspan="2">Sintel (~50 frames)</th> <th colspan="2">Scannet (90 frames)</th> <th colspan="2">KITTI (110 frames)</th> <th colspan="2">Bonn (110 frames)</th> </tr> <tr> <th></th> <th></th> <th>AbsRel↓</th> <th>δ₁ ↑</th> <th>AbsRel↓</th> <th>δ₁ ↑</th> <th>AbsRel↓</th> <th>δ₁ ↑</th> <th>AbsRel↓</th> <th>δ₁ ↑</th> </tr> </thead> <tbody> <tr> <td>Marigold</td> <td>1070.29</td> <td>0.532</td> <td>0.515</td> <td>0.166</td> <td>0.769</td> <td>0.149</td> <td>0.796</td> <td>0.091</td> <td>0.931</td> </tr> <tr> <td>Depth-Anything-V2</td> <td><strong>180.46</strong></td> <td>0.367</td> <td>0.554</td> <td>0.135</td> <td>0.822</td> <td>0.140</td> <td>0.804</td> <td>0.106</td> <td>0.921</td> </tr> <tr> <td>DepthCrafter previous</td> <td>1913.92</td> <td><u>0.292</u></td> <td><strong>0.697</strong></td> <td><u>0.125</u></td> <td><u>0.848</u></td> <td><u>0.110</u></td> <td><u>0.881</u></td> <td><u>0.075</u></td> <td><u>0.971</u></td> </tr> <tr> <td>DepthCrafter v1.0.1</td> <td><u>465.84</u></td> <td><strong>0.270</strong></td> <td><strong>0.697</strong></td> <td><strong>0.123</strong></td> <td><strong>0.856</strong></td> <td><strong>0.104</strong></td> <td><strong>0.896</strong></td> <td><strong>0.071</strong></td> <td><strong>0.972</strong></td> </tr> </tbody> </table>
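For reference, the AbsRel and δ₁ columns in the table follow the standard monocular depth metric definitions. Below is a minimal sketch of the raw formulas; the official scripts in `benchmark/eval` additionally handle alignment and masking, so treat this only as an illustration, not the repo's evaluation code:

```python
import numpy as np

def abs_rel(pred, gt, mask=None):
    """Mean absolute relative error: mean(|pred - gt| / gt) over valid pixels."""
    pred, gt = np.asarray(pred, dtype=np.float64), np.asarray(gt, dtype=np.float64)
    if mask is None:
        mask = gt > 0  # ignore invalid (zero) ground-truth depth
    return float(np.mean(np.abs(pred[mask] - gt[mask]) / gt[mask]))

def delta1(pred, gt, mask=None, thresh=1.25):
    """Fraction of pixels with max(pred/gt, gt/pred) below 1.25."""
    pred, gt = np.asarray(pred, dtype=np.float64), np.asarray(gt, dtype=np.float64)
    if mask is None:
        mask = gt > 0
    ratio = np.maximum(pred[mask] / gt[mask], gt[mask] / pred[mask])
    return float(np.mean(ratio < thresh))
```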
## 🎥 Visualization
We provide demos of unprojected point cloud sequences, with reference RGB and estimated depth videos. For more details, please refer to our project page.
https://github.com/user-attachments/assets/62141cc8-04d0-458f-9558-fe50bc04cc21
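The unprojection behind such point cloud visualizations can be sketched with the standard pinhole camera model. Note the assumptions: this needs metric-like depth and known intrinsics (`fx`, `fy`, `cx`, `cy`), which DepthCrafter's relative depth does not provide by itself, so it is an illustration of the geometry rather than the project's visualization code:

```python
import numpy as np

def unproject_depth(depth, fx, fy, cx, cy):
    """Back-project a depth map (H, W) to camera-space points (H*W, 3)
    via the pinhole model: X = (u - cx) * z / fx, Y = (v - cy) * z / fy, Z = z."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```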
## 🚀 Quick Start

### 🤖 Gradio Demo

- Online demo: [DepthCrafter](https://huggingface.co/spaces/tencent/DepthCrafter)
- Local demo:

  ```bash
  gradio app.py
  ```
## 🌟 Community Support

- NukeDepthCrafter: a plugin that lets you generate temporally consistent depth sequences inside Nuke, which is widely used in the VFX industry.
- ComfyUI-Nodes: create consistent depth maps for your videos with DepthCrafter in ComfyUI.
## 🛠️ Installation

- Clone this repo:

  ```bash
  git clone https://github.com/Tencent/DepthCrafter.git
  ```

- Install dependencies:

  ```bash
  cd DepthCrafter
  uv venv
  source .venv/bin/activate
  uv sync
  uv pip list  # verify the installed packages
  ```
## 🤗 Model Zoo
DepthCrafter is available in the Hugging Face Model Hub.
## 🏃‍♂️ Inference

1. High-resolution inference (requires a GPU with ~26GB memory at 1024×576 resolution; ~2.1 fps on an A100, recommended for high-quality results):

   ```bash
   python run.py --video-path examples/example_01.mp4
   ```

2. Low-resolution inference (requires a GPU with ~9GB memory at 512×256 resolution; ~8.6 fps on an A100):

   ```bash
   python run.py --video-path examples/example_01.mp4 --max-res 512
   ```
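The predicted depth has no fixed metric scale, so a common way to inspect results is per-video min-max normalization before writing frames to disk. A minimal sketch (the function name is illustrative, not part of the repo's API):

```python
import numpy as np

def depth_to_uint8(depth, eps=1e-6):
    """Normalize a relative depth map to [0, 255] uint8 for visualization.
    Min-max normalization over the whole input keeps frames comparable
    when applied to an entire video at once."""
    d = np.asarray(depth, dtype=np.float64)
    d = (d - d.min()) / max(d.max() - d.min(), eps)  # eps guards constant input
    return (d * 255.0).round().astype(np.uint8)
```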
## 🚀 Dataset Evaluation

Please check the `benchmark` folder.

- To create the datasets we use in the paper, run `dataset_extract/dataset_extract_${dataset_name}.py`. This produces csv files that record the relative roots of the extracted RGB videos and depth npz files. We also provide these csv files.
- Inference for all datasets:

  ```bash
  bash benchmark/infer/infer.sh
  ```

  (Remember to replace `input_rgb_root` and `saved_root` with your paths.)
- Evaluation for all datasets:

  ```bash
  bash benchmark/eval/eval.sh
  ```

  (Remember to replace `pred_disp_root` and `gt_disp_root` with your paths.)
## 🤝🍻 Contributing

- Issues and pull requests are welcome.
- Contributions that optimize inference speed and memory usage, e.g., through model quantization, distillation, or other acceleration techniques, are especially welcome.
### Contributors
<a href="https://github.com/Tencent/DepthCrafter/graphs/contributors"> <img src="https://contrib.rocks/image?repo=Tencent/DepthCrafter" /> </a>
## 🧪 Testing

We provide unit tests to help ensure code quality and reliability.

### Running Tests

- Run all tests:

  ```bash
  pytest unit_tests/
  ```

- Run tests with verbose output:

  ```bash
  pytest unit_tests/ -v
  ```

- Run a specific test file:

  ```bash
  pytest unit_tests/test_depth_crafter_ppl.py
  ```
### Test Structure

- `unit_tests/test_depth_crafter_ppl.py`: tests for the main depth estimation pipeline
- `unit_tests/test_inference.py`: tests for the inference interface
- `unit_tests/test_utils.py`: tests for utility functions
- `unit_tests/test_unet.py`: tests for the UNet model
### Requirements

- A GPU with CUDA support is required for `test_pipeline_gpu_integration`
- Tests use small tensor sizes to minimize memory usage
- All heavy computations are mocked for fast execution
## Star History
## 📜 Citation

If you find this work helpful, please consider citing:

```bibtex
@inproceedings{hu2025-DepthCrafter,
  author    = {Hu, Wenbo and Gao, Xiangjun and Li, Xiaoyu and Zhao, Sijie and Cun, Xiaodong and Zhang, Yong and Quan, Long and Shan, Ying},
  title     = {DepthCrafter: Generating Consistent Long Depth Sequences for Open-world Videos},
  booktitle = {CVPR},
  year      = {2025}
}
```