<div align="center">

LanPaint: Universal Inpainting Sampler with "Think Mode"


</div>

Universally applicable inpainting for every model. The LanPaint sampler lets the model "think" through multiple iterations before denoising, so you can invest more computation time for superior inpainting quality.
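To make the idea concrete, here is a toy NumPy sketch of training-free, mask-conditioned sampling. This is NOT LanPaint's actual algorithm (see the paper for that); it only illustrates the general pattern: at each denoising step the known pixels are re-injected from a noised copy of the original image, and extra inner "thinking" iterations refine the masked region before moving on. The `toy_inpaint` function, the linear noise schedule, and the stand-in "denoiser" pull toward the original image are all illustrative assumptions.

```python
import numpy as np

def toy_inpaint(image, mask, steps=10, think=3, seed=0):
    """Toy mask-conditioned sampler (illustration only, not LanPaint).

    image: float array, the original picture.
    mask:  same shape; 1 = region to inpaint, 0 = region to keep.
    think: inner iterations per step (the "think mode" analogue).
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(image.shape)           # start from pure noise
    for t in range(steps, 0, -1):
        sigma = t / steps                          # linear noise schedule
        for _ in range(think):                     # extra "thinking" passes
            # Re-inject the known region at the current noise level.
            known = image + sigma * rng.standard_normal(image.shape)
            x = mask * x + (1 - mask) * known
            # Stand-in for a learned denoiser: pull x toward the image.
            x = x - 0.5 * sigma * (x - image)
    # Clamp the known region exactly at the end.
    return mask * x + (1 - mask) * image

img = np.ones((8, 8))
m = np.zeros((8, 8))
m[2:6, 2:6] = 1.0                                  # inpaint the center block
out = toy_inpaint(img, m)
print(out.shape)  # (8, 8)
```

In a real diffusion sampler the "pull toward the image" line is replaced by a trained denoising network, and LanPaint's contribution is doing the conditional sampling exactly rather than by this kind of crude re-injection.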

This is the official implementation of "LanPaint: Training-Free Diffusion Inpainting with Asymptotically Exact and Fast Conditional Sampling", accepted by TMLR.

This repository contains the ComfyUI extension.

Diffusers Support: LanPaint-Diffusers by @charrywhite

Benchmark code for reproducing the paper's results: LanPaintBench.

Citation

```bibtex
@article{
  zheng2025lanpaint,
  title={LanPaint: Training-Free Diffusion Inpainting with Asymptotically Exact and Fast Conditional Sampling},
  author={Candi Zheng and Yuan Lan and Yang Wang},
  journal={Transactions on Machine Learning Research},
  issn={2835-8856},
  year={2025},
  url={https://openreview.net/forum?id=JPC8JyOUSW},
  note={}
}
```

🎉 NEW 2026: Join our Discord!

Join our Discord to share experiences, discuss features, and explore future development.

v1.5.0 fixes an important hidden bug that reduced performance and could blur images (especially with Z-Image-Base), and also boosts overall LanPaint performance across other models. If your inpainting results show a weird (glowing / broken) mask boundary, check this issue.

🎬 NEW: LanPaint now supports inpainting and outpainting based on Z-Image!

| Original | Masked | Inpainted |
|:--------:|:------:|:---------:|
| Original Z-image | Masked Z-image | Inpainted Z-image |

🎬 NEW: LanPaint now supports Z-Image-Base too!

| Original | Masked | Inpainted |
|:--------:|:------:|:---------:|
| Original Z-image-base | Masked Z-image-base | Inpainted Z-image-base |

🎬 NEW: LanPaint now supports video inpainting and outpainting based on Wan 2.2!

<div align="center">

| Original Video | Mask (edit T-shirt text) | Inpainted Result |
|:--------------:|:------------------------:|:----------------:|
| Original | Mask | Result |

Video Inpainting Example: 81 frames with temporal consistency

</div>

Check our latest Wan 2.2 Video Examples, Wan 2.2 Image Examples, and Qwen Image Edit 2509 support.


Features

  • Universal Compatibility – Works instantly with almost any model (Z-image, Z-image-base, Hunyuan, Wan 2.2, Qwen Image/Edit, HiDream, SD 3.5, Flux-series, SDXL, SD 1.5, or custom LoRAs) and with ControlNet.
  • No Training Needed – Works out of the box with your existing model.
  • Easy to Use – Same workflow as standard ComfyUI KSampler.
  • Flexible Masking – Supports any mask shape, size, or position for inpainting/outpainting.
  • No Workarounds – Generates 100% new content (no blending or smoothing) without relying on partial denoising.
  • Beyond Inpainting – You can even use it as a simple way to generate consistent characters.

Warning: LanPaint has degraded performance on distilled models such as Flux.dev, due to an issue similar to the one affecting LoRA training on those models. Please use low Flux guidance (1.0-2.0) to mitigate this.

Quickstart

  1. Install ComfyUI: Follow the official ComfyUI installation guide to set up ComfyUI on your system, or ensure your existing ComfyUI version is > 0.3.11.
  2. Install ComfyUI-Manager: Add the ComfyUI-Manager for easy extension management.
  3. Install LanPaint Nodes:
    • Via ComfyUI-Manager: Search for "LanPaint" in the manager and install it directly.
    • Manually: Click "Install via Git URL" in ComfyUI-Manager and input the GitHub repository link:
      https://github.com/scraed/LanPaint.git
      
      Alternatively, clone this repository into the ComfyUI/custom_nodes folder.
  4. Restart ComfyUI: Restart ComfyUI to load the LanPaint nodes.
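The manual install from step 3 can be sketched from a terminal as follows (the `ComfyUI` path is an assumption; adjust it to wherever your ComfyUI checkout lives):

```shell
# Manual install: clone LanPaint into ComfyUI's custom_nodes folder,
# then restart ComfyUI so the new nodes are loaded.
cd ComfyUI/custom_nodes
git clone https://github.com/scraed/LanPaint.git
```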

Once installed, you'll find the LanPaint nodes under the "sampling" category in ComfyUI. Use them just like the default KSampler for high-quality inpainting!

How to Use Examples:

  1. Navigate to an example folder (e.g., example_1) and download all of its pictures.
  2. Drag InPainted_Drag_Me_to_ComfyUI.png into ComfyUI to load the workflow.
  3. Download the required model (e.g., by clicking Model Used in This Example).
  4. Load the model in ComfyUI.
  5. Upload Masked_Load_Me_in_Loader.png to the "Load image" node in the "Mask image for inpainting" group (second from left), or to the Prepare Image node.
  6. Queue the task; you will get inpainted results from LanPaint. Some examples also give you inpainted results from other methods for comparison.

Video Examples (Beta)

LanPaint now supports video inpainting with Wan 2.2, enabling you to seamlessly inpaint masked regions across video frames while maintaining temporal consistency.

Note: LanPaint supports video inpainting for longer sequences (e.g., 81 frames), but processing time increases significantly (please check the Resource Consumption section for details) and performance may become unstable. For optimal results and stability, we recommend limiting video inpainting to 40 frames or fewer.

Wan 2.2 Video Inpainting

Example: Wan2.2 t2v 14B, 480p video (11:6), 40 frames, LanPaint K Sampler, 2 steps of thinking

| Original Video | Mask (Add a white hat) | Inpainted Result |
|:--------------:|:----------------------:|:----------------:|
| Original | Mask | Result |
