# InstantStyle: Free Lunch towards Style-Preserving in Text-to-Image Generation 🔥
Haofan Wang<sup>*</sup> · Matteo Spinelli · Qixun Wang · Xu Bai · Zekui Qin · Anthony Chen
InstantX Team
<sup>*</sup>corresponding author
<a href='https://instantstyle.github.io/'><img src='https://img.shields.io/badge/Project-Page-green'></a>
<a href='https://arxiv.org/abs/2404.02733'><img src='https://img.shields.io/badge/Technique-Report-red'></a>
InstantStyle is a general framework that employs two straightforward yet potent techniques for achieving an effective disentanglement of style and content from reference images.
<!-- <img src='assets/pipe.png'> -->
<div align="center"> <img src='assets/page0.png' width=900 > </div>

## Principle
**Separating Content from Image.** Benefiting from the good characterization of CLIP global features, style and content can be explicitly decoupled by subtracting the content text features from the image features. Although simple, this strategy is quite effective in mitigating content leakage.
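The subtraction itself is just vector arithmetic on the CLIP global embeddings. A minimal plain-Python sketch of the idea (the function name and the toy 3-dimensional vectors below are illustrative only; the actual implementation operates on real CLIP global features):

```python
def subtract_content(image_feat, content_text_feat, scale=1.0):
    """Remove the content direction from a global image feature,
    leaving a feature that mostly encodes style."""
    return [i - scale * t for i, t in zip(image_feat, content_text_feat)]

# toy "embeddings" for illustration only
image_feat = [0.9, 0.4, 0.1]          # CLIP image feature of the style reference
content_text_feat = [0.5, 0.0, 0.1]   # CLIP text feature of the content description
style_feat = subtract_content(image_feat, content_text_feat)
```

The `scale` parameter corresponds to how strongly the content direction is removed; the `neg_content_scale` argument in the usage example below plays the same role.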
<p align="center"> <img src="assets/subtraction.png"> </p>

**Injecting into Style Blocks Only.** Empirically, each layer of a deep network captures different semantic information. The key observation in our work is that there exist two specific attention layers handling style: we find that `up_blocks.0.attentions.1` captures style (color, material, atmosphere) and `down_blocks.2.attentions.1` captures spatial layout (structure, composition).
<p align="center"> <img src="assets/tree.png"> </p>

## Release
- [2024/07/06] 🔥 We release CSGO page for content-style composition. Code will be released soon.
- [2024/07/01] 🔥 We release InstantStyle-Plus report for content preserving.
- [2024/04/29] 🔥 We support InstantStyle natively in diffusers, usage can be found here
- [2024/04/24] 🔥 InstantStyle for fast generation, find demos at InstantStyle-SDXL-Lightning and InstantStyle-Hyper-SDXL.
- [2024/04/24] 🔥 We support HiDiffusion for generating highres images, find more information here.
- [2024/04/23] 🔥 InstantStyle has been natively supported in diffusers, more information can be found here.
- [2024/04/20] 🔥 InstantStyle is supported in Mikubill/sd-webui-controlnet.
- [2024/04/11] 🔥 We add the experimental distributed inference feature. Check it here.
- [2024/04/10] 🔥 We support an online demo on ModelScope.
- [2024/04/09] 🔥 We support an online demo on Huggingface.
- [2024/04/09] 🔥 We support SDXL-inpainting, more information can be found here.
- [2024/04/08] 🔥 InstantStyle is supported in AnyV2V for stylized video-to-video editing, demo can be found here.
- [2024/04/07] 🔥 We support image-based stylization, more information can be found here.
- [2024/04/07] 🔥 We support an experimental version for SD1.5, more information can be found here.
- [2024/04/03] 🔥 InstantStyle is supported in ComfyUI_IPAdapter_plus developed by our co-author.
- [2024/04/03] 🔥 We release the technical report.
## Demos

### Stylized Synthesis
<p align="center"> <img src="assets/example1.png"> <img src="assets/example2.png"> </p>

### Image-based Stylized Synthesis
<p align="center"> <img src="assets/example3.png"> </p>

### Comparison with Previous Works
<p align="center"> <img src="assets/comparison.png"> </p>

## Download
Follow IP-Adapter to download pre-trained checkpoints from here.
```shell
git clone https://github.com/InstantStyle/InstantStyle.git
cd InstantStyle

# download the models
git lfs install
git clone https://huggingface.co/h94/IP-Adapter
mv IP-Adapter/models models
mv IP-Adapter/sdxl_models sdxl_models
```
## Usage
Our method is fully compatible with IP-Adapter. Note that feature subtraction only works on the global image feature, not on patch features. For SD1.5, you can find a demo at infer_style_sd15.py, but we find that SD1.5 has a weaker perception and understanding of style information, so that demo is experimental only. All block names can be found in attn_blocks.py and attn_blocks_sd15.py for SDXL and SD1.5 respectively.
```python
import torch
from diffusers import StableDiffusionXLPipeline
from PIL import Image

from ip_adapter import IPAdapterXL

base_model_path = "stabilityai/stable-diffusion-xl-base-1.0"
image_encoder_path = "sdxl_models/image_encoder"
ip_ckpt = "sdxl_models/ip-adapter_sdxl.bin"
device = "cuda"

# load SDXL pipeline
pipe = StableDiffusionXLPipeline.from_pretrained(
    base_model_path,
    torch_dtype=torch.float16,
    add_watermarker=False,
)

# reduce memory consumption
pipe.enable_vae_tiling()

# load ip-adapter
# target_blocks=["block"] for original IP-Adapter
# target_blocks=["up_blocks.0.attentions.1"] for style blocks only
# target_blocks=["up_blocks.0.attentions.1", "down_blocks.2.attentions.1"] for style+layout blocks
ip_model = IPAdapterXL(pipe, image_encoder_path, ip_ckpt, device, target_blocks=["up_blocks.0.attentions.1"])

image = Image.open("./assets/0.jpg")
image = image.resize((512, 512))  # resize returns a new image; keep the result

# generate image variations with only image prompt
images = ip_model.generate(
    pil_image=image,
    prompt="a cat, masterpiece, best quality, high quality",
    negative_prompt="text, watermark, lowres, low quality, worst quality, deformed, glitch, low contrast, noisy, saturation, blurry",
    scale=1.0,
    guidance_scale=5,
    num_samples=1,
    num_inference_steps=30,
    seed=42,
    # neg_content_prompt="a rabbit",
    # neg_content_scale=0.5,
)

images[0].save("result.png")
```
## Use in diffusers
InstantStyle has already been integrated into diffusers (please make sure that you have installed diffusers>=0.28.0.dev0), making the usage significantly simpler. You can now control the per-transformer behavior of each IP-Adapter with the set_ip_adapter_scale() method, using a configuration dictionary as shown below:
```python
import torch
from diffusers import StableDiffusionXLPipeline
from PIL import Image

# load SDXL pipeline
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    add_watermarker=False,
)

# load ip-adapter
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin")
pipe.enable_vae_tiling()

# configure ip-adapter scales
scale = {
    "down": {"block_2": [0.0, 1.0]},
    "up": {"block_0": [0.0, 1.0, 0.0]},
}
pipe.set_ip_adapter_scale(scale)
```
In this example, we set scale=1.0 for the IP-Adapter in the second transformer of down-part block 2, and in the second transformer of up-part block 0. Note that down-part block 2 contains two transformers, so the list has length 2; the same applies to up-part block 0. Every other IP-Adapter layer gets a scale of zero, which disables it.
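To sanity-check which transformer slots a given configuration enables, you can flatten the dict in plain Python (this is only an illustration of the dict's semantics, not a diffusers API):

```python
scale = {
    "down": {"block_2": [0.0, 1.0]},
    "up": {"block_0": [0.0, 1.0, 0.0]},
}

# list every (part, block, transformer index) whose IP-Adapter scale is non-zero
enabled = [
    (part, block, idx)
    for part, blocks in scale.items()
    for block, weights in blocks.items()
    for idx, w in enumerate(weights)
    if w > 0.0
]
print(enabled)  # [('down', 'block_2', 1), ('up', 'block_0', 1)]
```

Each entry in `enabled` is one transformer where the IP-Adapter is active; everything else is effectively turned off.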
With set_ip_adapter_scale(), we can now reconfigure the IP-Adapter without reloading it every time we want to test a different behavior.
```python
# for original IP-Adapter
scale = 1.0
pipe.set_ip_adapter_scale(scale)

# for style blocks only
scale = {
    "up": {"block_0": [0.0, 1.0, 0.0]},
}
pipe.set_ip_adapter_scale(scale)
```
### Multiple IP-Adapter images with masks
You can also load multiple IP-Adapters, together with multiple IP-Adapter images and masks, for more precise layout control, just as in [IP-Adapter](https://huggingface.co/docs/diffusers/main/en/using-diffusers/ip_adapter#ip-ad