# Diffusion-Based Scene Graph to Image Generation with Masked Contrastive Pre-Training

<a href="https://arxiv.org/abs/2211.11138"><img src="https://img.shields.io/badge/arXiv-2211.11138-blue.svg" height=22.5></a>

Official implementation for "Diffusion-Based Scene Graph to Image Generation with Masked Contrastive Pre-Training".
🚩 **New Updates**: We release LAION-SG, a large-scale dataset with high-quality structural annotations of scene graphs (SGs) that precisely describe the attributes and relationships of multiple objects, effectively representing the semantic structure of complex scenes. Based on LAION-SG, we also provide a new foundation model, SDXL-SG, which incorporates structural annotation information into the generation process.
## Overview of the Proposed SGDiff

<div align=center><img width="850" alt="image" src="https://user-images.githubusercontent.com/62683396/202852210-d91d6a63-f04d-4a02-ae5f-55f00f8c1ec5.png"></div>

## Environment
```bash
git clone https://github.com/YangLing0818/SGDiff.git
cd SGDiff
conda env create -f sgdiff.yaml
conda activate sgdiff
mkdir pretrained
```
## Data and Model Preparation

The instructions for data pre-processing can be found here.

Our masked contrastive pre-trained models of SG-image pairs for the COCO and VG datasets are provided here; please download them and put them in the 'pretrained' directory.

The pretrained VQ-VAE used to encode images into the latent space can be obtained from https://ommer-lab.com/files/latent-diffusion/vq-f8.zip.
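If you prefer to script the checkpoint download, here is a minimal Python sketch (assuming only the standard library) that fetches the VQ-VAE archive from the URL above and unpacks it into the `pretrained/` directory created during setup:

```python
import os
import urllib.request
import zipfile

URL = "https://ommer-lab.com/files/latent-diffusion/vq-f8.zip"
DEST_DIR = "pretrained"
ARCHIVE = os.path.join(DEST_DIR, "vq-f8.zip")

os.makedirs(DEST_DIR, exist_ok=True)

# Download the VQ-VAE (f=8) checkpoint released with Latent Diffusion.
urllib.request.urlretrieve(URL, ARCHIVE)

# Unpack next to the other pretrained weights.
with zipfile.ZipFile(ARCHIVE) as zf:
    zf.extractall(DEST_DIR)
```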
## Masked Contrastive Pre-Training

The instructions for SG-image pre-training can be found in the folder "sg_image_pretraining/".
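For intuition, the sketch below shows what a masked contrastive objective between scene-graph and image embeddings can look like: a CLIP-style symmetric InfoNCE loss with a random fraction of SG node tokens masked before pooling. This is a conceptual illustration only; the tensor shapes, the zero-masking scheme, and the function name `masked_contrastive_loss` are assumptions, not the code in `sg_image_pretraining/`.

```python
import torch
import torch.nn.functional as F

def masked_contrastive_loss(sg_tokens, img_emb, mask_ratio=0.25, temperature=0.07):
    """Symmetric InfoNCE between masked scene-graph and image embeddings.

    sg_tokens: (B, N, D) per-node SG embeddings (hypothetical shape)
    img_emb:   (B, D) global image embeddings
    """
    # Randomly drop a fraction of SG tokens (assumption: simple zero-masking).
    keep = (torch.rand(sg_tokens.shape[:2], device=sg_tokens.device) > mask_ratio).float()
    pooled = (sg_tokens * keep.unsqueeze(-1)).sum(1) / keep.sum(1, keepdim=True).clamp(min=1.0)

    # L2-normalize both modalities so dot products are cosine similarities.
    sg = F.normalize(pooled, dim=-1)
    im = F.normalize(img_emb, dim=-1)

    logits = sg @ im.t() / temperature                    # (B, B) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)

    # Matching SG-image pairs sit on the diagonal; contrast in both directions.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```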
## Diffusion Training

Kindly note that one should not skip the training stage and test directly. For a single GPU, run:

```bash
python trainer.py --base CONFIG_PATH -t --gpus 0,
```
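Training on several GPUs should follow PyTorch Lightning's comma-separated device list, e.g. `python trainer.py --base CONFIG_PATH -t --gpus 0,1,` (an assumption based on the single-GPU command above; check `trainer.py` if the flag does not parse as expected).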
**Not official**: alternatively, if you do not want to train the model from scratch, you can download trained weights from the following links: VG weight, COCO weight. Note that these checkpoints were trained for only 150 epochs.
## Sampling

```bash
python testset_ddim_sampler.py
```
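`testset_ddim_sampler.py` draws samples with DDIM. For reference, a single deterministic DDIM update step looks like the sketch below; this is the standard DDIM formula, not the repository's actual sampler code, and the argument names and shapes are assumptions.

```python
import torch

@torch.no_grad()
def ddim_step(x_t, eps, alpha_bar_t, alpha_bar_prev, eta=0.0):
    """One DDIM update given the model's noise prediction eps.

    x_t: current latent; alpha_bar_*: cumulative noise-schedule terms
    (scalar tensors). eta=0.0 gives the fully deterministic sampler.
    """
    # Predict x_0 from the current latent and the noise estimate.
    x0_pred = (x_t - (1 - alpha_bar_t).sqrt() * eps) / alpha_bar_t.sqrt()

    # Stochasticity of the step; zero when eta == 0.
    sigma = eta * (((1 - alpha_bar_prev) / (1 - alpha_bar_t))
                   * (1 - alpha_bar_t / alpha_bar_prev)).sqrt()

    # Direction pointing toward x_t, then move to the previous timestep.
    dir_xt = (1 - alpha_bar_prev - sigma ** 2).sqrt() * eps
    noise = sigma * torch.randn_like(x_t)
    return alpha_bar_prev.sqrt() * x0_pred + dir_xt + noise
```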
## Citation

If you find our code useful, please cite our papers:

```bibtex
@article{yang2022diffusionsg,
  title={Diffusion-based scene graph to image generation with masked contrastive pre-training},
  author={Yang, Ling and Huang, Zhilin and Song, Yang and Hong, Shenda and Li, Guohao and Zhang, Wentao and Cui, Bin and Ghanem, Bernard and Yang, Ming-Hsuan},
  journal={arXiv preprint arXiv:2211.11138},
  year={2022}
}

@article{li2024laion,
  title={LAION-SG: An Enhanced Large-Scale Dataset for Training Complex Image-Text Models with Structural Annotations},
  author={Li, Zejian and Meng, Chenye and Li, Yize and Yang, Ling and Zhang, Shengyuan and Ma, Jiarui and Li, Jiayi and Yang, Guang and Yang, Changyuan and Yang, Zhiyuan and others},
  journal={arXiv preprint arXiv:2412.08580},
  year={2024}
}
```