# Dreambooth
Fine-tuning of Stable Diffusion models
Run Dreambooth or Low-rank Adaptation (LoRA) from the same notebook:
<a target="_blank" href="https://colab.research.google.com/github/brian6091/Dreambooth/blob/main/FineTuning_colab.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" height="28px" width="162px" alt="Open In Colab"/> </a>
Tested with Tesla T4 and A100 GPUs on Google Colab (some settings will not work on a T4 due to its limited memory).
Tested with Stable Diffusion v1-5 and Stable Diffusion v2-base.
This notebook borrows elements from ShivamShrirao's implementation, but is distinguished by several features:
- Based on the main Hugging Face Diffusers🧨 branch, so it is easy to stay up to date
- Low-rank Adaptation (LoRA) for faster and more memory-efficient fine-tuning (using cloneofsimo's implementation)
- Data augmentation (random cropping, flipping, and resizing), which can reduce manual prepping and cropping of images in some cases (e.g., when training a style)
- More parameters for experimentation (LoRA rank, Adam optimizer parameters, the cosine_with_restarts learning-rate scheduler, etc.), all of which are dumped to a JSON file so you can remember what you did
- Drops some text-conditioning during training to improve classifier-free guidance sampling (e.g., as was done when fine-tuning SD v1-5)
- Image captioning using filenames or associated text files
- Training loss and prior-class loss are tracked separately (and can be visualized with TensorBoard)
- Option to generate exponentially-weighted moving average (EMA) weights for the UNet
- Inference with trained models uses Diffusers🧨 pipelines and does not rely on any web apps
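Two of the features above, filename-based captioning and conditioning dropout, can be sketched roughly as follows. This is an illustrative assumption of how such a helper might look (the function name, the underscores-to-spaces convention, and the default drop probability are hypothetical, not the notebook's actual code):

```python
import random
from pathlib import Path


def caption_from_filename(path, drop_prob=0.1, rng=random):
    """Derive a training caption from an image filename, occasionally
    dropping it for classifier-free guidance.

    Hypothetical sketch: the filename stem is read as the caption
    (underscores treated as spaces), and with probability `drop_prob`
    an empty caption is returned so the model also sees unconditional
    examples, which improves classifier-free guidance at sampling time.
    """
    if rng.random() < drop_prob:
        return ""  # drop the text-conditioning for this sample
    return Path(path).stem.replace("_", " ")


# Example usage: deterministic at the probability extremes
print(caption_from_filename("a_photo_of_sks_dog.png", drop_prob=0.0))
print(caption_from_filename("a_photo_of_sks_dog.png", drop_prob=1.0))
```

In a real training loop the dropout decision would be made per batch element inside the dataloader, and a sidecar `.txt` file (when present) would take precedence over the filename.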
Comparison of Dreambooth and LoRA (more information here):
<a><img src="https://drive.google.com/uc?id=1PQqL3omKCWStkrJgW3JecOrne3xqbScr"></a> full-size image here for the pixel-peepers
