# StableGen: AI-Powered 3D Generation & Texturing in Blender ✨
Create 3D assets from images and prompts, then texture and refine them - all inside Blender.
StableGen is an open-source Blender addon that brings generative AI into your 3D workflow. Generate fully textured 3D meshes from a single image or text prompt via TRELLIS.2, then texture and refine them - or any existing model - using SDXL, FLUX.1-dev, or Qwen Image Edit through a flexible ComfyUI backend.
<details> <summary><strong>Table of Contents</strong></summary>

- 🌟 Key Features
- 🚀 Showcase Gallery
- 🛠️ How It Works
- 💻 System Requirements
- ⚙️ Installation
- 🚀 Quick Start Guide
- 📖 Usage & Parameters Overview
- 📁 Output Directory Structure
- 🤔 Troubleshooting
- 🤝 Contributing
- 📜 License
- 🙏 Acknowledgements
- 💡 List of planned features
- 📧 Contact
</details>
## 🌟 Key Features
StableGen brings AI-powered 3D generation and texturing directly into Blender:
- 🧊 TRELLIS.2: Image & Prompt to 3D:
  - Generate fully textured 3D meshes from a single reference image or text prompt using Microsoft's TRELLIS.2 (4B-parameter model).
  - Multiple resolution modes: 512, 1024, 1024 Cascade (recommended), and 1536 Cascade for maximum geometric detail.
  - Flexible texture pipeline: use TRELLIS.2's native PBR textures, or automatically texture the generated mesh with SDXL, FLUX.1-dev, or Qwen Image Edit for higher-quality diffusion textures.
  - Preview Gallery: generate multiple candidate images with different seeds and pick the best before committing to 3D generation.
  - Smart mesh handling: auto-recovery from mesh corruption, configurable decimation/remeshing, import scaling, and studio lighting setup.
  - VRAM-conscious: disk offloading and a configurable attention backend.
  - Powered by ComfyUI-TRELLIS2 (installable via `installer.py`).
- 🌍 Scene-Wide Multi-Mesh Texturing:
  - Don't just texture one mesh at a time! StableGen applies textures to all mesh objects in your scene simultaneously from your defined camera viewpoints, or to only the selected objects if you prefer.
  - Achieve a cohesive look across entire environments or collections of assets in a single generation pass.
  - Ideal for concept art, look development for complex scenes, and batch-texturing asset libraries.
- 🎨 Multi-View Consistency:
  - Sequential Mode: generates textures viewpoint by viewpoint on each mesh, using inpainting and visibility masks for high consistency across complex surfaces.
  - Grid Mode: processes multiple viewpoints for all meshes simultaneously for faster previews. Includes an optional refinement pass.
  - Sophisticated weighted blending ensures smooth transitions between views.
- 📷 Advanced Camera Placement:
  - 7 placement strategies: Orbit Ring, Fan Arc, Hemisphere, PCA-Axis, Normal-Weighted K-means, Greedy Occlusion Coverage, and Interactive Visibility-Weighted placement.
  - Per-camera optimal aspect ratios: each camera gets its own resolution computed from the mesh's silhouette, so no pixels are wasted on letterboxing.
  - Unlimited cameras: no more 8-camera limit.
  - Camera generation order: drag-and-drop reorder list with 6 preset strategies to control the processing order in Sequential mode.
  - Camera cloning, mirroring, and floating viewport prompt labels.
- 🎯 Local Edit Mode:
  - Point cameras at specific areas to modify; the new texture blends seamlessly over the original using angle-based and vignette-based feathering.
  - Separate angle ramp and silhouette edge feathering controls for precise blending.
  - Works with all architectures (SDXL, FLUX.1-dev, Qwen Image Edit).
- 📐 Precise Geometric Control with ControlNet:
  - Leverage multiple ControlNet units (Depth, Canny, Normal) simultaneously to ensure generated textures respect your model's geometry.
  - Fine-tune strength and start/end steps for each ControlNet unit.
  - Supports custom ControlNet model mapping.
- 🖌️ Powerful Style Guidance with IPAdapter:
  - Use external reference images to guide the style, mood, and content of your textures with IPAdapter.
  - Employ IPAdapter without a reference image for enhanced consistency in multi-view generation modes.
  - Control IPAdapter strength, weight type, and active steps.
- ⚙️ Flexible ComfyUI Backend:
  - Connects to your existing ComfyUI installation, allowing you to use your preferred SDXL checkpoints, custom LoRAs, and the new Qwen Image Edit workflow alongside experimental FLUX.1-dev support.
  - Offloads heavy computation to the ComfyUI server, keeping Blender mostly responsive.
- ✨ Advanced Inpainting & Refinement:
  - Refine Mode (Img2Img): re-style, enhance, or add detail to existing textures (StableGen-generated or otherwise) using an image-to-image process.
  - Local Edit Mode: selectively modify specific areas while preserving the rest, with independent angle and vignette feathering controls.
  - UV Inpaint Mode: intelligently fills untextured areas directly on your model's UV map using surrounding texture context.
  - Color Matching: match each generated view's colors to the current texture before blending, using one of several algorithms (MKL, Reinhard, Histogram, MVGD).
- 🛠️ Integrated Workflow Tools:
  - Camera Setup: quickly add and arrange multiple cameras with 7 placement strategies, per-camera aspect ratios, interactive occlusion preview, and customizable generation order.
  - View-Specific Prompts: assign unique text prompts to individual camera viewpoints for targeted details.
  - Texture Baking: convert complex procedural StableGen materials into standard UV image textures. A "Flatten for Refine" option lets you bake and continue editing.
  - Debug Tools: visualize projection coverage, UV alignment, and weight blending without running AI generation.
  - HDRI Setup, Modifier Application, Curve Conversion, GIF/MP4 Export & Reproject.
- 📋 Preset System:
  - Get started quickly with built-in presets for common scenarios (e.g., "Default", "Characters", "Quick Draft").
  - Save and manage your own custom parameter configurations for repeatable workflows.
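To give a feel for what the Color Matching step does, here is a minimal sketch of Reinhard-style color transfer, one of the algorithms listed above. This is an illustrative per-channel RGB version, not StableGen's actual implementation (the original Reinhard method works in a Lab-like color space, and StableGen's code may differ in details):

```python
import numpy as np

def reinhard_match(source: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Shift source's per-channel mean/std to match target's.

    Illustrative sketch only: arrays are float (H, W, 3) in [0, 1].
    """
    src_mean, src_std = source.mean(axis=(0, 1)), source.std(axis=(0, 1))
    tgt_mean, tgt_std = target.mean(axis=(0, 1)), target.std(axis=(0, 1))
    # Normalize source statistics, then re-scale to the target's statistics.
    matched = (source - src_mean) / (src_std + 1e-8) * tgt_std + tgt_mean
    return np.clip(matched, 0.0, 1.0)
```

Conceptually, each newly generated view is pulled toward the color statistics of the texture already on the mesh before blending, which avoids visible color seams between views.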
## 🚀 Showcase Gallery
<details open> <summary>See what StableGen can do!</summary><sub>Tip: Refresh the page to synchronize all GIF animations.</sub>
### Showcase 1: Text-to-3D (SDXL)
Assets generated entirely from a text prompt using the TRELLIS.2 pipeline with SDXL-based texturing.
| Dragon | Wizard | Hut |
| :------: | :------: | :------: |
| <img src="docs/img/trellis2/sdxl_dragon.gif" alt="Fantasy dragon" width="200"> | <img src="docs/img/trellis2/sdxl_wizard.gif" alt="Wizard character" width="200"> | <img src="docs/img/trellis2/sdxl_hut.gif" alt="Hut" width="200"> |
| Telescope | Robot | Cyber Ninja |
| <img src="docs/img/trellis2/sdxl_telescope.gif" alt="Telescope" width="200"> | <img src="docs/img/trellis2/sdxl_robot.gif" alt="Robot" width="200"> | <img src="docs/img/trellis2/sdxl_cyber_ninja.gif" alt="Cyber Ninja" width="200"> |
<details> <summary>Prompts used</summary>

- Dragon: "fantasy dragon"
- Wizard: "wizard character, intricate embroidered purple and gold robes, pointed hat, wooden staff with glowing crystal, leather belt with pouches, fantasy character concept art, 4k"
- Hut: "house, small house, cozy, wooden, hut"
- Telescope: "antique brass telescope, tarnished patina with bright spots from handling, leather grip wrap, extended sections, mahogany tripod, product photography, 4k"
- Robot: "giant robot, mecha, cyberpunk style, sci-fi, white body, intricate details, neon accents"
- Cyber Ninja: "full body character, neutral pose, cyber-ninja, futuristic assassin, matte black carbon fiber stealth suit, hexagonal weave pattern, faceless helmet, glowing red neon visor slit, metallic silver shoulder armor, cyberpunk aesthetic, high contrast materials, unreal engine 5 render"
</details>
### Showcase 2: Text-to-3D (Qwen)
Text-to-3D via TRELLIS.2 with Qwen Image Edit texturing - well-suited for stylized objects and crisp details.
| Barrel | Chest | Crate |
| :------: | :------: | :------: |
| <img src="docs/img/trellis2/qwen_barrel.gif" alt="Barrel" width="200"> | <img src="docs/img/trellis2/qwen_chest.gif" alt="Chest" width="200"> | <img src="docs/img/trellis2/qwen_crate.gif" alt="Crate" width="200"> |
| Obelisk | Robot | Tree Stump |
| <img src="docs/img/trellis2/qwen_obelisk.gif" alt="Obelisk" width="200"> | <img src="docs/img/trellis2/qwen_robot.gif" alt="Robot" width="200"> | <img src="docs/img/trellis2/qwen_tree_stump.gif" alt="Tree Stump" width="200"> |
<details> <summary>Prompts used</summary>

- Barrel: *"A chunky, stylized wooden barrel bound by thick, oversized iron hoops. The wo