
AVeryComfyNerd

ComfyUI related stuff and things

Install / Use

/learn @nerdyrodent/AVeryComfyNerd
About this skill


Supported Platforms

Universal

README

Overview

A variety of ComfyUI-related workflows and other resources. You'll need different models and custom nodes for each workflow; check each entry's description for what it requires.

Resources

You'll need models and other resources for ComfyUI. Check the table below for links to everything from ControlNet models to upscalers.

| Item | Description | Link |
| --- | --- | --- |
| ComfyUI | The main thing you'll need! | https://github.com/comfyanonymous/ComfyUI<br>See https://youtu.be/2r3uM_b3zA8 for an install guide |
| ComfyUI Manager | Install any missing nodes using this | https://github.com/ltdrdata/ComfyUI-Manager |
| Stability AI | Models & VAEs | https://huggingface.co/stabilityai |
| Text-to-Image models | Text-to-image models | https://huggingface.co/models?pipeline_tag=text-to-image&sort=trending |
| SSD-1B | Text-to-image model | https://huggingface.co/segmind/SSD-1B |
| ControlNet Models | ControlNet models | https://huggingface.co/lllyasviel/sd_control_collection/tree/main<br>https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/tree/main |
| QR Code Monster ControlNet | ControlNet model | https://huggingface.co/monster-labs/control_v1p_sd15_qrcode_monster |
| IP Adapter | GitHub repo | https://github.com/tencent-ailab/IP-Adapter |
| IP Adapter models | Models | https://huggingface.co/h94/IP-Adapter |
| T2I Adapter | GitHub repo | https://github.com/TencentARC/T2I-Adapter |
| Control LoRA | Control models | https://huggingface.co/stabilityai/control-lora |
| AnimateDiff | Original repo, many links and more info | https://github.com/guoyww/AnimateDiff |
| Latent Consistency Models | Models | https://huggingface.co/latent-consistency |
| Upscale Wiki | Many models & info | https://upscale.wiki/wiki/Main_Page |
| Artist Style Studies | SDXL prompt output examples for inspiration | https://sdxl.parrotzone.art/ |
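With this many separate downloads it is easy to end up with a workflow that silently lacks a model. One quick sanity check is to verify the expected files exist before launching ComfyUI. This is a minimal sketch, not part of the repo: the helper name and the example file list are illustrative, though the directory layout (`models/checkpoints`, `models/controlnet`, `models/clip_vision`) matches a stock ComfyUI install.

```python
from pathlib import Path

# Hypothetical helper: report which expected model files are missing
# under a ComfyUI install, relative to its root directory.
def missing_models(comfy_root, relative_paths):
    root = Path(comfy_root)
    return [p for p in relative_paths if not (root / p).is_file()]

if __name__ == "__main__":
    # Example file list - adjust to whichever workflow you are loading.
    needed = [
        "models/checkpoints/sd_xl_base_1.0.safetensors",
        "models/controlnet/control_v1p_sd15_qrcode_monster.safetensors",
        "models/clip_vision/model.safetensors",
    ]
    for path in missing_models("ComfyUI", needed):
        print(f"missing: {path}")
```

Running this before a session saves a round trip through a half-loaded graph and a red "model not found" node.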

List of workflows available

In ComfyUI the image IS the workflow. Simply drag or load a workflow image into ComfyUI! See the "troubleshooting" section if your local install is giving errors :)
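The image can carry the workflow because ComfyUI embeds the graph as JSON in the PNG's text metadata (recent versions write tEXt chunks keyed "workflow" and "prompt"). If you want to inspect a workflow image without opening the web UI, a stdlib-only sketch like the following can pull the JSON out; the function names are illustrative:

```python
import json
import struct

# Walk the PNG chunk structure and collect tEXt chunks, where ComfyUI
# stores the workflow graph ("workflow") and the flattened prompt
# ("prompt") as JSON strings.
def png_text_chunks(path):
    text = {}
    with open(path, "rb") as f:
        assert f.read(8) == b"\x89PNG\r\n\x1a\n", "not a PNG file"
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            length, ctype = struct.unpack(">I4s", header)
            data = f.read(length)
            f.read(4)  # skip the CRC
            if ctype == b"tEXt":
                key, _, value = data.partition(b"\x00")
                text[key.decode("latin-1")] = value.decode("latin-1")
            if ctype == b"IEND":
                break
    return text

def load_workflow(path):
    """Return the embedded workflow graph as a Python object."""
    return json.loads(png_text_chunks(path)["workflow"])
```

Note that an image that has been re-saved or stripped by an image host loses these chunks, which is why downloading the original PNG from the repo matters.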

| Workflow | Description | Version |
| --- | --- | --- |
| <img src="workflows/SDXL/SDXL_Depth_Badger.png" width="256px"> | Basic SDXL ControlNet workflow.<br>Introductory SDXL Canny & Depth ControlNet example.<br>See https://youtu.be/reqamcrPYiM for more information. | SDXL |
| <img src="workflows/SD15/nr_sd15_QR_Monster.png" width="256px"> | Basic QR Code Monster SD 1.5 ControlNet - make spiral art!<br>See also: https://youtu.be/D4oJz0w36ps | SD 1.5 |
| <img src="workflows/SD15/nr_sd15_QR_Monster_AnimateDiff_LatentUpscale.png" width="256px"> | QR Code Monster SD 1.5 ControlNet - make animated spiral art!<br>See also: https://youtu.be/D4oJz0w36ps | SD 1.5 |
| <img src="workflows/SD15/AnimateDIff_FreeU.png" width="256px"> | Updated QR Code Monster SD 1.5 ControlNet with AnimateDiff and FreeU.<br>Works best with the v1 QR Code Monster - https://huggingface.co/monster-labs/control_v1p_sd15_qrcode_monster | SD 1.5 |
| <img src="workflows/SD15/AnimateDiff_MotionLoRA.png" width="256px"> | AnimateDiff with Motion LoRA example. Pan up, down, left, right, etc. | SD 1.5 |
| <img src="workflows/SD15/Instant_LoRA_1.png" width="256px"> | Instant LoRA 1<br>Inspired by <a href="https://civitai.com/articles/2345/aloeveras-instant-lora-no-training-15-sdxl">AloeVera</a> (almost identical).<br>Really simple, no training, "LoRA"-like functionality.<br>SD 1.5. IP Adapter models:<br>1. https://huggingface.co/h94/IP-Adapter/blob/main/models/ip-adapter-plus_sd15.bin -> custom_nodes/IPAdapter-ComfyUI/models<br>2. https://huggingface.co/h94/IP-Adapter/blob/main/models/image_encoder/model.safetensors -> models/clip_vision<br>NB (2024): IPAdapter-ComfyUI from 2023 is now deprecated; replace it with a currently supported version before use.<br>Video guide - https://youtu.be/HtmIC6fqsMQ | SD 1.5 |
| <img src="workflows/SD15/Instant_LoRA_2.png" width="256px"> | Instant LoRA 2<br>As above, but with ControlNet to guide the shape | SD 1.5 |
| <img src="workflows/SD15/Instant_LoRA_3.png" width="256px"> | Instant LoRA 3<br>As above, but with QR Code Monster ControlNet too :) | SD 1.5 |
| <img src="workflows/SD15/Instant_LoRA_4.png" width="256px"> | Instant LoRA 4<br>As above, but with basic upscaling | SD 1.5 |
| <img src="workflows/SD15/Instant_LoRA_5.png" width="256px"> | Instant LoRA 5<br>As above, but with more upscaling to 16k+ | SD 1.5 |
| <img src="workflows/SD15/Instant_LoRA_6.png" width="256px"> | Instant LoRA 6<br>As above, but different upscaling to 16k+ | SD 1.5 |
| <img src="workflows/SD15/PromptTravel_AnimateDiff_IPAdapter.png" width="256px"> | Morphing AI videos of any length using AnimateDiff. SD 1.5. Includes IPAdapter & upscaling. IP Adapter models:<br>1. https://huggingface.co/h94/IP-Adapter/blob/main/models/ip-adapter-plus_sd15.bin -> custom_nodes/IPAdapter-ComfyUI/models<br>2. https://huggingface.co/h94/IP-Adapter/blob/main/models/image_encoder/model.safetensors -> models/clip_vision<br>Video guide - https://youtu.be/6A3a0QNPhIs | SD 1.5 |
| <img src="workflows/SD15/PromptTravel_AnimateDiff.png" width="256px"> | Morphing AI videos of any length using AnimateDiff. SD 1.5. Includes upscaling. Like above, but without IPAdapter controls. | SD 1.5 |
| <img src="workflows/SDXL/SDXL_Instant_LoRA_1.png" width="256px"> | SDXL "Instant LoRA" - basic.<br>Really simple, no training, "LoRA"-like functionality.<br>Uses SDXL IP Adapter - https://huggingface.co/h94/IP-Adapter<br>Video - https://youtu.be/dGL02W4QatI | SDXL |
| <img src="workflows/SDXL/SDXL_Instant_LoRA_2.png" width="256px"> | SDXL "Instant LoRA" - with CLIP Vision<br>Uses SDXL IP Adapter - https://huggingface.co/h94/IP-Adapter<br>Also uses "Revisions" CLIP Vision - https://huggingface.co/stabilityai/control-lora | SDXL |
| <img src="workflows/SDXL/SDXL_Instant_LoRA_3.png" width="256px"> | SDXL "Instant LoRA" - with CLIP Vision & ControlNet<br>Uses SDXL IP Adapter - https://huggingface.co/h94/IP-Adapter<br>Also uses "Revisions" CLIP Vision - https://huggingface.co/stabilityai/control-lora | SDXL |
| <img src="workflows/SD15/AnimateDiff_QRCode_Video.png" width="256px"> | AnimateDiff + QR Code (Vid2Vid)<br>Use any high-contrast input video to create guided animations! Spirals away... | SD 1.5 |
| <img src="workflows/SD15/Reposer2.png" width="256px"><br><img src="workflows/SD15/Reposer_Plus_bypass.png" width="256px"> | SD 1.5 Reposer (2 versions) - single face image to any pose. Get consistent faces!<br>No "roop" or similar face-swapping nodes required = easy install!<br>SD 1.5 ControlNet models:<br>https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/tree/main<br>IP Adapter models:<br>1. Face = https://huggingface.co/h94/IP-Adapter/blob/main/models/ip-adapter-plus-face_sd15.bin<br>2. Vision = https://huggingface.co/h94/IP-Adapter/blob/main/models/image_encoder/model.safetensors<br>NOTE: Reposer2.png now uses the even more updated version of IPAdapter.<br>Reposer Plus Bypass Edition is deprecated, but still available for download if you want to update any nodes at home.<br>Original Reposer Basic video guide - https://youtu.be/SacK9tMVNUA<br>Original Reposer Plus video guide - https://youtu.be/ZcCfwTkYSz8 | SD 1.5 |
| <img src="workflows/SD15/Video_Restyler.png" width="256px"> | SD 1.5 Video Styler! Combining IPAdapter with video-to-video for strange styles and weird animations.<br>Uses https://github.com/cubiq/ComfyUI_IPAdapter_plus<br>The pre-trained models are available on Hugging Face; download and place them in the ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/models directory.<br>For SD 1.5 you need:<br>* ip-adapter_sd15.bin<br>* ip-adapter_sd15_light.bin<br>* ip-adapter-plus_sd15.bin<br>* ip-adapter-plus-face_sd15.bin<br>Additionally, the image encoder must be placed in the ComfyUI/models/clip_vision/ directory.<br>They are the same models used by the other IPAdapter custom nodes ;) - symlinks are your friend!<br>Video guide - https://youtu.be/kJp8JzA2aVU | SD 1.5 |
| <img src="workflows/SDXL/SDXL_Reposer_Basic.png" width="256px"> | SDXL version of Reposer using the SDXL "IPAdapter Plus Face" model.<br>Pick a face then add a body in any pose - no training!<br>Works with photorealistic faces, anime faces, cartoon faces, etc. | SDXL |
| <img src="workflows/SDXL/SSD1B-SDXL-8GB.png" width="256px"> | SSD-1B workflow - SDXL for 8GB VRAM cards!<br>Model - https://huggingface.co/segmind/SSD-1B<br>Video - https://youtu.be/F-bKndyQ7L8 | SSD-1B |
| <img src="workflows/SD15/LCM_LoRA_Compare.png" width="256px"> | LCM LoRA vs normal sampling | SD 1.5, SDXL, SSD-1B |
| <img src="workflows/SD15/SD15_IPAdapterMask_Upscale.png" width="256px"> | IPAdapter attention masking example<br>Video - https://youtu.be/riLmjBlywcg | SD 1.5 |
| <img src="workflows/SD15/SD15_LCM_IPAdapter_Facefix.png" width="256px"> | IPAdapter attention masking example with extra toppings (LCM, Facefix)<br>Video - https://youtu.be/riLmjBlywcg | SD 1.5 |
| <img src="workflows/SDCore/SVD_Basic_Upscale.png" width="256px"> | Stable Video Diffusion example with a simple upscale and frame interpolation | SVD |
| <img src="workflows/SDCore/SDXL_Turbo_Basic.png" width="256px"> | SDXL Turbo - 1-step diffusion! | SDXL Turbo, SD2 Turbo |
| <img src="workflows/SD15/ComfyMagicAnimate.png" width="256px"> | A very basic attempt at a "Comfy MagicAnimate". Needs more work :)<br>Links:<br>Magic Animate - https://github.com/magic-research/magic-animate<br>Magic Animate (Windows) - https://github.com/sdbds/magic-animate-for-windows<br>DreamingTulpa - https://twitter.com/dreamingtulpa/status/1730876691755450572<br>CocktailPeanut - https://twitter.com/cocktailpeanut/status/1732052909720797524<br>Google Colab - https://github.com/camenduru/MagicAnimate-colab<br>Huggingface Space - https://huggingface.co/spaces/zcxu-eric/magicanimate<br>Vid2DensePose - https://github.com/Flode-Labs/vid2densepose<br><br>Model Downloads for the MagicAnimate Grad | |
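Several rows above note that different IPAdapter node packs expect the same model files in different folders, and that "symlinks are your friend". A small sketch of that idea follows; the helper name is hypothetical, the paths in the comment are the ones named in the table, and on Windows creating symlinks may require developer mode or elevated rights.

```python
import os
from pathlib import Path

# Hypothetical helper: link one downloaded model file into a custom
# node's models folder instead of keeping a duplicate copy. Typical
# use is sharing ComfyUI/models/clip_vision/model.safetensors with
# ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/models.
def link_model(real_file, node_models_dir):
    """Symlink real_file into node_models_dir, skipping existing entries."""
    real = Path(real_file).resolve()
    dest = Path(node_models_dir) / real.name
    dest.parent.mkdir(parents=True, exist_ok=True)
    if not dest.exists():
        os.symlink(real, dest)
    return dest
```

Linking rather than copying keeps multi-gigabyte encoders on disk once, and updating the original file updates every node pack that links to it.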

View on GitHub
GitHub Stars: 1.3k
Category: Development
Updated: 14d ago
Forks: 91

Security Score

95/100

Audited on Mar 20, 2026

No findings