Type | Workflows
Stats | 1,104
Reviews | 47
Published | Feb 21, 2024
Base Model | Stable Cascade
Hash | AutoV2 6959CE3229
video TBA
Update: v35 txt2img + Lora & Canny ControlNet
Update: v82-Cascade Anyone
The Checkpoint update has arrived!
A new Checkpoint Method was released. All workflows have been refactored.
https://huggingface.co/stabilityai/stable-cascade/tree/main/comfyui_checkpoints
Put both files inside your ComfyUI models/checkpoints/ folder.
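If you'd rather script the download, here is a minimal Python sketch using the huggingface_hub library. The two filenames and the ComfyUI folder are assumptions on my part, so verify them against the repo page and your own install before running:

```python
# Minimal download sketch -- assumes the huggingface_hub package is installed
# (pip install huggingface_hub). Verify the two filenames on the repo page and
# point COMFYUI_DIR at your own ComfyUI install before running.
import shutil
from pathlib import Path
from huggingface_hub import hf_hub_download

COMFYUI_DIR = Path("ComfyUI")                      # adjust to your install
dest = COMFYUI_DIR / "models" / "checkpoints"
dest.mkdir(parents=True, exist_ok=True)

for name in ("stable_cascade_stage_b.safetensors",
             "stable_cascade_stage_c.safetensors"):
    cached = hf_hub_download(                      # downloads into the local HF cache
        repo_id="stabilityai/stable-cascade",
        filename=name,
        subfolder="comfyui_checkpoints",
    )
    shutil.copy2(cached, dest / name)              # then copy into models/checkpoints/
print("Checkpoints copied to", dest)
```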
v30-txt2img
- updated workflow for new checkpoint method.
- Text 2 Image.
links at top
v32-txt2img-lora
- updated workflow for new checkpoint method.
- Lora Loader
- Text 2 Image.
links at top
v35-txt2img-canny
- updated workflow for new checkpoint method.
- Lora Loader
- ControlNet Canny
- Text 2 Image.
links at top
v40-img2img
- updated workflow for new checkpoint method.
- Image to Image with prompting; Image Variation with an empty prompt.
links at top
v42-img2img-lora
- updated workflow for new checkpoint method.
- added Lora Loader for testing newly trained Loras
- Image to Image with prompting; Image Variation with an empty prompt.
links at top
v45-img2img-canny
- updated workflow for new checkpoint method.
- Lora Loader
- Canny support
- Image to Image with prompting; Image Variation with an empty prompt.
links at top
v50-img2vision
- updated workflow for new checkpoint method.
- Image to CLIP Vision + Text Prompt.
links at top
v54-img2vision-lora
- updated workflow for new checkpoint method.
- added Lora Loader for testing newly trained Loras
- Image to CLIP Vision + Text Prompt.
links at top
v55-img2vision-canny
- updated workflow for new checkpoint method.
- Image to CLIP Vision + Text Prompt.
- adds Canny support
links at top
v60-img2remix
- updated workflow for new checkpoint method.
- Multi-Image to CLIP Vision + Text Prompt.
links at top
v65-img2remix-canny
- updated workflow for new checkpoint method.
- Multi-Image to CLIP Vision + Text Prompt.
- adds Canny support
links at top
v66-img2remix-lora
- updated workflow for new checkpoint method.
- added Lora Loader for testing newly trained Loras
- Multi-Image to CLIP Vision + Text Prompt.
links at top
v70-img2remix-faceswap
- updated workflow for new checkpoint method.
- Multi-Image to CLIP Vision + Text Prompt.
- Use an HD face image with ReActor.
links at top
v75-img2faceswap-canny
- updated workflow for new checkpoint method.
- Multi-Image to CLIP Vision + Text Prompt.
- Canny support added
- Use an HD face image with ReActor.
links at top
v82-Cascade-Anyone
- Add a high-quality face image with 4 character reference images using prompts.
- built from v70 to approximate custom characters without training or ControlNet
links at top
v85-Anyone-canny
- Add a high-quality face image with 4 character reference images using prompts.
- built from v70 to approximate custom characters without training or ControlNet
- Canny support added
links at top
v95-img2vision-canny
- Add 3 high-quality reference images for CLIP Vision
- img2img with Canny using the same image
- built from v85 to do complex remix variations
- Canny ControlNet and Lora support added
links at top
UPDATE: removed the Photomaker version because it actually had no effect.
I want to stress that you MUST update your ComfyUI to the latest version. You should also update ALL your custom nodes, because there is no way to know which ones might affect the UNET, CLIP and VAE spaces that Cascade now uses to generate our images.
In addition, I have disabled a lot of custom nodes I did not need on that run. It's easy: just add ".disabled" to the folder name. This is what the button in the Manager does, and it's a simple way to "switch off" custom nodes.
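If you have a lot of node packs to toggle, a small Python sketch like this does the same rename for you. It assumes a default ComfyUI layout, and the folder name at the bottom is just an example:

```python
# A small sketch of the ".disabled" rename trick described above, assuming a
# default ComfyUI layout; the folder name at the bottom is just an example.
from pathlib import Path

CUSTOM_NODES = Path("ComfyUI") / "custom_nodes"    # adjust to your install

def set_node_pack_enabled(folder_name: str, enabled: bool) -> None:
    """Toggle a custom node pack by renaming its folder, e.g. 'foo' <-> 'foo.disabled'."""
    on = CUSTOM_NODES / folder_name
    off = CUSTOM_NODES / f"{folder_name}.disabled"
    if enabled and off.is_dir():
        off.rename(on)     # strip the suffix to switch the pack back on
    elif not enabled and on.is_dir():
        on.rename(off)     # append the suffix to switch the pack off

# Example (hypothetical folder name): switch off a pack you do not need this run.
set_node_pack_enabled("some-custom-node-pack", enabled=False)
```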
~
Everything below applies to the early method for loading all the models from the official repo: https://huggingface.co/stabilityai/stable-cascade
~ I will leave the early method here for anyone wishing to use it :)
UltraBasic Stable Cascade Workflows for ComfyUI:
Article here: https://civitai.com/articles/4161
IMG2IMG UPDATE:
These older workflows were deprecated on day 4 by a new method, but they still work fine.
v10 = txt2img Stable Cascade here: https://civitai.com/models/310409?modelVersionId=348385
v12 = v10 txt2img without custom nodes: https://civitai.com/models/310409?modelVersionId=351470
v16 = img2img (stage C) Stable Cascade Workflow here: https://civitai.com/models/310409?modelVersionId=351400
v17 = v16 img2img without custom nodes for scaling: https://civitai.com/models/310409?modelVersionId=351464
v18 = v16 img2img (stage B and C) now supported by a new default node: https://civitai.com/models/310409?modelVersionId=351658
You can squeeze it onto any GPU if you use the correct combination.
These notes are in the Workflow also ;)
Cascade Combos:
stage_b + stage_c ~ 22GB
stage_b_bf16 + stage_c_bf16 ~ 12GB
stage_b_lite + stage_c_lite ~ 8GB
stage_b_lite_bf16 + stage_c_lite_bf16 ~ 5GB
I put together the paths where you need to put all the models, in case you have to manually download each of them due to a poor connection or whatever :)
Hugging Face has the models we need; follow the chart below to find where they go.
https://huggingface.co/stabilityai/stable-cascade
Text Encoder
ComfyUI Path: models\clip\Stable-Cascade\
HF Filename: /text_encoder/model.safetensors
text encoder CLIP = 1.39GB
Stage C
ComfyUI Path: models\unet\Stable-Cascade\
HF Filename: stage_c.safetensors
stage_c = 14.4GB
stage_c_bf16 = 7.18GB
stage_c_lite = 4.12GB
stage_c_lite_bf16 = 2.06GB
Stage B
ComfyUI Path: models\unet\Stable-Cascade\
HF Filename: stage_b.safetensors
stage_b = 6.25GB
stage_b_bf16 = 3.13GB
stage_b_lite = 2.8GB
stage_b_lite_bf16 = 1.4GB
Stage A
ComfyUI Path: models\vae\Stable-Cascade\
HF Filename: stage_a.safetensors
stage_a = 73.7MB
Effnet Encoder
ComfyUI Path: models\vae\Stable-Cascade\
HF Filename: effnet_encoder.safetensors
img2img VAE encoder = 81.5MB
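To script the manual downloads, here is a hedged Python sketch that mirrors the chart above, assuming huggingface_hub is installed and a default ComfyUI folder layout. I picked the bf16 stage_b / stage_c variants as an example; swap in whichever files fit your GPU (see the Cascade Combos above) and verify the filenames on the repo page:

```python
# Sketch of the manual downloads, mirroring the chart above -- assumes
# huggingface_hub is installed and a default ComfyUI layout. The bf16
# stage_b / stage_c variants are only an example; use the files that fit your GPU.
import shutil
from pathlib import Path
from huggingface_hub import hf_hub_download

COMFYUI = Path("ComfyUI")                          # adjust to your install
REPO = "stabilityai/stable-cascade"

# (HF filename, HF subfolder or None, target ComfyUI folder)
FILES = [
    ("model.safetensors",          "text_encoder", COMFYUI / "models/clip/Stable-Cascade"),
    ("stage_c_bf16.safetensors",   None,           COMFYUI / "models/unet/Stable-Cascade"),
    ("stage_b_bf16.safetensors",   None,           COMFYUI / "models/unet/Stable-Cascade"),
    ("stage_a.safetensors",        None,           COMFYUI / "models/vae/Stable-Cascade"),
    ("effnet_encoder.safetensors", None,           COMFYUI / "models/vae/Stable-Cascade"),
]

for filename, subfolder, target in FILES:
    target.mkdir(parents=True, exist_ok=True)
    cached = hf_hub_download(repo_id=REPO, filename=filename, subfolder=subfolder)
    shutil.copy2(cached, target / Path(cached).name)
    print("copied", filename, "->", target)
```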