Hydra - Anything to Stable Cascade

model page: https://civitai.com/models/317950

Hydra is a workflow series with a single goal: allowing any model to be used with Stable Cascade. Although a direct conversion is strictly impossible, we can approximate it by generating with SDXL ControlNets and then passing the images into Clip Vision for Cascade.

Hydra-v43 (full sized)
- separate prompt and image inputs for SDXL and Cascade
- passes an array of generated images to Cascade
- SDXL uses ControlNet, Cascade uses img2img + Vision
(faceswap optional, bypass with Ctrl+B)

Hydra-v54 (medium sized)
- based on v43, simplified for speed
- combined prompt & image input
- SDXL uses ControlNets
(reduced custom nodes)


Hydra-v65 (standard sized)
- based on v54, simplified for speed
- combined prompt & image input
- SDXL/LoRA txt2img
(reduced custom nodes)

Hydra-v69
- based on v65, uses SD2.1 models/LoRAs
- SD2.1/LoRA txt2img
(reduced custom nodes)

Hydra-v76
- based on v65, uses SD1.5 models/LoRAs
- SD1.5/LoRA txt2img
(reduced custom nodes)



The lower-numbered versions are the more complex ones in this series: it started with the most control, then features were simplified or removed to reach the basic versions.

v43 offers the most control. Using SDXL, it generates 6 images; the final 3 are sent to Vision (toggle the bypassed nodes to use all 6). Cascade then does img2img with the Vision inputs, which gets us close to the output styles of older models. v54 is a lighter, simpler version that eliminates Cascade Vision and relies on img2img + prompts instead; it is easier to use but less accurate. v65 drops the ControlNets so people can use it without having to download a big collection of ControlNet models.
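To make the v43 flow concrete, here is a minimal Python sketch of the idea. Every function below is a hypothetical placeholder standing in for a group of ComfyUI nodes, not a real API, and the sample prompt and control image are arbitrary:

def sdxl_generate(prompt, control_image):
    # stand-in for the SDXL + ControlNet section: returns a batch of 6 primed images
    return [f"image_{i}({prompt}, {control_image})" for i in range(6)]

def clip_vision_encode(images):
    # stand-in for Cascade's Clip Vision encoder
    return [f"embedding({img})" for img in images]

def cascade_img2img(images, embeddings, denoise):
    # stand-in for Stage C img2img conditioned on the Vision embeddings
    return [f"cascade({img}, {emb}, denoise={denoise})"
            for img, emb in zip(images, embeddings)]

batch = sdxl_generate("a castle at dusk", "pose.png")
selected = batch[-3:]                      # v43 sends the final 3 images by default
vision = clip_vision_encode(selected)      # toggle the bypassed nodes to use all 6
result = cascade_img2img(selected, vision, denoise=0.5)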

v69 & v76 are designed to let any SD2.1 and SD1.5 models take advantage of this pipeline into Stable Cascade. Adjust the denoise on Stage C to control the strength of Cascade.

light = 0.3
mid = 0.5
heavy = 0.7
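As a quick reference, those presets could be expressed like this (a trivial Python sketch; the dictionary and helper are illustrations only, not part of the workflow):

CASCADE_STRENGTH = {"light": 0.3, "mid": 0.5, "heavy": 0.7}

def stage_c_denoise(strength):
    # higher denoise lets Cascade override more of the input image
    return CASCADE_STRENGTH[strength]

print(stage_c_denoise("mid"))  # 0.5: a balanced blend of input image and Cascade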



SDXL benefits from ControlNet and LoRA, so you can create a spread of images to prime the Cascade generation. With careful prompting and a good input image, all you need to do is alter the denoise on the Stage C node.

Try sending an output back into Cascade to get even closer to an SDXL output, or mix prompts and images to experiment with blending the same input image in Cascade.

This is a complicated workflow. While you can install any missing nodes and models yourself, the video explains a lot of detailed information about using the workflow. If you have problems, simpler versions will be released soon, aiming to use fewer custom nodes to achieve the same thing.



Faster operation:


Once you have generated the SDXL images, consider using the seed C control. If you only change the Cascade section and do not change the SDXL section, ComfyUI will only generate from that point onward and will not regenerate all those images again.
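The speed-up works because ComfyUI caches node outputs and only re-executes nodes whose inputs changed. A rough Python analogy, with render_sdxl_batch as a hypothetical stand-in for the whole SDXL section:

from functools import lru_cache

@lru_cache(maxsize=None)
def render_sdxl_batch(seed):
    # expensive work: only runs when the SDXL seed actually changes
    print(f"rendering SDXL batch, seed={seed}")
    return f"batch({seed})"

def run(sdxl_seed, cascade_seed):
    batch = render_sdxl_batch(sdxl_seed)   # cached when sdxl_seed is unchanged
    return f"cascade({batch}, seed={cascade_seed})"

run(42, 1)  # renders the SDXL batch
run(42, 2)  # reuses the cached batch; only the Cascade section repeats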



Built-in hires:


Two-stage KSamplers will fix some render problems.
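The idea behind the two-stage setup, sketched in Python (all functions are hypothetical placeholders for the corresponding sampler and upscale nodes):

def ksample(latent, denoise):
    # stand-in for a KSampler node
    return f"sampled({latent}, denoise={denoise})"

def upscale_latent(latent, factor):
    # stand-in for a latent upscale node
    return f"upscaled({latent}, x{factor})"

base = ksample("empty_latent", denoise=1.0)               # stage 1: full denoise at base resolution
hires = ksample(upscale_latent(base, 1.5), denoise=0.4)   # stage 2: light pass cleans up render problems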

Did you read to the bottom?


Well then, I'll let you in on a little secret: if you are proficient with ComfyUI, you will be able to easily use this workflow to generate a collection of images. These images are then pushed into the img2img process with Cascade's Clip Vision feature (like a low-rent DreamBooth dataset). It also works with any Stable Diffusion model: if you change the checkpoint, LoRAs, and ControlNets to match SD1.5, SD2.1, or SDXL, they will all work fine, because images bridge the gap to interface with Cascade.
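A sketch of that trick in Python: collect a folder of outputs from any model, then push each one through Clip Vision into Cascade img2img. The folder name and both functions are hypothetical placeholders, not real ComfyUI calls:

from pathlib import Path

def clip_vision_encode(image_path):
    # stand-in for the Clip Vision encode step
    return f"embedding({image_path.name})"

def cascade_img2img(image_path, embedding, denoise):
    # stand-in for Stage C img2img
    return f"cascade({image_path.name}, {embedding}, denoise={denoise})"

dataset = sorted(Path("sdxl_outputs").glob("*.png"))  # works for SD1.5/SD2.1/SDXL outputs alike
for image in dataset:
    emb = clip_vision_encode(image)
    print(cascade_img2img(image, emb, denoise=0.5))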



www.fivebelowfive.uk
