HUNYUAN | AllInOne ▪ Fast - 16GB VRAM
No need to buzz me, ty 💗 ..feedback is much more appreciated.
Includes the settings and values I use most often after daily use since day 1,
workflows that evolved over time, now built around the Fast LoRA ⬅️link
▪ BASIC workflows are designed to be as simple as possible.
The bare essentials, which I consider the absolute minimum.
▪ ADVANCED workflows include a bit more compared to the basic ones.
▪ ULTRA workflows are probably more suitable for those who usually don't see much light of day🤣
Legend to help you choose the right workflow:
✔️ Green check = UP-TO-DATE version for its category.
Includes the latest settings, tricks, updated nodes and samplers; works on the latest ComfyUI.
🟩🟧🟪 Colors = Basic / Advanced / Ultra
❌ = Based on deprecated nodes; you'll have to fix it yourself if you really want to use it.
Low VRAM? Try this:
and/or try using the GGUF models available here.
RTX 4000? Use this:
All the workflows available on this page are focused on prioritizing speed,
achieving good results AS QUICKLY AS POSSIBLE.
If you are looking for ultra-high-quality videos and the best this model can deliver,
then you may need to raise the steps above 30, change resolutions, lower the TeaCache influence, or even deactivate the Fast LoRA entirely.
If that is your case then good luck on waiting times...
Long waiting times make it impossible to truly understand how this model behaves and responds to every little settings change, the LoRAs you swap, the sizes, and so on.
It's really hard to master any AI model with eternal waiting times..
I settle for an acceptable balance between quality and speed, using settings exactly like those shared in these workflows. I might adjust the aspect ratio slightly depending on the scene, but overall, these are the settings I find most convenient.
Before commenting, please keep in mind:
Most issues you might encounter have probably already been widely discussed and solved on Discord, Reddit, and GitHub, and addressed in the description of the workflow you're using, so please read carefully
and consider doing some searching before commenting. The Advanced and Ultra workflows are intended for more experienced ComfyUI users. If you choose to install unfamiliar nodes, you take full responsibility.
I do this for fun, randomly, in my free time. No one is paying me for it, so please keep expectations low.
I started this alone, but now there's a small Discord server of people contributing their passion, experiments, and cool findings..
so thanks to their contributions this project continues to grow and improve for everyone's benefit.
Here's the link if you want to join and share your ideas, or contribute to improving the workflows for the good of the community.
WORKFLOWS DESCRIPTION:
🟩"BASIC All In One"
Uses native Comfy nodes; it has 3 methods of operation:
T2V
I2V (sort of: an image is multiplied by x frames and sent to the latent, with a denoising level balanced to preserve the structure, composition, and colors of the original image. I find this approach highly useful, as it saves inference time and allows for better guidance toward the desired result). Obviously this comes at the expense of general motion: lowering the denoise level too much causes the final result to become static, with minimal movement. The denoise threshold is up to you to decide based on your needs. (See the sketch below for the core idea.)
There are other methods to achieve a more accurate image-to-video process, but they are slow. I didn't even include a negative prompt in the workflow because it doubles the waiting times.
V2V: same concept as I2V above.
Requires: https://github.com/chengzeyi/Comfy-WaveSpeed
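For the curious, here is a minimal torch sketch of the frame-multiplication idea; the (B, C, T, H, W) layout, channel count, and function name are my own illustration, not the workflow's actual nodes:

```python
import torch

def image_to_video_latent(image_latent: torch.Tensor, num_frames: int) -> torch.Tensor:
    # image_latent: (B, C, 1, H, W) single-frame latent from the VAE encoder.
    # Repeating it along the time axis gives the sampler a "video" to denoise.
    return image_latent.repeat(1, 1, num_frames, 1, 1)

# Example: one image latent expanded to a 65-frame clip latent.
img = torch.randn(1, 16, 1, 64, 64)          # hypothetical shape
clip = image_to_video_latent(img, 65)        # -> (1, 16, 65, 64, 64)
# Sample this at partial denoise (e.g. 0.6-0.8): low enough to keep the
# original structure/colors, high enough to let some motion emerge.
```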
🟧 "ADVANCED All In One TEA ☕"
An improved version of the BASIC All In One TEA ☕, with additional methods to upscale faster, plus a lightweight captioning system for I2V and V2V that consumes only an additional 100 MB of VRAM.
Upscaling can be done in three ways:
Upscaling using the model. Best Quality. Slower (Refine is optional)
Upscale Classic + Refine. It uses a video upscaling model that I selected after testing a crazy number of video upscaling models; it is one of the fastest and gives results with good contrast and well-defined lines. It's certainly not the optimal choice when used alone, but when combined with the REFINE step, it produces well-defined videos. This option is a middle ground in timing between the first and third methods.
Latent upscale + Refine. This is my favorite. Fastest. Decent.
This method is essentially the same as the first, which is basically V2V, but at slightly lower steps and denoise (sketched below).
Three different methods, more choices based on preferences.
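Roughly, the latent-resize part boils down to something like this sketch (my own torch illustration, not the node's actual code; it assumes a (B, C, T, H, W) video latent):

```python
import torch
import torch.nn.functional as F

def latent_upscale(latent: torch.Tensor, scale: float = 1.5) -> torch.Tensor:
    # Upscale each frame of a video latent spatially, frame by frame.
    b, c, t, h, w = latent.shape
    frames = latent.permute(0, 2, 1, 3, 4).reshape(b * t, c, h, w)
    up = F.interpolate(frames, scale_factor=scale, mode="bicubic",
                       align_corners=False)
    nh, nw = up.shape[-2], up.shape[-1]
    return up.reshape(b, t, c, nh, nw).permute(0, 2, 1, 3, 4)

# The resized latent then goes through a short, low-denoise "Refine"
# sampling pass, skipping any decode/re-encode round-trip through the VAE.
```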
Requirements:
-ClipVitLargePatch14
download model.safetensors
rename it to clip-vit-large-patch14_OPENAI.safetensors
paste it in \models\clip
paste it in \models\ESRGAN\
Update Changelogs:
|1.1|
Faster upscaling
Better settings
|1.2|
removed redundancies, better logic
some errors fixed
added an extra box for loading a video and directly upscaling it
|1.3|
New prompting system.
Now you can copy and paste any prompt you find online, and this will automatically modify the words you don't like and/or add extra random words.
Fixed some latent auto-switch bugs (these gave me serious headaches)
Fixed seed issue, now locking seed will lock sampling
Some UI cleaning
|1.4|
Random Character LoRA Picker, an extra LoRA group dedicated to characters.
Fill the pink group with 5 character LoRAs you like, and a random LoRA will be selected.
(You can toggle this option on/off if you don't want to fill everything.)
You can see which LoRA was used in the dedicated window that shows the corresponding number.
Batch Video Processing – Huge Time Saver!
You can now generate videos at the bare minimum quality and later queue them all for upscaling, refining, or interpolating in a single step.
Just point it to the folder where the videos are saved, and the process will be done automatically.
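As a sketch of the batch idea (the folder path and file pattern are hypothetical, not the workflow's actual settings):

```python
from pathlib import Path

def queue_videos(folder: str, pattern: str = "*.mp4"):
    # Yield every video in the chosen folder so each one can be queued
    # for upscaling / refining / interpolation in one unattended run.
    yield from sorted(Path(folder).glob(pattern))

for clip in queue_videos("./outputs/raw"):   # hypothetical folder
    print(f"queueing {clip.name} for upscale/refine")
```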
Added Seed Picker for Each Stage (Upscale/Refine)
You can now, for example, lock the seed during the initial generation, then randomize the seed for the upscale or refine stage.
More Room for Video Previews
No more overlapping nodes when generating tall videos (don't exaggerate with the ratio, obviously)
Expanded Space for Sampler Previews
Enable preview methods in the manager to watch the generation progress in real time.
This allows you to interrupt the process if you don't like where it's going.
(I usually keep previews off, as enabling them takes slightly longer, but they can be helpful in some cases.)
Improved UI
Cleaned up some connections (noodles), removed redundancies, and enhanced overall efficiency.
All essential nodes are highlighted in blue and emphasized right below each corresponding video node, while everything else (backend) like switches, logic, mathematics, and things you shouldn't touch has been moved further down. You can now change settings or replace nodes with those you prefer much more easily.
Notifications
All nodes related to the browser notifications sent when each step is completed, which some people find annoying, have been moved to the very bottom and highlighted in gray. So, if they bother you, you can quickly find them, select them, and delete them.
NB:
These two errors in the console are completely fine. Just ignore them.
WARNING: DreamBigImageSwitch.IS_CHANGED() got an unexpected keyword argument 'select'
WARNING: SystemNotification.IS_CHANGED() missing 1 required positional argument: 'self'
🟪 "AIO | ULTRA ☕"
Embrace This Beast of Dystopian Mass Video Production!
This version is for the truly brave professionals and unlocks a lot of possibilities.
Plus, it includes settings for higher quality, sharper videos, and even faster speed, all while being nearly glitch-free.
All older workflows have also been updated to minimize glitches, as explained in my previous article.
We are achieving world-record speed here, but at the cost of some complexity. These workflows are becoming increasingly intimidating despite efforts to keep them clean and hide all automations in the back-end as much as possible.
That's why I call this workflow ULTRA: a powerhouse for tenacious Hunyuan users who want to achieve the best results in the shortest time possible, with all tools at their fingertips.
Key Features and Improvements:
Handy Console: Includes buttons to activate stages with no need to connect cables or navigate elsewhere. Everything is centralized in one place (Control Room), and functions can be accessed with ease.
T2V, I2V*, V2V, T2I, I2I Support: Seamless transitions between different workflows.
*I2V: an image is multiplied into x frames and sent to the latent. The official I2V model is not out yet. There's a temporary trick to do I2V here which requires Kijai's nodes.
Wildcards + Custom Prompting Options: Switch between classic prompting with wildcards and adding random words in a dedicated box, with automatic customizable word swapping or censoring.
Video Loading: Load videos directly into upscalers/refiners and skip the initial inference stage.
Batch Video Processing: Upscale or Refine multiple videos in sequence by loading them from a custom folder.
Interpolation: Smooth frame transitions for enhanced video quality.
Random Character LoRA Picker: Includes 9 LoRA nodes in addition to fixed LoRA loaders.
Upscaling Options: Supports upscaling, double upscaling, and downscaling processes.
Notifications: Receive notifications for each completed stage, organized in a separate section for easy removal if necessary.
Lightweight Captioning: Enables captioning for I2V and V2V with minimal additional VRAM usage (only 100MB).
- Sequential Upscale Under 1x / Double Upscaling
You can now downscale using the upscale process and then re-upscale with the refiner, or customize upscaler multipliers to upscale 2 times.
New Functionality:
The upscale value range now includes values as low as 0.5.
Two sliders are available: one for the initial upscale and another for the refiner (essentially another sampler, always V2V).
Applications:
Upscale, Refine, or combine the two
Upscale fast (latent resize + sampler) or accurate (resize + sampler)
Refine (works the same as upscale, can be used alone or as an auxiliary upscaler)
Double upscaling: Start small and upscale significantly in the final stage.
Downscale and re-upscale: Deconstruct at lower resolution and reconstruct at higher quality.
Combos: Upscale & Refine / Downscale & Upscale
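As a toy calculation of how the two multipliers chain (the numbers and function are just an illustration):

```python
def chained_resolution(w: int, h: int, upscale: float, refine: float) -> tuple[int, int]:
    # First slider scales the initial sampler's output; the second slider
    # scales the refiner's. A value below 1.0 downscales.
    return round(w * upscale * refine), round(h * upscale * refine)

# Downscale-then-reconstruct: 432x320 -> 216x160 -> 648x480
print(chained_resolution(432, 320, 0.5, 3.0))  # (648, 480)
```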
- Skip Decoders/Encoders Option
Save significant time by skipping raw decoding for each desired stage and going directly to the final result.
How It Works: If your prompt is likely to produce a good output and the preview method ("latent2RGB") is active in the manager, you can monitor the process in real time. Skip encoding/decoding by working exclusively in the latent space, generating and sending latent data directly to the upscaler until the process completes.
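A rough sketch of that flow, with identity stubs standing in for the real sampler/upscaler/VAE nodes (all names here are hypothetical):

```python
import torch

# Identity stubs: placeholders for the real sampler / upscaler / VAE nodes.
def sample(latent: torch.Tensor) -> torch.Tensor: return latent
def upscale_refine(latent: torch.Tensor) -> torch.Tensor: return latent
def vae_decode(latent: torch.Tensor) -> torch.Tensor: return latent

def run(latent: torch.Tensor, skip_decoders: bool = True) -> torch.Tensor:
    latent = sample(latent)
    if not skip_decoders:
        _ = vae_decode(latent)       # intermediate preview decode: pure overhead
    latent = upscale_refine(latent)  # everything stays in latent space
    return vae_decode(latent)        # decode to pixels once, at the very end

out = run(torch.randn(1, 16, 65, 40, 54))  # hypothetical latent shape
```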
Example:
A typical medium/high-quality generation might involve:
Resolution: ~432x320
Frames: 65
One Upscale: 1.5x (to 640x480)
Total Time: 162 seconds
In this example, by activating the preview in the manager and skipping the first decoder (the preview before upscaling), you can save ~30 seconds. The process then takes 133 seconds instead of 162.
Bypassing additional decoders (e.g., upscale further or refinement) can save even more time.
-Dynamic and Classic Prompting + Wildcards
You can use classic prompting with wildcard support, or activate Random Lines, which are added before and after the main prompt. Random Lines are essentially blocks of text where each line break represents a new entry. This allows you to create wildcards on the fly directly within the console, without needing to create a new text file (as standard wildcards require) or restart Comfy to use them.
This feature is especially useful when combined with the module that finds and replaces words with predefined ones (which is always active): for example, if you want to copy and paste a prompt found elsewhere but still wish to use your own style, characters, or specific words (which would otherwise require wildcards), Random Lines lets you run more tests in less time, without needing to manually edit the prompt you found at all. Just paste it and run.
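A minimal sketch of the two pieces, Random Lines and the always-on word swapper; the function names and details are my guess at the idea, not the actual nodes:

```python
import random

def random_line(block: str, rng: random.Random) -> str:
    # Each line break is a new wildcard entry; pick one at random.
    entries = [line.strip() for line in block.splitlines() if line.strip()]
    return rng.choice(entries)

def swap_words(prompt: str, replacements: dict[str, str]) -> str:
    # Always-active module: swap or censor predefined words in pasted prompts.
    for old, new in replacements.items():
        prompt = prompt.replace(old, new)
    return prompt

rng = random.Random(42)
before = "cinematic shot\nhandheld camera\nslow dolly zoom"
prompt = swap_words("a man walks through tokyo", {"a man": "my character"})
print(f"{random_line(before, rng)}, {prompt}")
```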
- Image Generation (T2I and I2I)
Explore Hunyuan's latent space with these image generation capabilities.
When the number of frames is set to 1, the image node activates automatically, allowing the image to be saved as a PNG.
Use the settings shown here for the best results:
T2I Example Gallery: Hunyuan Showcase
- Structural Changes / Additional Features
Motion Guider for I2V
This feature enhances motion for image-to-video workflows, lowering the chances of getting a static video as a result.
9 Random Character LoRAs Loader: Previously limited to 5, now expanded to 9.
Random Character LoRA Lock On/Off:
By default, each seed corresponds to a random LoRA
(e.g., seed no. 667 = LoRA no. 7). Now you can unlock this "character LoRA lock on seed" and regenerate the same video with a different random LoRA while maintaining the main seed.
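One plausible way to picture the lock (the modulo mapping is my own illustration; the workflow's real seed-to-LoRA mapping may differ):

```python
import random

def pick_character_lora(seed: int, num_loras: int = 9, lock: bool = True) -> int:
    if lock:
        # Lock ON: the main seed always maps to the same LoRA slot,
        # so re-running the same seed reproduces the same character.
        return seed % num_loras + 1
    # Lock OFF: draw a fresh random slot while the main seed (and thus
    # the rest of the video) stays unchanged.
    return random.randint(1, num_loras)

print(pick_character_lora(667))              # same slot every run
print(pick_character_lora(667, lock=False))  # varies per run
```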
Clarifications:
Let's call things by their real names: "Refine" and "Upscale" are both samplers here, each optimized for specific stages:
Upscale: Higher steps/denoise, fast results, balanced quality.
Refine: Lower steps/denoise, focused on fixing issues and enhancing details.
Refine can work alone, without upscaling, to address small issues or improve fine details.
UI Simplification:
The "classic upscale" is now replaced by a faster and better-performing resize + sharpness operation and hidden in back-end to save space.Frame Limit Issue (101+ Frames):
Generating more than 101 frames with latent upscale can cause problems. To address this, I added an option to upscale videos before switching to latent processing.
- Bug Fixes
Latent Upscale Change:
Latent upscaling now uses bicubic interpolation instead of nearest-exact, which performs better based on testing.
"Cliption" Bug Fixed
201-Frame Fix:
Generating 201-frame perfect loops caused artifacts with latent upscale. Switching to "resize" via the pink console buttons now resolves this issue.
- Performance and other infos:
Once you master it, you won’t want to go back. This workflow is designed to meet every need and handle every case, minimizing the need to move around the board too much. Everything is controlled from a central "Control Room."
Traditionally, managing these functions would require connecting/disconnecting cables or loading various workflows. Here, however, everything is automated and executed with just a few button presses.
Default settings (e.g., denoise, steps, resolution) are optimized for simplicity, but advanced users can easily adjust them to suit their needs.
-Limitations:
No Audio Integration:
While I have an audio-capable workflow, it doesn't make sense here. Audio should be processed separately for professional results.
No Post-Production Effects:
Effects like color correction, filmic grain, and other post-production enhancements are left to dedicated editing software or workflows. This workflow focuses on delivering a pure video product.
Interpolation Considerations:
Interpolation is included here. I set up the fastest one I could find, not necessarily the best. For best results, I typically use Topaz for both extra upscaling and interpolation after processing, but it's up to the user to choose their favorite interpolation method or final upscaling if needed.
Requirements:
ULTRA 1.3:
-UPDATE TO LATEST COMFY IS NEEDED!
-Wave Speed
ULTRA 1.2:
-Tea cache
-ClipVitLargePatch14
NB:
These two errors in the console are completely fine. Just ignore them.
WARNING: DreamBigImageSwitch.IS_CHANGED() got an unexpected keyword argument 'select'
WARNING: SystemNotification.IS_CHANGED() missing 1 required positional argument: 'self'
Update Changelogs:
|1.1|
Better color scheme to easily understand how the upscaling stages work
Check the images to understand
|1.2|
Wildcards.
You can now switch from the Classic prompting system (with wildcards allowed)
to the fancy one previously available
|1.3|
An extra wavespeed boost kicks in for upscalers.
Changed samplers to native Comfy: no more TTP, no more interrupt error messages.
Tea cache is now a separate node.
Fixed a notification timing error and text again.
Replaced a node that was causing errors for some users: "if any" now swaps with "eden_comfy_pipelines."
Added SPICE, an extra-fast LoRA toggle that activates only in upscalers to speed up inference at lower steps and reduce noise.
Added Block Cache and Sage to the setup. Users who have them working can enable them.
Changed the default sampler from Euler Beta to the new "gradient_estimation" sampler introduced in the latest Comfy update.
Added a video info box for each stage (size, duration).
Removed "random lines."
Adjusted default values for general use.
Upscale 1 can now function as a refiner as well.
When pressing "Latent Resize" or "Resize," it will automatically activate the correct sampler.
A single-frame image is now displayed in other stages as well (when active).
Thanks to all the users who contributed on Discord to these workflow improvements!
🟩"BASIC All In One TEA ☕"
An improved version of the BASIC All In One, with slightly different settings for even faster speed.
Plus toggle to activate upscaling stages.
It's based on the TeaCache nodes; you need to install them to make it work.
🟩 "Hybrid + Negative Prompt"
Uses TeaCache nodes for upscaling and Comfy native nodes for everything else, with the addition of a negative prompt that can be enabled or disabled. I shared this one because it was requested; I don't use the negative prompt much because it doubles the render times. TeaCache nodes are required.
Bonus TIPS:
Here's an article with all the tips and tricks I'm writing as I test this model:
https://civitai.com/articles/9584
If you struggle to use my workflows for any reason, you can at least refer to the article above. You will get a lot of precious quality-of-life tips for building and improving your Hunyuan experience.
All the workflows labeled with an ❌ are OLD and highly experimental; they rely on Kijai nodes that were released at a very early stage of development.
If you want to explore them, you'll need to fix them yourself, which should be pretty easy.
Everything I do, I do in my free time for personal enjoyment.
But if you want to contribute, you can support me here
₿tc : bc1q3yg44ajn2ucs35gpy8jkc5dfxcxuuu90f6kszj
so I can consider doing intensive tests and comparisons using cloud GPUs instead of stressing my own GPU with slow multi-tests..
..but honestly, there are people who deserve WAY more support than I do, like Kijai.
I'll leave his link; if you're feeling generous, start with him. Thanks!
Have fun