Type | Workflows
Stats | 7,094 0
Reviews | (406)
Published | Mar 5, 2024
Base Model |
Hash | AutoV2 02A518BA19




Simple workflow to animate a still image with IP adapter.
Using Topaz Video AI to upscale all my videos.
Models used:
AnimateLCM_sd15_t2v.ckpt
https://huggingface.co/wangfuyun/AnimateLCM/blob/main/AnimateLCM_sd15_t2v.ckpt
AnimateLCM_sd15_t2v_lora.safetensors
https://huggingface.co/wangfuyun/AnimateLCM/blob/main/AnimateLCM_sd15_t2v_lora.safetensors
IP-Adapter
https://huggingface.co/h94/IP-Adapter/blob/main/models/ip-adapter-plus_sd15.bin
Clip Vision
https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K/blob/main/model.safetensors
SHATTER Motion LoRA
https://civitai.com/models/312519
Photon LCM
https://civitai.com/models/306814
Credit to Machine Delusions for the initial LCM workflow that spawned this, and to Cerspense for dialing in the settings over the past few weeks.
--
v2 - updated to latest controlnets
v3 - updated broken node
Discussion
Hi! Does anybody know why the HighRes-Fix node isn't working anymore?
Is anyone else having the same problem, and does anyone know a fix?
HighRes-Fix Script 14:
- Value -1 smaller than min of 0: seed
- Value not in list: pixel_upscaler: 'ESRGAN\1x-AnimeUndeint-Compact.pth' not in ['4x_NMKD-Siax_200k.pth']
- Value not in list: control_net_name: 'Control nets\DensePose.safetensors' not in ['control_v1p_sd15_qrcode_monster.s
It gives some really nice results. Is it possible to increase the length of the video? The "frames" node only increases the speed, and the higher the value, the weirder the video becomes, with lots of artifacts. And when I set it to more than 100, my memory runs out, even on a 4090. Any help would be greatly appreciated, as I'm completely new to ComfyUI.
When loading the graph, the following node types were not found:
Primitive integer [Crystools]
Nodes that have failed to load will show as red on the graph.
Could you please advise?
If you're going to go through the trouble of creating and sharing the workflow, would it really be so much trouble to properly document which folders the models need to go into? Specifically, the LoRA needs to go into \custom_nodes\ComfyUI-AnimateDiff-Evolved\models. You're the author of the linked motion LoRA as well, and you don't even note this in that description.
Hi, how are you? Thanks for this workflow. One question:
when I run this model I get an error.
When it reaches the HighRes node, it gives this error:
got prompt
Failed to validate prompt for output 18:
* HighRes-Fix Script 14:
  - Value not in list: preprocessor: 'CannyEdgePreprocessor' not in ['_']
  - Value not in list: control_net_name: 'Control nets\DensePose.safetensors' not in []
  - Value not in list: use_controlnet: 'False' not in ['_']
  - Value -1 smaller than min of 0: seed
  - Value not in list: pixel_upscaler: 'ESRGAN\1x-AnimeUndeint-Compact.pth' not in ['ESRGAN_4x.pth']
Output will be ignored
Failed to validate prompt for output 10:
Output will be ignored
Prompt executed in 0.03 seconds
Getting the below error. I've tried a lot but it's not working. Can anyone help me?
[AnimateDiffEvo] - INFO - Loading motion module AnimateLCM_sd15_t2v.ckpt
[AnimateDiffEvo] - INFO - Loading motion LoRA pxlpshr_shatter_400.safetensors
!!! Exception during processing !!! Error while deserializing header: HeaderTooLarge
Traceback (most recent call last):
  File "/home/ubuntu/ai/ComfyUI/execution.py", line 317, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/ai/ComfyUI/execution.py", line 192, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/ai/ComfyUI/execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/home/ubuntu/ai/ComfyUI/execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/ai/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/nodes_gen1.py", line 146, in load_mm_and_inject_params
    motion_model = load_motion_module_gen1(model_name, model, motion_lora=motion_lora, motion_model_settings=motion_model_settings)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/ai/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/model_injection.py", line 1280, in load_motion_module_gen1
    load_motion_lora_as_patches(motion_model, lora)
  File "/home/ubuntu/ai/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/model_injection.py", line 1209, in load_motion_lora_as_patches
    state_dict = comfy.utils.load_torch_file(lora_path)