v0.2;
Edit:
Found that the transition frame was getting duplicated again; fixed.
Added a video-to-mp4 converter workflow in case generation fails and you want to convert the latest merged .mkv file into a smaller file you can open.
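If you'd rather do that conversion manually outside ComfyUI (assuming ffmpeg is installed and on your PATH; the file names here are just placeholders), a command along these lines works:
ffmpeg -i merged.mkv -c:v libx264 -crf 18 -pix_fmt yuv420p merged.mp4
Lower -crf values mean higher quality and larger files; 18-23 is a reasonable range.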
I went ahead and did a few more experiments. Since I kind of liked subgraphs, I want them to be a little more mainstream and known :) So I moved model loading, as well as a few other options, inside the subgraphs so the community can get their hands dirty and the main page looks cleaner. It might be a little more complicated, but I've added a few notes to show you around, so don't hesitate to take a look.
I've implemented video merging. However, since the files are merged with the previous ones dynamically on each part, compression artifacts become a problem: the first parts get compressed many times over. To prevent this I've decided to save those intermediate parts with lossless quality as FFV1 MKV. There's still a final save node at the end of the workflow that saves the final output in h264 mp4 format.
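For reference, those intermediate saves are just FFV1 video in an MKV container; a rough manual equivalent of that kind of lossless encode (file names are placeholders) would be:
ffmpeg -i input.mp4 -c:v ffv1 -level 3 lossless.mkv
Because FFV1 is lossless, nothing degrades no matter how many times the early parts get re-saved during merging.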
With default settings, a 30s generation at 832x480 with 81 frames per part across 6 I2V parts (6 x 81 frames comes out to roughly 30 seconds at 16 fps) leaves less than 1GB of files in the temp folder (which is cleaned every time ComfyUI restarts), and you can also delete them manually once the generation is complete.
I've also added an interpolation node (bypassed by default) if you want to use it on the final output, but it takes a while for 6x81 frames.
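If that node is too slow for your taste, a quick-and-dirty alternative outside ComfyUI (generally lower quality than model-based interpolation) is ffmpeg's minterpolate filter on the final mp4, for example doubling 16 fps to 32:
ffmpeg -i output.mp4 -vf "minterpolate=fps=32" output_32fps.mp4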
Another thing I've implemented is saving the last images to the outputs folder, as well as a global seed option that applies to all samplers, in case you want to continue an old generation or regenerate with some parameters changed.
You might want to clean those out of the outputs folder every once in a while if you no longer need them, or disable the saving manually.
And thanks to the community for the feedback both here and on Reddit, I appreciate it. Would love to see some generations shared here as well :)
For now I probably won't publish another update in the near future unless I implement something really significant, like smoother transitions.
This workflow is simply an experiment with comfyui's new subnodes.
It works by feeding the last frame of one video generation as the first-frame input of the next generation. But instead of a huge spaghetti of nodes, you get a single I2V node that shares the same KSampler subnodes and iterates over and over again. You can think of it like FramePack, but in ComfyUI, and you can also prompt each generation separately. I kept the negatives common, but that could be made dynamic as well.
I did not implement a stitching option since I'm doing it in a basic video editor, but that could be added as well.
Edit: I forgot to save the output videos to the output folder since I decided not to implement merging them later on. So the files end up in the comfyui/temp folder. (This folder is cleaned up the next time you run ComfyUI.)
The first video in 0.1 is 834x480, sampled 1/3/3, and took 23 minutes to generate. The second one is 624x368, sampled 1/2/2, and took 13 minutes. The second video was also generated with the up-to-date version, which has the fix for last_frame showing twice.
Don't forget to update your ComfyUI frontend since old versions were quite buggy. It might still have some bugs, so be aware. Update command for the portable build:
.\python_embeded\python.exe -m pip install comfyui_frontend_package --upgrade
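If you're not on the portable build, the same upgrade should work from whatever Python environment your ComfyUI install uses (assuming pip is available there):
python -m pip install --upgrade comfyui_frontend_package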
The sampling process is 1 + 3 + 3, where the first step has no speed LoRA applied and the others are basic high + low samplings with the speed LoRA applied. Everything is pretty much customizable, but remember: "KSAMPLER SUBNODE IS USED BY ALL MAIN NODES, SO CHANGES APPLY TO ALL OF THEM AT ONCE!" The same goes for the I2V_latent subnode if you want to change the output resolution and the length of each part.
To extend the generation, simply click on one of the I2V subnodes, copy it, and hit Ctrl + Shift + V to paste it with all connections; then connect the last image of your previous node into its start_image input. You can also connect a Load Image node to the initial I2V start_image input to bypass the T2V generation (don't bypass nodes using the keyboard shortcut, it might break the subnode).
I couldn't get it to work with native models since it kept crashing on my system, so everything is implemented with GGUF quantized models. Feel free to change the process and disable the patch sage attention and compile model nodes from the model loader subnode, but the speed hit is noticeable.
On my 4070 Ti with sage++ and torch compile enabled, T2V + 6x I2V (30s total) took about 20-25 minutes.
Hope we get even better workflows from the community in the future :)