Changes in v2: Sorry it's a bit messy, I wanted to update it but I don't have much time to clean it up. Please let me know if there are bugs. I took an axe to a lot of stuff, seems ok to me but there could be some landmines.
Images only for the loader. I'm just not using video sources often enough to justify all of the extra logic; it just gets in the way. I uploaded a WF that pulls frames in batches, if you're lacking them.
The custom LoRA Manager is used here - I had to squish the whole LoRA section up so you don't need a microscope to view the workflow image. Expanded, the group is as large as the rest of the workflow. You can load in every LoRA you have, and sort them into useful groups to toggle with the fast bypasser. Idiot checks for loaded models and trigger words are on the right. Just expand the minimized nodes, select them all and tell them to arrange themselves nicely.
The model group toggles between two choices: the default is regular I2V with a lightning LoRA, the alternative is Smooth Mix.
Switching between I2V and FLF uses both the fast bypasser AND the button. E.g. for I2V: the I2V button ON and the I2V toggle set to YES; the opposite for FLF.
I was going to post this as an update to my previous FLF workflow, but it's now quite different. It now also runs regular I2V, with the option of continuation. I have tried so many ways of doing FLF, all the ways, with all the models. I keep coming back to plain old 2.2, with lightx2v. You can bridge almost anything. *I don't know why, but my CRT toggle buttons are not showing up in the workflow images. I will upload a screenshot - it won't have an embed of course, but you can see what goes in the holes in the images.* The colors below correspond roughly to the colors I set to make the buttons unambiguous.
I whittled this down from a much larger setup that also includes 2.1, VACE and FiP. That's way, way too buggy to share, so I made it all 2.2 so the model doesn't need to change for any of the operations. I think I snipped all of the loose ends, but there might be vestigial inputs here and there. I don't think they will cause any problems; I ran through every possible permutation of inputs and it's all good on my end. I tried to brightly color everything with widgets that might need attention. I'm not sure how colors carry between setups though - it probably depends a lot on your custom nodes. Well, at least they can be seen in the WF images. I included JSONs with the different modes enabled. Not that it's ambiguous - just two clicks: turn one button on, turn one off. That's it.
You can set any combination of possible inputs: image to image, last frame of a video back to its first frame for an infinite loop, last frame of one video to the first frame of another for a bridge between them, continuing from the last frame of your most recent run, etc. You could pull a frame from anywhere in a video as well, though I didn't make a specific module for it. The filing system is set up so that your source names are cleaned up automatically and appended where appropriate, to keep track of what goes where. There is one caveat - see below about the VHS upload choice.
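For the curious, the name-cleaning step amounts to something like the sketch below. The function name and the exact character rules are my assumptions, not the workflow's actual nodes - it just illustrates the idea of stripping the extension and unsafe characters before tagging the result:

```python
import re

def clean_source_name(raw: str, suffix: str = "") -> str:
    # Hypothetical helper: drop the extension, replace runs of
    # unsafe characters with underscores, then append a mode tag.
    stem = raw.rsplit(".", 1)[0]
    stem = re.sub(r"[^A-Za-z0-9_-]+", "_", stem).strip("_")
    return f"{stem}{suffix}"

print(clean_source_name("my clip (final).mp4", "_FLF"))  # my_clip_final_FLF
```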
To continue an I2V after the first generation, keep the I2V button on and enable the CONTINUE button to the left. It will pull the last saved image out of your FLF directory and run it. After the first load, you can queue up as many generations as you like; you don't have to wait for anything to end. Before the last click, switch to FLF with the END source set to your original starting image. Bear in mind that quality drops as time increases, and the rate of decay increases as your latent size decreases. The default directory is yourcomfydefaultfolder/FFLF. You can change this in the ORANGE box in the Filename by Function group.
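"Pull the last saved image" here means grabbing the newest file in that directory. A minimal sketch of that logic (the function name is mine, and I'm assuming PNG output):

```python
from pathlib import Path

def latest_frame(flf_dir):
    # Return the most recently modified PNG in the FLF directory,
    # or None if the directory has no frames yet (e.g. the first run).
    frames = [p for p in Path(flf_dir).glob("*.png") if p.is_file()]
    return max(frames, key=lambda p: p.stat().st_mtime, default=None)
```

The `default=None` case matters: on a fresh directory there is nothing to continue from, which is why the first run needs a real start image.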
For I2V you obviously only need to load a start image. However, the first time you run the WF, missing inputs in the other spots may cause errors that require a dummy input to clear. For the VHS load nodes (drag and drop), just give it anything - a tiny png should suffice, I think. Same with the path loaders (orange).
The sources in the WF are symmetrical and all the switching is the same: the START sources are on the left, the END sources on the right. There are two choices if you switch from an Image to a Video source: upload or path.
Note that you must load videos TWICE per group if you're using the VHS upload video node. This is because a node can't reference itself to extract the final frame - that would be a circular dependency. The first loader just provides the video info for the second one. Don't confuse the two loaders within one group for start/end loaders; Start and End have their own groups, left and right respectively. PATH doesn't need the duplication, as it uses a string that feeds two nodes.
I haven't figured out how to extract the file name from the VHS video loader. Very annoying. So you'll have to supply names for each end yourself - the automatic naming system can't grab the source name on its own. If you don't care, it will just use whatever is in the RED boxes next to the loaders.
All of this is still built around individual generations. There is no way to automate the editing process such that the result is better than manually editing. There's always something you need to trim or ramp to make a presentable composition. I know everyone loves the 3 second porn, but I hate it so much. And it's so easy to make your stuff suck less. Anyway if you just want to merge your outputs, use Shutter Encoder. It's free and it's awesome. And instantaneous, whereas stitching in comfy takes much longer.
Pingpong is turned on in the embedded images because I hate cuts, and I want the videos for this workflow to show up (a ton of my edited uploads just don't get rated so they're invisible to everyone - I don't know why the limit is 4 minutes if everything that I upload that is longer than 1:30 seems to never get rated unless I start a ticket). Whatever, if you see my workflows but no/little media in the gallery, this is why. Also, what does get rated always goes to XXX. Maybe the site sucks less if you pay for it. I'm not going to find out. I'm just killing time.
The last frame saver spits out a tiny dummy image in the FLF directory when running in FLF mode. I had some issues when trying to bypass it for this mode, so this is a workaround. If you can get the bypass working yourself, you can delete all the nodes except for image list and image batch.
There are plenty of notes in the WF as well. Anyone who's even half-familiar with comfy will have no problem using it. Nobody seems to be interested in this stuff, which blows my mind, as FLF is the only thing that makes video generation actually useful, but such is life. Use it or don't. I don't want buzz, I don't want thumbs. I just want people to make something watchable, and upload it to interrupt the stream of meh once in a while. Mine won't post, maybe yours will.
*Stuff I forgot to mention on first post*
1) The light blue node with two 'unsharp' integers in it (which I called desharp in the title box for some reason) applies sharpening to your input images after resize. That can help with slightly blurry inputs, but if left on for continuations it can cumulatively add to degradation. I meant to default it to zero, but I think it's actually defaulted to 0.15. Set both to 0 if it negatively affects the latent input.
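Why continuations degrade: unsharp masking adds back a scaled high-frequency residual, so feeding a sharpened frame back in sharpens it again each pass. A rough numpy sketch of the idea (a 3x3 box blur stands in for whatever kernel the node actually uses - that's my assumption, not the node's implementation):

```python
import numpy as np

def unsharp(img, amount=0.15):
    # Unsharp mask on a 2-D grayscale array in [0, 1]: blur with a
    # 3x3 box filter, then add back amount * (image - blurred).
    # amount=0 is a no-op; with amount>0, each continuation pass
    # re-sharpens the previous pass and the effect compounds.
    padded = np.pad(img, 1, mode="edge")
    blurred = sum(padded[i:i + img.shape[0], j:j + img.shape[1]]
                  for i in range(3) for j in range(3)) / 9.0
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)
```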
2) The green 'set final upscale resolution' node has logic to detect the longest side. It really should only have one input; I forgot to fix that. If you switch your starting resolution to portrait orientation, DO NOT do so for this box. For example, if you switch from 1280Wx720H to 720Wx1280H, LEAVE THE UPSCALE RESOLUTION IN LANDSCAPE (Long W x Short H). The logic compares the decoded sizes to determine the aspect ratio - at least, that's how it's supposed to work; I may have made a mistake here. If you have issues, just disconnect the inputs to the final resize node and enter the resolution manually.
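The intended longest-side logic is roughly the following (my sketch, with hypothetical names, not the node's actual code): you always enter the target as landscape, and the source's orientation decides whether it gets flipped.

```python
def upscale_target(src_w, src_h, long_side=1920, short_side=1080):
    # Hypothetical sketch: the target is entered as Long W x Short H
    # (landscape); the longer target dimension is assigned to whichever
    # source dimension is longer, so portrait sources come out portrait.
    long_t, short_t = max(long_side, short_side), min(long_side, short_side)
    return (long_t, short_t) if src_w >= src_h else (short_t, long_t)

print(upscale_target(720, 1280))  # portrait source -> (1080, 1920)
```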
3) Underneath the supersampling resize node is a regular v2 resize node as an alternate. It's supposed to be bypassed by default, but I don't think it is in the upload. Comfy logic dictates that the v2 node should not run, as it has no outputs, but check it just to be sure. The supersampling resize takes significantly longer to run than v2, so use the latter if you like. Hold down shift and drag the image output dot to v2 to move all of the connections at once.

