Mixing and shaking with Unsampler/latent interpolation and dual CFG seems to be a great way to really get the most out of any Flux GGUF. Tested with several dev versions and custom models based on dev, from Q2 to fp16, and it seems to work. Schnell does not work without further adaptation, but it's not too hard to get tensor versions working.
I've made this as a three-image grid, where the middle image is the assimilation of the left and right ones. The prompt is random/wildcard and will output unique prompts for generation with lots of possible outputs, with both minor and more profound variations - still in the same context of a car with a model posing beside it.
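The middle image's blend can be thought of as interpolating between the two side images' latents. A minimal NumPy sketch of linear and spherical interpolation, purely illustrative - the actual ComfyUI nodes and their exact math may differ:

```python
import numpy as np

def lerp(a, b, t):
    """Straight linear blend between two latents."""
    return (1.0 - t) * a + t * b

def slerp(a, b, t, eps=1e-8):
    """Spherical blend; often keeps more structure in diffusion latents."""
    a_flat, b_flat = a.ravel(), b.ravel()
    dot = np.dot(a_flat, b_flat) / (
        np.linalg.norm(a_flat) * np.linalg.norm(b_flat) + eps
    )
    omega = np.arccos(np.clip(dot, -1.0, 1.0))
    if omega < eps:  # vectors nearly parallel: fall back to lerp
        return lerp(a, b, t)
    return (np.sin((1.0 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

# Toy "latents" standing in for the left/right images' unsampled latents
left = np.random.default_rng(0).standard_normal((4, 8, 8))
right = np.random.default_rng(1).standard_normal((4, 8, 8))
middle = slerp(left, right, 0.5)  # blended latent for the center image
```

At t=0.5 the result sits "between" the two sources, which is the behavior the middle panel of the grid relies on.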
It's interesting how it all comes together: I get really nice results in the majority of generations, hence the optional caption that shows how well the model performs and adapts when mixed.
If you are missing the samples/flower wildcard text file, just replace it with any wildcard pattern of your choice.