Updated: Sep 19, 2024
The FLUX.1 [dev] Model is licensed by Black Forest Labs, Inc. under the FLUX.1 [dev] Non-Commercial License. Copyright Black Forest Labs, Inc.
IN NO EVENT SHALL BLACK FOREST LABS, INC. BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH USE OF THIS MODEL.
This model allows you to use the FLUX base model in ComfyUI as follows:
Choose between img2img and txt2img generation
Use LLM conditioning (requires Ollama and its models/nodes)
Use an LLM-based prompt modifier (see the sketch after this list)
You can combine these options in many ways.
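To get a feel for what the LLM prompt-modifier step does conceptually, here is a minimal sketch outside ComfyUI, assuming a local Ollama server on the default port with llama3.1 pulled; the instruction wording is my own illustration, not the prompt used by the workflow's nodes.

```python
# Minimal sketch of an LLM prompt modifier, assuming a local Ollama server on
# the default port (11434) with llama3.1 pulled. The instruction wording is
# illustrative only, not the prompt used inside the workflow.
import requests

def modify_prompt(prompt: str, model: str = "llama3.1") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": model,
            "prompt": "Rewrite this image prompt with richer visual detail, "
                      "keeping it under 80 words:\n" + prompt,
            "stream": False,  # return the full answer as one JSON object
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"].strip()

print(modify_prompt("a lighthouse at dusk, foggy coast"))
```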
You can specify different resolutions for img2img and txt2img. The main setting defines the size in megapixels and the aspect ratio; the other values are generated automatically.
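As a rough illustration, width and height can be derived from the megapixel budget and aspect ratio as below; snapping to multiples of 64 is my assumption of a common choice for FLUX, not necessarily what the workflow's resolution node does.

```python
# Sketch of deriving width/height from a megapixel budget and an aspect
# ratio (w:h). Snapping to multiples of 64 is an assumption, not necessarily
# what the resolution node in the workflow does.
import math

def resolution_from_mp(megapixels: float, aspect_w: int, aspect_h: int,
                       multiple: int = 64) -> tuple[int, int]:
    total_px = megapixels * 1_000_000
    ratio = aspect_w / aspect_h
    width = math.sqrt(total_px * ratio)   # from w * h = total_px and w / h = ratio
    height = width / ratio

    def snap(v: float) -> int:
        return max(multiple, int(round(v / multiple)) * multiple)

    return snap(width), snap(height)

print(resolution_from_mp(1.0, 16, 9))  # -> (1344, 768) for a 1 MP, 16:9 image
```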
You can specify a custom prompt, which will be used if LLM conditioning is not selected.
You can specify a seed value. Open the seed node in the image input group to change the settings.
The workflow is based on "ComfyUI: Flux with LLM, 5x Upscale (Workflow Tutorial)".
I've optimized it for better readability (at least for me), shortened the logic, and combined the main settings into one group. I left out the upscaling part.
My main motivation for building this workflow was the learning experience of working in ComfyUI.
Links for Models:
Flux.1 [dev]: https://huggingface.co/black-forest-l...
Flux.1 [schnell]: https://huggingface.co/black-forest-l...
t5xxl: https://huggingface.co/comfyanonymous...
ControlAltAI Nodes: https://github.com/gseth/ControlAltAI...
CivitAI LoRAs Used:
https://civitai.com/models/562866?mod...
https://civitai.com/models/633553?mod...
Ollama:
llama3.1: https://ollama.com/library/llama3.1
llava-llama3: https://ollama.com/library/llava-llama3
llava (alternate vision model): https://ollama.com/library/llava