UPDATE - June 5th, 2025 - Just a quick announcement: v1.2 is in beta testing right now. The UI (user interface) has been completely redesigned, and I've also made some small changes to the image preview during the workflow and added the option to save every image generated along the way. I will also write a new description to introduce and explain the workflow. So... stay tuned (I'm not sure yet if I'll be able to upload the new WF next weekend).
Chroma is a new 8.9B-parameter model, still in active development, based on Flux.1 Schnell.
It’s fully Apache 2.0 licensed, ensuring that anyone can use, modify, and build on top of it.
CivitAI link: https://civitai.com/models/1330309/chroma
Like my HiDream workflow, this will let you work with:
- txt2img or img2img,
- Detail-Daemon,
- Inpaint,
- HiRes-Fix,
- Ultimate SD Upscale,
- FaceDetailer.
The model is still being trained, so there are many updated versions (as of May 15th, the latest is v29.5). You can find all the versions here: https://huggingface.co/lodestones/Chroma/tree/main
In brief, this model is:
- Trained on a 5M-sample dataset, curated from 20M samples including anime, furry, artistic content, and photos.
- Fully uncensored, reintroducing missing anatomical concepts.
- Built as a reliable open-source option for those who need it.
Being based on Flux.1 Schnell, it should run on low-VRAM GPUs, so you can easily use it locally.
You will need one of the t5xxl text encoder model files, which you can find in this repo: fp16 is recommended; if you don't have that much memory, fp8_scaled is recommended instead. Put it in the ComfyUI/models/text_encoders/ folder. The VAE is the same as FLUX or HiDream, so you should already have it.
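For reference, here is a minimal sketch of where the files go in a ComfyUI install. The filenames in the comments are assumptions (use whichever t5xxl and VAE files you actually downloaded); only the folder paths come from the description above.

```shell
# Make sure the folders ComfyUI expects actually exist
mkdir -p ComfyUI/models/text_encoders
mkdir -p ComfyUI/models/vae

# Hypothetical filenames -- adjust to the exact files you downloaded:
# mv t5xxl_fp16.safetensors ComfyUI/models/text_encoders/
# mv ae.safetensors          ComfyUI/models/vae/
```

After restarting ComfyUI (or refreshing the node lists), the encoder should appear in the loader node's dropdown.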