alimama-creative / FLUX.1-Turbo-Alpha

Type: LoRA
Format: SafeTensor
Published: Oct 26, 2024
Base Model: Flux.1 D
Hash: AutoV2 77F7523A5E
The FLUX.1 [dev] Model is licensed by Black Forest Labs, Inc. under the FLUX.1 [dev] Non-Commercial License. Copyright Black Forest Labs, Inc.
IN NO EVENT SHALL BLACK FOREST LABS, INC. BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH USE OF THIS MODEL.

Chinese README

This repository provides an 8-step distilled LoRA for the FLUX.1-dev model, released by the AlimamaCreative Team.

Description

This checkpoint is an 8-step distilled LoRA trained on top of the FLUX.1-dev model. We use a multi-head discriminator to improve the distillation quality. The model can be used for T2I, inpainting, ControlNet, and other FLUX-related models. The recommended settings are guidance_scale=3.5 and lora_scale=1. A lower-step version will be released later.

  • Text-to-Image.

How to use

diffusers

This model can be used directly with diffusers:

import torch
from diffusers.pipelines import FluxPipeline

model_id = "black-forest-labs/FLUX.1-dev"
adapter_id = "alimama-creative/FLUX.1-Turbo-Alpha"

# Load the FLUX.1-dev base model in bf16.
pipe = FluxPipeline.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16
)
pipe.to("cuda")

# Load the 8-step turbo LoRA and fuse it into the base weights.
# fuse_lora() defaults to lora_scale=1.0, the recommended setting.
pipe.load_lora_weights(adapter_id)
pipe.fuse_lora()

prompt = "A DSLR photo of a shiny VW van that has a cityscape painted on it. A smiling sloth stands on grass in front of the van and is wearing a leather jacket, a cowboy hat, a kilt and a bowtie. The sloth is holding a quarterstaff and a big book."

# 8 inference steps match the distillation target; keep guidance_scale at 3.5.
image = pipe(
    prompt=prompt,
    guidance_scale=3.5,
    height=1024,
    width=1024,
    num_inference_steps=8,
    max_sequence_length=512,
).images[0]
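
The description above also mentions inpainting and ControlNet use. As a minimal sketch (not from this card), the fused LoRA can be tried with diffusers' FluxInpaintPipeline in the same way; the input image and mask paths below are hypothetical placeholders:

import torch
from diffusers import FluxInpaintPipeline
from diffusers.utils import load_image

# Same base model and turbo LoRA as in the text-to-image example.
pipe = FluxInpaintPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("alimama-creative/FLUX.1-Turbo-Alpha")
pipe.fuse_lora()
pipe.to("cuda")

init_image = load_image("input.png")  # hypothetical input image
mask_image = load_image("mask.png")   # white = region to repaint

image = pipe(
    prompt="a smiling sloth wearing a leather jacket",
    image=init_image,
    mask_image=mask_image,
    guidance_scale=3.5,
    num_inference_steps=8,  # keep the distilled step count
    strength=0.85,
    max_sequence_length=512,
).images[0]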

ComfyUI

Training Details

The model is trained on 1M images from open-source and internal sources, filtered for an aesthetic score of 6.3+ and a resolution greater than 800. We use adversarial training to improve quality. Our method freezes the original FLUX.1-dev transformer and uses it as the discriminator backbone, adding multiple heads to every transformer layer. We fix the guidance scale at 3.5 during training and use a time shift of 3.
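
To make the setup above concrete, here is a minimal, hypothetical sketch of a per-layer discriminator head and the timestep shift. The card does not specify the head architecture, so the small MLP below is an assumption, and the shift function assumes "time shift as 3" refers to the standard SD3/Flux timestep-shift mapping:

import torch
import torch.nn as nn

def shift_timesteps(t: torch.Tensor, shift: float = 3.0) -> torch.Tensor:
    # SD3/Flux-style timestep shift for t in [0, 1]; assumed to be the
    # "time shift as 3" referenced above.
    return shift * t / (1.0 + (shift - 1.0) * t)

class DiscriminatorHead(nn.Module):
    # Hypothetical head attached to one frozen FLUX.1-dev transformer
    # layer; the card only says "multi heads to every transformer layer",
    # not what each head looks like.
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.LayerNorm(hidden_dim),
            nn.Linear(hidden_dim, hidden_dim),
            nn.SiLU(),
            nn.Linear(hidden_dim, 1),  # per-token real/fake logit
        )

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_dim) tapped from a frozen
        # transformer layer; average logits over tokens.
        return self.net(hidden_states).mean(dim=1)

Only the heads would receive gradient updates in this setup; the backbone stays frozen, which matches the card's description of reusing FLUX.1-dev as the discriminator.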

  • Mixed precision: bf16
  • Learning rate: 2e-5
  • Batch size: 64
  • Image size: 1024x1024