
Dwayne Johnson aka The Rock FLUX Dev Fine-Tuning / DreamBooth and LoRA Models for Educational and Research Purposes - Full Tutorial

Type: Checkpoint Trained
Published: Nov 2, 2024
Updated: Nov 5, 2024
Base Model: Flux.1 D
Training: 4,760 steps, 170 epochs
Trigger Words: ohwx man
Hash (AutoV2): 7CBFAC7158
Tags: celebrity
Creator: SECourses
The FLUX.1 [dev] Model is licensed by Black Forest Labs. Inc. under the FLUX.1 [dev] Non-Commercial License. Copyright Black Forest Labs. Inc.
IN NO EVENT SHALL BLACK FOREST LABS, INC. BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH USE OF THIS MODEL.

I am sharing how I trained this model with full details and even the dataset: please read the entire post very carefully.

This model is trained purely for educational and research purposes and is intended only for SFW, ethical image generation.

The workflow and config used in this tutorial can be used to train clothing, items, animals, pets, objects, styles, or simply anything else.

The uploaded images include SwarmUI metadata and can be regenerated exactly. The FP16 model was used for these generations, but FP8 should yield almost the same quality. Keep in mind that a YOLO face masking model is used in the prompts.

How To Use

Download the model into SwarmUI's diffusion_models folder. You also need the CLIP-L and T5-XXL text encoder models; I recommend the T5-XXL FP16 or Scaled FP8 version.
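
If you prefer to script the download, here is a minimal sketch using huggingface_hub. It assumes the checkpoint is pulled from the Hugging Face repo linked further below; the exact file name and the SwarmUI folder path are assumptions you should adjust to your own installation:

```python
# Minimal sketch, not the official workflow: fetch one checkpoint and copy it
# into SwarmUI's model folder. File name and target path are assumptions.
import shutil
from pathlib import Path

from huggingface_hub import hf_hub_download

repo_id = "MonsterMMORPG/Model_Training_Experiments_As_A_Baseline"
filename = "Dwayne_Johnson_FLUX_Fine_Tuning-000170.safetensors"  # assumed name of the epoch-170 checkpoint

local_path = hf_hub_download(repo_id=repo_id, filename=filename)

# SwarmUI typically keeps full checkpoints under Models/diffusion_models;
# verify this against your own install location.
target_dir = Path("SwarmUI/Models/diffusion_models")
target_dir.mkdir(parents=True, exist_ok=True)
shutil.copy(local_path, target_dir / filename)
print("Copied to", target_dir / filename)
```

The CLIP-L and T5-XXL encoders are placed the same way into the corresponding text-encoder folder (usually Models/clip in SwarmUI).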

The newest fully public tutorial on how to use it is here:

I have trained both a FLUX LoRA and a Fine-Tuning / DreamBooth model.

Activation token / trigger word: ohwx man

Each training ran for up to 200 epochs, with a checkpoint saved once every 10 epochs and shared in the Hugging Face repo below: https://huggingface.co/MonsterMMORPG/Model_Training_Experiments_As_A_Baseline
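
To see which checkpoints are available, a short sketch like the following can list the repo contents; it only assumes the files are stored as .safetensors at the top level of that repo:

```python
# List the every-10-epoch checkpoints shared in the repo above.
from huggingface_hub import list_repo_files

repo_id = "MonsterMMORPG/Model_Training_Experiments_As_A_Baseline"

checkpoints = sorted(f for f in list_repo_files(repo_id) if f.endswith(".safetensors"))
for name in checkpoints:
    print(name)
```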

That repository contains experimental results comparing the Fine-Tuning / DreamBooth and LoRA training approaches.

Additional Resources

Environment Setup

  • Kohya GUI Version: 021c6f5ae3055320a56967284e759620c349aa56

  • Torch: 2.5.1

  • xFormers: 0.0.28.post3
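
A quick way to confirm that your local environment matches the versions listed above is a small check script (the expected values in the comments come from this list):

```python
# Sanity check of the training environment versions.
import torch
import xformers

print("Torch:", torch.__version__)        # expected: 2.5.1
print("xFormers:", xformers.__version__)  # expected: 0.0.28.post3
print("CUDA available:", torch.cuda.is_available())
```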

Dataset Information

  • Resolution: 1024x1024

  • Dataset Size: 28 images

  • Captions: "ohwx man" (nothing else)

  • Activation Token/Trigger Word: "ohwx man"
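
Since every caption is just the trigger phrase, the caption files can be generated automatically. Below is a minimal sketch assuming the 28 images sit in a hypothetical ./dataset folder and that Kohya-style sidecar .txt captions are used:

```python
# Write a Kohya-style caption .txt next to each image, containing only the
# trigger phrase. The ./dataset folder is an assumed location, not from the post.
from pathlib import Path

dataset_dir = Path("dataset")
trigger = "ohwx man"

for image in dataset_dir.glob("*"):
    if image.suffix.lower() in {".jpg", ".jpeg", ".png", ".webp"}:
        image.with_suffix(".txt").write_text(trigger, encoding="utf-8")
```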

Fine-Tuning / DreamBooth Experiment

Configuration

  • Config File: 48GB_GPU_28200MB_6.4_second_it_Tier_1.json

  • Training: Up to 200 epochs with consistent config

  • Optimal Result: Epoch 170 (subjective assessment)
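
As a sanity check, the epoch-170 checkpoint lines up with the 4,760 steps shown in the stats above, since one epoch with batch size 1 is one pass over the 28 images:

```python
# 28 images / batch size 1 = 28 steps per epoch; 170 epochs -> 4,760 steps.
images = 28
batch_size = 1
epochs = 170

steps_per_epoch = images // batch_size
print(steps_per_epoch * epochs)  # 4760
```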

Results

LoRA Experiment

Configuration

  • Config File: Rank_1_29500MB_8_85_Second_IT.json

  • Training: Up to 200 epochs

  • Optimal Result: Epoch 160 (subjective assessment)

Results

Comparison Results

Key Observations

  • LoRA demonstrates excellent realism but shows more obvious overfitting when generating stylized images.

  • Fine-Tuning / DreamBooth performs better than LoRA, as expected.

Model Naming Convention

Fine-Tuning Models

  • Dwayne_Johnson_FLUX_Fine_Tuning-000010.safetensors

    • 10 epochs

    • 280 steps (28 images × 10 epochs)

    • Batch size: 1

    • Resolution: 1024x1024

  • Dwayne_Johnson_FLUX_Fine_Tuning-000020.safetensors

    • 20 epochs

    • 560 steps (28 images × 20 epochs)

    • Batch size: 1

    • Resolution: 1024x1024

LoRA Models

  • Dwayne_Johnson_FLUX_LoRA-000010.safetensors

    • 10 epochs

    • 280 steps (28 images × 10 epochs)

    • Batch size: 1

    • Resolution: 1024x1024

  • Dwayne_Johnson_FLUX_LoRA-000020.safetensors

    • 20 epochs

    • 560 steps (28 images × 20 epochs)

    • Batch size: 1

    • Resolution: 1024x1024
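
The same convention applies to every saved checkpoint, so the epoch and step count can be read straight from the file name. A minimal sketch, assuming 28 images and batch size 1 as in this training:

```python
# Parse the 6-digit epoch suffix from a checkpoint file name and derive the
# step count from the dataset size and batch size described above.
import re

def epoch_and_steps(filename: str, images: int = 28, batch_size: int = 1):
    match = re.search(r"-(\d{6})\.safetensors$", filename)
    if not match:
        raise ValueError(f"Unexpected file name: {filename}")
    epochs = int(match.group(1))
    return epochs, (images // batch_size) * epochs

print(epoch_and_steps("Dwayne_Johnson_FLUX_LoRA-000020.safetensors"))  # (20, 560)
```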