The model 👑 Game of Thrones is based on the first three episodes of HBO's TV show Game of Thrones. As a fan of the show, I thought it would be interesting to reimagine it with a Stable Diffusion (SD) model. The main goal of the model is to replicate the show's characters with high fidelity. Given the large number of characters, interactions, and scenes the show presents, this was quite a challenging endeavor. The images showcased above are outputs of the model.
Training included 9k images focused on characters' faces (50 subjects in total) and 4k images from different scenes. Additionally, 30k images were used as regularization images - medieval-themed images as well as half of the ❤️🔥 Divas dataset. Ultimately, the training was stabilized by merging with the 💖 Babes 2.0 model.
Overall, the model's development spanned three weeks, with GPU training on an RTX 4090 taking 3.5 days.
First, I obtained a 4K (3840 x 2160px) version of the first three episodes of the show. 4K frames allow the extraction of relatively small faces at a resolution still higher than 768x768px, which is our base training resolution. The aspect ratio doesn't have to be 1:1, as training automatically scales the images down to fit the target training area.
To obtain images from the video, I used ffmpeg, extracting four frames from each second of the video using the following command for each episode:
ffmpeg -hwaccel cuda -i "/path_to_source/video_S01E01.mkv" -vf "setpts=N/FRAME_RATE/TB,fps=4,mpdecimate=hi=8960:lo=64:frac=0.33,zscale=t=linear:npl=100,format=gbrpf32le,zscale=p=bt709,tonemap=tonemap=hable:desat=0,zscale=t=bt709:m=bt709:r=tv,format=yuv420p" -pix_fmt yuv420p -q:v 3 "/path_to_target/S01E01_extract/s01_e01_%06d.jpg"
Given that the video source used an HDR format with a unique color profile, the above command ensures correct color representation for the extracted images. The command also aims to keep only distinct frames via mpdecimate, though its thresholds may behave differently at 4K resolution, which might have affected how well duplicate frames were filtered out.
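To avoid retyping that command for every episode, a small driver script can assemble it per episode. This is a sketch, not part of my repository; the paths and output naming mirror the example above, so adjust them to your layout.

```python
import subprocess

SOURCE = "/path_to_source"   # hypothetical paths - adjust to your layout
TARGET = "/path_to_target"

# The same filter chain as the command above (HDR tonemapping + frame dedup).
VF = ("setpts=N/FRAME_RATE/TB,fps=4,mpdecimate=hi=8960:lo=64:frac=0.33,"
      "zscale=t=linear:npl=100,format=gbrpf32le,zscale=p=bt709,"
      "tonemap=tonemap=hable:desat=0,zscale=t=bt709:m=bt709:r=tv,format=yuv420p")

def build_ffmpeg_cmd(episode):
    """Assemble the extraction command for one episode, e.g. 'S01E01'."""
    prefix = episode.lower().replace("e", "_e", 1)  # S01E01 -> s01_e01
    return ["ffmpeg", "-hwaccel", "cuda",
            "-i", f"{SOURCE}/video_{episode}.mkv",
            "-vf", VF, "-pix_fmt", "yuv420p", "-q:v", "3",
            f"{TARGET}/{episode}_extract/{prefix}_%06d.jpg"]

for ep in ("S01E01", "S01E02", "S01E03"):
    cmd = build_ffmpeg_cmd(ep)
    # subprocess.run(cmd, check=True)  # uncomment to actually run the extraction
```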
From this process, we extracted 41k images, which then required further filtering and adaptation for our dataset.
The primary objective of the training is character training, with a focus on faces. Therefore, I had to extract all faces from the initial set of 41k images. My GitHub repository contains a crop_to_face.py script that I used to extract all the faces into a separate folder, with the command:
python3 crop_to_face.py --source_folder "/path_to_source/S01E01-03_extract/" --target_folder "/path_to_target/S01E01-03_faces/"
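The detection itself is the script's job, but the sizing logic can be illustrated on its own. Below is a minimal sketch (not the actual crop_to_face.py code) of how a detected face box might be expanded into a square training crop that adds context around the face and respects the 768x768px floor mentioned earlier:

```python
def expand_crop(face_box, frame_w, frame_h, min_side=768, margin=0.4):
    """Expand a detected face box (x, y, w, h) into a square crop that
    adds `margin` context on each side, never drops below min_side,
    and stays inside the frame."""
    x, y, w, h = face_box
    side = max(int(max(w, h) * (1 + 2 * margin)), min_side)
    side = min(side, frame_w, frame_h)          # crop can't exceed the frame
    cx, cy = x + w // 2, y + h // 2             # center the crop on the face
    left = min(max(cx - side // 2, 0), frame_w - side)
    top = min(max(cy - side // 2, 0), frame_h - side)
    return left, top, side, side

# A 200x200 face near the center of a 4K frame yields a 768px square crop:
print(expand_crop((1800, 900, 200, 200), 3840, 2160))  # (1516, 616, 768, 768)
```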
This command ran for a while, eventually producing 13k images with faces.
With images that look like these:
Data preparation presented two primary challenges: dealing with blurry images and effectively classifying face-to-name.
Blurriness in Images: Many images extracted from the TV show display varying degrees of blur, which negatively impacts the training process and could push the model toward generating mostly blurry images. I wanted an algorithm to automatically filter out and discard these blurry images. My attempt can be seen in the images_filter_blurry.py script, where I tried three distinct algorithms to detect face blur. Unfortunately, my tests on a sample dataset didn't establish a reliable correlation between the algorithms' blur scores and the actual perceptual blurriness seen on manual inspection. Combining the algorithms didn't yield better results either. While some articles point to dedicated models trained for blur detection, I wasn't able to acquire such a model for my tests.
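For reference, one widely used blur heuristic is the variance of the Laplacian: sharp images produce strong edge responses with high variance, while blurry images don't. Here is a dependency-free sketch of that score; the actual images_filter_blurry.py may use different algorithms, and as noted above, no score I tried correlated reliably with perceived blur.

```python
def laplacian_variance(gray):
    """Blur score: variance of a 3x3 Laplacian response over the image.
    Low variance suggests few sharp edges, i.e. a blurrier image.
    `gray` is a 2D list of pixel intensities (0-255)."""
    h, w = len(gray), len(gray[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (gray[y-1][x] + gray[y+1][x] + gray[y][x-1] + gray[y][x+1]
                   - 4 * gray[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

sharp = [[255 if (x + y) % 2 else 0 for x in range(8)] for y in range(8)]  # checkerboard
flat = [[128] * 8 for _ in range(8)]                                       # no edges at all
print(laplacian_variance(sharp) > laplacian_variance(flat))  # True
```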
Face Classification: the training process requires that all images of a specific individual be stored in a single directory, so that the number of training images per subject can be controlled. I tried automating face-to-name classification with the sort_images_by_faces.py script. While it had some success, the high rate of misclassifications made a manual review inevitable. Given this, I found it more efficient to manually categorize images from a single directory rather than navigate through 50 separate ones.
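A common approach to face-to-name classification is to compare face embeddings against per-subject centroids and reject weak matches. The sketch below shows only the matching step, on toy vectors; in practice the embeddings would come from a face-recognition model, and sort_images_by_faces.py may work differently. The names and threshold are illustrative.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def classify(embedding, centroids, threshold=0.5):
    """Assign a face embedding to the most similar subject centroid,
    or 'unknown' if no centroid is similar enough."""
    name, score = max(((n, cosine(embedding, c)) for n, c in centroids.items()),
                      key=lambda t: t[1])
    return name if score >= threshold else "unknown"

# Toy centroids - real ones would be averaged embeddings per subject folder.
centroids = {"ned": [1.0, 0.1, 0.0], "arya": [0.0, 1.0, 0.2]}
print(classify([0.9, 0.2, 0.1], centroids))  # → ned
```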
In conclusion, the tasks of detecting blur and classifying faces demanded extensive manual oversight and were time-consuming. My workflow involved meticulously reviewing files in a file browser and relocating them to appropriately named folders. Using a file browser with thumbnails for both files and folders was useful. Hopefully, in the future, tools to automate these processes will be available, potentially allowing an easy video-to-SD training pipeline.
Besides faces, I wanted the model to be familiar with outfits and scenes. To achieve this, I used a subset of the initially extracted frames without cropping them: I ran the move_random_files.py script on the 41k images from the initial extraction to move 5k random images, which became the foundation for scenes. I manually filtered these selected images during the captioning stage.
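A minimal stdlib version of what such a random-selection step does (the actual move_random_files.py may differ in options and behavior):

```python
import random
import shutil
from pathlib import Path

def move_random_files(source, target, count, seed=None):
    """Move `count` randomly chosen files from `source` into `target`."""
    src, dst = Path(source), Path(target)
    dst.mkdir(parents=True, exist_ok=True)
    files = [p for p in src.iterdir() if p.is_file()]
    random.Random(seed).shuffle(files)
    for p in files[:count]:
        shutil.move(str(p), str(dst / p.name))

# Usage (paths are illustrative):
# move_random_files("/path_to_target/S01E01-03_extract/", "/path_to_target/scenes/", 5000)
```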
For faces, the images were already separated into folders, and the folder names were used as the first tag in the captions. When an image included another face, I used my script with a graphical component to add the additional names.
For scenes, I filtered out blurry images and added captions noting all the people present in each scene. My script lets users apply labels - essentially name tags that are added or removed through a graphical interface. I filtered out around 1k blurry and otherwise bad images, and captioned all the names in the remaining images.
The tag "game of thrones" was added to all captions, which can be used as a style in the prompt.
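Putting these captioning conventions together - folder name as the first tag, then the global style tag, then any extra name labels - sidecar .txt caption files could be generated roughly like this. This is a sketch, not my actual tooling; the helper name and the example folder layout are illustrative.

```python
from pathlib import Path

def write_caption(image_path, extra_tags=()):
    """Write a sidecar .txt caption next to the image: the parent folder
    name (the subject) first, then the global style tag, then any extra
    name labels for other people in the frame."""
    img = Path(image_path)
    tags = [img.parent.name, "game of thrones", *extra_tags]
    caption = ", ".join(tags)
    img.with_suffix(".txt").write_text(caption)
    return caption

# e.g. /faces/jon_snow/s01_e01_000123.jpg -> "jon_snow, game of thrones, tyrion"
```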
Then I used a WebUI extension with the WD14 tagger to append the rest of the captions automatically.
I also added a few thousand regularization images, mainly medieval-themed and nature-only images. There are scripts in my repository that can help to obtain such images. These images were captioned automatically.
Images dedicated to validation should be placed in a separate folder. They are not used for training but are vital to monitor that the training process is genuinely learning and not merely overfitting on the dataset. I set aside 20 random images from the faces of the 10 subjects with the most images.
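Reading the 20 held-out images as spread across the 10 largest subjects (2 per subject), the selection could be sketched as follows; subject names, counts, and the exact split are placeholders for illustration.

```python
import random

def pick_validation(images_by_subject, top_n=10, per_subject=2, seed=0):
    """Hold out validation images from the subjects with the most images.
    `images_by_subject` maps subject name -> list of image paths."""
    rng = random.Random(seed)
    ranked = sorted(images_by_subject,
                    key=lambda s: len(images_by_subject[s]), reverse=True)
    held_out = {}
    for subject in ranked[:top_n]:
        held_out[subject] = rng.sample(images_by_subject[subject], per_subject)
    return held_out

# The returned images would then be moved to the separate validation folder.
```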
The TV show, and consequently the dataset, leans towards darker images. To address this, I used the images_auto_adjust_exposure_and_contrast.py script, which generates four variations of each original image by randomly tweaking exposure and contrast values, quadrupling the dataset's size (excluding the regularization images). To keep the dataset size manageable, I first downsized the images with the images_downsize_and_filter_small.py script, and afterwards duplicated the caption files to match the new image variations.
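The per-pixel math behind such variations is simple: a contrast stretch around mid-gray plus an exposure offset, clamped to the valid range. A toy sketch on flat grayscale lists - the real script works on image files, and its exact random ranges differ; the ranges below are illustrative and biased toward brightening, to counter the dark dataset.

```python
import random

def adjust_pixel(p, exposure, contrast):
    """Contrast stretch around mid-gray, then an exposure shift,
    clamped to the valid 0-255 range."""
    v = (p - 128) * contrast + 128 + exposure
    return max(0, min(255, round(v)))

def make_variations(pixels, n=4, seed=0):
    """Produce n randomly exposure/contrast-adjusted copies of an image
    (represented here as a flat list of grayscale values)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        exposure = rng.uniform(10, 60)    # push toward brighter images
        contrast = rng.uniform(0.9, 1.2)
        out.append([adjust_pixel(p, exposure, contrast) for p in pixels])
    return out

variants = make_variations([30, 60, 90, 200])
print(len(variants))  # 4 variations per source image
```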
Example of varying exposure and contrast for a single image:
I rely on a remote server for training, which unfortunately has a slow direct connection. To work around this, I use Hugging Face, which enables much faster uploads and downloads of large files: I zipped the dataset and uploaded it to a private Hugging Face repository, making it ready for training.
I'm using the EveryDream2 trainer, running on a remote server from vast.ai. For this model, I've exclusively used RTX 4090 GPUs. Although numerous training settings can be adjusted, I'll only mention the most important ones: a Unet learning rate of 7e-7, a Text Encoder (TE) learning rate of 5e-8, and a pulsing cosine scheduler with a 0.5-2 epoch cycle. I also enabled the tag shuffling option.
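A "pulsing cosine" schedule is a cosine decay that restarts each cycle: the learning rate starts high, falls along a cosine curve, then jumps back up. EveryDream2's exact implementation may differ; this sketch just shows the general shape, with the cycle length and step counts as assumptions.

```python
import math

def pulsing_cosine_lr(step, steps_per_cycle, base_lr, min_lr=0.0):
    """Cosine learning-rate pulse restarting every `steps_per_cycle` steps:
    starts at base_lr, decays toward min_lr, then jumps back up."""
    t = (step % steps_per_cycle) / steps_per_cycle
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * t))

# With a hypothetical 1000-step cycle and the Unet LR from the text:
print(pulsing_cosine_lr(0, 1000, 7e-7))    # cycle start: full 7e-7
print(pulsing_cosine_lr(500, 1000, 7e-7))  # mid-cycle: half of that
```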
The base model for this training was a custom merge with ❤️🔥 Divas training. While you can choose any base model for your training, it's generally advisable to select a base model that closely aligns with your training dataset or a custom mix that incorporates some desired features.
In this training, I wanted to test the theory that it's more effective to pre-train the TE first and then train the Unet with the pre-trained TE frozen.
Stage 1 - Purpose: TE pre-training.
Starting checkpoint: base checkpoint.
Training: training TE and Unet for 140 epochs.
Training focus: GOT faces + scenes.
Multipliers: GOT subjects with a significant number of images were trained at 40 images per subject per epoch; subjects with fewer images at 8 or 4 images per subject per epoch.
Stage 2 - Purpose: Unet core training.
Starting checkpoint: TE from stage 1, and Unet from the base model.
Training: training Unet while TE is frozen - for 80 epochs.
Training focus: GOT faces + scenes.
Multipliers: GOT subjects with a significant number of images were trained at 30 images per subject per epoch; subjects with fewer images at 8 or 4 images per subject per epoch.
Stage 3 - Purpose: Unet normalization.
Starting checkpoint: the last checkpoint from stage 2.
Training: training Unet while TE is frozen - for 200 epochs.
Training focus: The dataset was expanded to include half of ❤️🔥 Divas dataset. The primary focus was on the ❤️🔥 Divas dataset while also giving some attention to the preservation of GOT faces and scenes.
Multipliers: GOT subjects with a significant number of images were trained at 8 images per subject per epoch; subjects with fewer images at 4 or 2 images per subject per epoch.
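EveryDream2 controls per-folder sampling through a multiplier file (check the trainer's documentation for the exact mechanism). The per-subject targets above translate into fractional multipliers roughly like this; the folder sizes in the example are hypothetical.

```python
def folder_multiplier(n_images, target_per_epoch):
    """Fractional multiplier so a subject folder contributes about
    target_per_epoch images each epoch, regardless of folder size.
    A value below 1.0 subsamples the folder; above 1.0 repeats it."""
    return round(target_per_epoch / n_images, 3)

# Stage 1: a well-represented subject vs. a rarer one (hypothetical counts)
print(folder_multiplier(400, 40))  # 0.1
print(folder_multiplier(20, 8))    # 0.4
```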
Relying solely on the last checkpoint from training isn't always optimal due to the issues of input noise and biases introduced during the training process. To mitigate these problems, I merged various epochs from each training stage with the 💖 Babes 2.0 model, which served as the mixing core.
I utilized my model evaluation test to assess various merge combinations, aiming to determine the most effective merge ratios. This step is exploratory and requires the creation and assessment of multiple merge ratios to optimize traits in the final model.
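At its core, a checkpoint merge is a key-by-key weighted average of the models' weights. The sketch below shows the idea on plain floats rather than real tensors; the checkpoint names and values are illustrative.

```python
def merge_weights(ckpt_a, ckpt_b, alpha):
    """Weighted average of two checkpoints: alpha of A plus (1 - alpha)
    of B, applied key by key. Real merges do this over tensors."""
    return {k: alpha * ckpt_a[k] + (1 - alpha) * ckpt_b[k] for k in ckpt_a}

stage_ckpt = {"unet.w": 0.8, "te.w": 0.2}   # a training-stage checkpoint
mixing_core = {"unet.w": 0.4, "te.w": 0.6}  # the mixing-core model
print(merge_weights(stage_ckpt, mixing_core, 0.5))  # halfway between the two
```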
I'm uncertain if the training strategy I implemented is the best approach. My goal was to test a pre-trained TE strategy, but it remains unclear whether it's superior or inferior to the combined TE+Unet training. Moving forward, I plan to start with a TE+Unet training phase and subsequently freeze the TE while continuing Unet training - without disregarding the Unet progress from the initial phase.
Darkness - Even with my efforts to counter the dataset's dark bias by introducing random exposure and contrast variations, generated characters often appear slightly too dark. Using "game of thrones" in the prompt often results in darker images, while putting it in the negative prompt tends to produce brighter images. Training with more episodes might lessen this dark bias, but this remains to be verified.
Automation Goal - I aspire to fully automate the entire process of converting video to an SD model. However, challenges like blurriness and the absence of a reliable face-to-name classification make it currently infeasible. The need for manual filtering and captioning makes the process both lengthy and labor-intensive. I'm optimistic that future advancements will allow for a more streamlined video-to-SD-model conversion. This would potentially speed up the creation of fast and high-quality fan fiction, visual novels, concept art, and, given advancements in image-to-video technology, even aid in creating videos, music clips, short films, and movies.
I have been developing and training SD models for the last 10 months.