
Scavengers Reign Flux

Verified: SafeTensor
Type: LoRA
Published: Aug 20, 2024
Base Model: Flux.1 D
Training Steps: 4,000
Usage Tips: Strength 1
Trigger Words: Flat Art
Hash: AutoV2 C912BC7B61
Creator: jerel
The FLUX.1 [dev] Model is licensed by Black Forest Labs, Inc. under the FLUX.1 [dev] Non-Commercial License. Copyright Black Forest Labs, Inc.
IN NO EVENT SHALL BLACK FOREST LABS, INC. BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH USE OF THIS MODEL.

This model was trained on 209 random screenshots from the TV show Scavengers Reign (2023) on MAX. I used GPT-4o for captioning.
I kept most of the default settings of the 24 GB LoRA config, except for the step count, which I set to 4,000.
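
If you want to run the LoRA locally instead of through CivitAI's on-site generator, a minimal diffusers sketch like the one below should get you started. The LoRA filename, prompt and sampler settings are placeholders of my own; the trigger word "Flat Art" and strength 1 come from the settings above.

import torch
from diffusers import FluxPipeline

# Load the FLUX.1 [dev] base model (gated on Hugging Face, accept the license first)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

# "scavengers_reign_flux.safetensors" is a placeholder name for this LoRA file
pipe.load_lora_weights("scavengers_reign_flux.safetensors")
pipe.fuse_lora(lora_scale=1.0)  # strength 1, as recommended above

# Put the trigger word "Flat Art" in the prompt
prompt = "Flat Art, two explorers crossing an alien jungle, strange glowing flora"
image = pipe(prompt, num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("scavengers_reign_sample.png")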

Workflow

  1. I used a short Python script to grab 1,000 random screenshots from an MP4 file (see the Code section below)

  2. Then I used czkawka (GitHub) to get rid of any duplicate or near-duplicate images (a Python stand-in sketch is included after the extraction script below)

  3. I made a list of all character appearances and removed images of the characters that appeared most often, to avoid biasing the model (see the tally sketch below)

  4. After that, I checked all the images manually and picked the 209 most aesthetic ones

  5. I used a custom GPT (Scavengers Reign GPT) for captioning (an API-based sketch is included below)

  6. Finally, I trained the model with ostris' ai-toolkit (GitHub).

Code:

import cv2
import random

# Path to the source video file and the folder where screenshots are written
mp4_directory = ''
output_directory = ''
frames_to_extract = 120          # number of random screenshots to save per video
base_name = "Random_screenshot"
list_of_random_frames = []       # frame indices already picked
frame_distance = 100             # minimum distance between two picked frames
first_frame = 0                  # skip anything before this frame (e.g. the intro)

count = 0

vidcap = cv2.VideoCapture(mp4_directory)
totalFrames = int(vidcap.get(cv2.CAP_PROP_FRAME_COUNT))

while count < frames_to_extract:
    count += 1
    count_str = str(count)
    frames_skipped = -1
    # Keep drawing random frame numbers until one is far enough away
    # from every frame already picked
    while True:
        randomFrameNumber = random.randint(0, totalFrames - 1)
        frames_skipped += 1
        if frames_skipped > 0:
            print(f"Frame skipped {frames_skipped}")
        if randomFrameNumber > first_frame and all(
            abs(randomFrameNumber - frame) > frame_distance
            for frame in list_of_random_frames
        ):
            break
    list_of_random_frames.append(randomFrameNumber)
    photo_output = output_directory + base_name + count_str + ".png"
    # Jump to the chosen frame and save it as a PNG
    vidcap.set(cv2.CAP_PROP_POS_FRAMES, randomFrameNumber)
    success, image = vidcap.read()
    if success:
        cv2.imwrite(photo_output, image)
        print(f"Saving image to: {photo_output}")
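
The deduplication in step 2 was done in czkawka's GUI, so there's no script for it. If you'd rather stay in Python, a rough perceptual-hash pass along these lines does a similar job (this uses the Pillow and imagehash packages and is only a stand-in for czkawka, not what it actually runs):

import os
from PIL import Image
import imagehash

screenshot_directory = ''   # folder with the extracted screenshots
seen_hashes = {}
max_hamming_distance = 5    # how similar two images may be before one is dropped

for name in sorted(os.listdir(screenshot_directory)):
    if not name.lower().endswith(".png"):
        continue
    path = os.path.join(screenshot_directory, name)
    h = imagehash.phash(Image.open(path))
    # Compare against every image kept so far
    duplicate_of = next(
        (kept for kept, kept_hash in seen_hashes.items()
         if h - kept_hash <= max_hamming_distance),
        None,
    )
    if duplicate_of is not None:
        print(f"Removing {name} (near-duplicate of {duplicate_of})")
        os.remove(path)
    else:
        seen_hashes[name] = h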
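
The character tally in step 3 was done by hand. A rough way to automate it from the caption files would be something like this, assuming one .txt caption per screenshot and a hand-written list of character names:

import os
from collections import Counter

caption_directory = ''   # folder with one .txt caption per screenshot
# Example character names from the show; adjust to match your captions
characters = ["Azi", "Sam", "Ursula", "Kamen", "Levi"]

appearances = Counter()
for name in os.listdir(caption_directory):
    if not name.endswith(".txt"):
        continue
    with open(os.path.join(caption_directory, name), encoding="utf-8") as f:
        caption = f.read().lower()
    for character in characters:
        if character.lower() in caption:
            appearances[character] += 1

# Characters at the top of this list are over-represented;
# trim some of their images to keep the dataset balanced
for character, n in appearances.most_common():
    print(f"{character}: {n} images")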
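
The captioning in step 5 went through a custom GPT in ChatGPT, so there is no API script either. An equivalent call with the official openai Python package would look roughly like this; the prompt wording is only an example:

import base64
import os
from openai import OpenAI

client = OpenAI()          # expects OPENAI_API_KEY in the environment
screenshot_directory = ''  # folder with the selected screenshots

def caption_image(path):
    with open(path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this screenshot in one detailed caption for LoRA training."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

for name in os.listdir(screenshot_directory):
    if name.endswith(".png"):
        caption = caption_image(os.path.join(screenshot_directory, name))
        # Write the caption next to the image, as most LoRA trainers expect
        with open(os.path.join(screenshot_directory, name[:-4] + ".txt"), "w", encoding="utf-8") as f:
            f.write(caption)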

PS: If you want the dataset, please contact me; I just don't want to get CivitAI into copyright trouble.