
PhotoPedia XL

Verified: SafeTensor
Type: Checkpoint Merge
Published: Dec 11, 2023
Base Model: SDXL 1.0
Hash (AutoV2): 2A52D86F4A
Creator: C3RBERUS


PhotoPedia XL 📷


Press ❤️ to follow, and consider leaving a review of the model. Feedback is appreciated!

Read the info below to get high-quality images (click on "Show more") ⬇

7 / January / 23 - Still working on a dataset to fine-tune the model. Currently sitting on around 700 photos, with a lot more planned.

❗ Update - 11 / December / 23

  • 4.5 is out. Very happy with how it turned out!

  • MBW'd the living hell out of this one. Must have gone back and forth through more than 30 merges to get here.

    • You can right-click the images, open them in a new tab, and zoom in quite a lot to check smaller details.

  • Addressed certain issues such as hands / eyes. Eyes are much better, and hands are better as well, though generations can still produce some nasty monstrosities.

  • Slight overall feel change

    • Edited the model info

      • Merge recipe further down the page

      • Updating generation parameters and suggestions for 4.5

      • Will post links to suggested resources soon + more

    • Will be posting more galleries soon, keep an eye out


📣 GENERATION PARAMETERS & SUGGESTIONS ( As of 4.5 )

ℹ️ 4.5 generations were made with the following settings

H/W: 1024/1024, 1024/576, or 576/1024 - I tend to stick with 1024/1024
Sampler: DPM++ 2S a Karras - the main sampler I used for 4.5
Steps: 40 - can go lower
CFG: 4-9 - I went lower than usual, usually 4-5

Generate an image with Hires enabled

Hires upscale: 1.5 
Hires steps: 15-20
Hires denoising: ~ 0.6
Upscaler: 4xLSDIRplusC

Send the result to Extras and upscale once by 1.5. Send the upscaled image to Extras again and upscale by 1.5 once more (so a 1024px generation ends up around 1536px after Hires and roughly 3456px after both Extras passes). I usually run this in batches.

Upscale 1: 4x Foolhardy Remacri - 1.5 upscale in Extras tab
Upscale 2: 4x NKMD Superscale
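For anyone not running A1111, the sketch below is a rough diffusers approximation of the txt2img + Hires pass described above. It is not the workflow used for the gallery images; the file name, prompts, and scheduler choice are assumptions (diffusers has no exact "DPM++ 2S a Karras", so DPM++ 2S with Karras sigmas stands in, and a plain resize stands in for the 4xLSDIRplusC / Extras upscalers).

```python
# Hedged sketch only - assumes a local SDXL checkpoint file and a recent diffusers release.
import torch
from diffusers import (StableDiffusionXLPipeline, AutoPipelineForImage2Image,
                       DPMSolverSinglestepScheduler)

pipe = StableDiffusionXLPipeline.from_single_file(
    "photopediaXL_45.safetensors", torch_dtype=torch.float16).to("cuda")
# Closest stand-in for "DPM++ 2S a Karras" (no ancestral 2S scheduler in diffusers)
pipe.scheduler = DPMSolverSinglestepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True)

prompt = "photoshoot, photo studio, RAW photo, ..."         # see MAIN PROMPT below
negative = "(worst quality, low quality:1.4), ..."          # see NEGATIVE PROMPT below

# Base generation: 1024x1024, 40 steps, CFG in the suggested 4-5 range
base = pipe(prompt=prompt, negative_prompt=negative,
            width=1024, height=1024,
            num_inference_steps=40, guidance_scale=4.5).images[0]

# "Hires" pass: 1.5x upscale, then img2img at ~0.6 denoising strength.
# With strength=0.6 and 30 steps, roughly 18 steps actually run,
# close to the 15-20 hires steps suggested above.
img2img = AutoPipelineForImage2Image.from_pipe(pipe)
hires = img2img(prompt=prompt, negative_prompt=negative,
                image=base.resize((1536, 1536)),
                strength=0.6, num_inference_steps=30,
                guidance_scale=4.5).images[0]
hires.save("photopedia_45.png")
```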

I have not used ADetailer, negative embeddings, or LoRAs.

ℹ️ MAIN PROMPT

photoshoot, photo studio, RAW photo, editorial photograph, film stock photograph, cinematic, posing, amateur photo, analog, raw, f2, 35mm, an (amateur photo), flash photography, taken on an old camera, polaroid, 8k, highly detailed, (high quality, best quality:1.3), Extremely high-resolution, film grain, Kiki from Netherlands, long (waist-length:1.2) combed auburn hair, ((macro lens)), dark atmosphere, ((focused on eyes)), feminine expressions, photography, dslr, 35mm, Fujifilm Superia Premium 400, Nikon D850 film stock photograph, Kodak Portra 400 f1.6 lens, 8k, UHD

ℹ️ NEGATIVE PROMPT

(worst quality, low quality:1.4), illustration, 3d, 2d, painting, cartoons, sketch, blur, blurry, blurred, bokeh, unclear, grainy, low resolution, downsampling, aliasing, dithering, distorted, jpeg artifacts, compression artifacts, overexposed, high-contrast, bad-contrast, poorly drawn, cropped, out of frame, [deformed, disfigured, suspenders], ((deformed iris, deformed pupils:1.4)), (physically-abnormal:2), (joint-inequalities:2), (limbs-inequalities:2), deformed, disfigured, poorly drawn hands, poorly drawn feet, poorly drawn face, out of frame, mutation, mutated, extra limbs, extra legs, extra arms, dull eyes, mismatched eyes, bad anatomy, signature, watermark, artist name, text, error

Try to run a generation without any negatives first. I am personally too lazy to test each of them and their impact on generations, and the current prompt did just fine x)

Try to run a generation with very simple prompts before you start adding more

ℹ️ I did not use any negative embeddings or LoRAs to change the outputs. Should you use them? As I always say, feel free to experiment; I am pretty sure that some of the negative embeddings will work very well with the model. I generally try not to use too many.

ℹ️ INPAINTING

I almost never bother with it; I am happy with most generations as they are (well, discarding the bad stuff that comes along from time to time). The exception is when I really love an image and want to fix something wrong with it.

ℹ️ ON-SITE GENERATIONS

I have not spent much time testing it out. No suggestions as of now. I run SD locally.

MY A1111

•  version: v1.6.0  •  python: 3.10.6  •  torch: 2.0.1+cu118  •  xformers: 0.0.20  •  gradio: 3.41.2

MY HARDWARE

•  13th Gen Intel(R) Core(TM) i7-13700KF / 24 cores / 5.4 GHz

•  NVIDIA GeForce RTX 4070 / 12 GB

•  RAM 32 GB / 5200 MHz


Further plans

Updating


ℹ️ License & Use

This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.

  • 1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content.

  • 2. The authors claim no rights to the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license.

  • 3. You may re-distribute the weights. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the modified CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully).

You agree not to use the Model or Derivatives of the Model:

  • In any way that violates any applicable national, federal, state, local or international law or regulation

  • For the purpose of exploiting, harming or attempting to exploit or harm minors in any way

  • To generate or disseminate verifiably false information and/or content with the purpose of harming others

  • To generate or disseminate personal identifiable information that can be used to harm an individual

  • To defame, disparage or otherwise harass others

  • For fully automated decision making that adversely impacts an individual’s legal rights or otherwise creates or modifies a binding, enforceable obligation

  • For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics

  • To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm

  • For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories

  • To provide medical advice and medical results interpretation

  • To generate or disseminate information for the purpose to be used for administration of justice, law enforcement, immigration or asylum processes, such as predicting an individual will commit fraud/crime commitment (e.g. by text profiling, drawing causal relationships between assertions made in documents, indiscriminate and arbitrarily-targeted use).

You are solely responsible for any legal liability resulting from unethical use of this model(s)


OLDER INFO BELOW


💭 Although the end goal of the merges is to reach a certain level of realism, I do recommend trying all the versions provided. They have slight differences, and certain versions might be more appealing to your eye. I personally cannot decide which is my favorite so far; they all have their own uniqueness.


📣 GENERATION PARAMETERS & SUGGESTIONS (used in v4 and older)

ℹ️ I ran most of the generations with the following settings

Sampler: I used many - try DPM++ 2S a Karras
Steps: 65 - can go lower / higher, depending on your sampler
CFG: 9 - you might want to go lower on this
Hires upscale: 1.5
Hires steps: 15-20
Hires denoising: ~ 0.5
Upscaler: ESRGAN 4x / 4xUltraSharp

I have not used ADetailer with this merge so far, as I was having some issues with it. I do recommend it, though.

I didn't inpaint generated images.

I didn't use any negative embeddings. I suggest testing them on your own to see which you like the most, as some might change the image more than others.

I used the Detail Tweaker LoRA, usually at around 0.3.

Upscale in A1111 in Extras

OR

Upscale outside A1111 - Using Waifu2x GUI 

ℹ️ Prompt Example

amateur photo of a --subject-- --doing something-- / --wearing something--, analogue, raw, polaroid, looking at camera, (from side:1.2), detailed skin texture, natural skin texture, perfect eyes, perfect iris, high detail eyes, detailed iris, detailed cloth texture, detailed facial features, aesthetic, depth of field, cinematic light,(high quality, best quality:1.3), Extremely high-resolution, Kodak Portra 400 camera f1.6 lens, Fujifilm Superia Premium 400, Nikon D850, <lora:add_detail:0.33>


📌 Merge info / recipe

1️⃣ Type 1 ✔️ First merge

2️⃣ Type 2 ✔️ Fine blend between digital / realistic

  • RealCartonXL + RealVisionXL = M21 ( Sum Twice / Normal / MBW )

  • LEOHello + XXMixXL = M22 ( Sum Twice / Normal / MBW )

  • PhotoPedia XL + M21 + M22 = M23 ( Sum Twice / Normal / MBW )

  • M23 + M21 + M22 = PhotoPedia XL Type 2 ( 3 Sum / Normal / MBW )

3️⃣ Type 3 ✔️ Increased realism / Slightly more NSFW

  • PhotoPedia 1 + XXMixXL + PhotoPedia 2 = PP1 ( 3 Sum / Normal / 0.45A - 0.27B )

  • PhotoPedia 2 + PP1 + RealVisionXL = PP2 ( Sum Twice / Normal / 0.57A - 0.32B )

  • PhotoPedia 2 + LEOHello + RealVisionXL = PP3 ( Sum Twice / Normal / MBW )

  • PhotoPedia 2 + Devil + JuggXL5 = PhotoPedia XL Type 3 ( Add diff / Normal / 1.0 A )
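The mode names in these recipes ("Weight Sum", "Sum Twice", "3 Sum", "Add diff") look like the SuperMerger / A1111 checkpoint-merger modes; in case they are unfamiliar, the sketch below shows roughly what the basic primitives do, tensor by tensor. The file names are placeholders and the semantics are my reading of those modes, not the author's exact tooling; real merge tools also handle dtype casting, missing keys, and metadata.

```python
# Minimal sketch of the merge primitives referenced above (assumed semantics).
from safetensors.torch import load_file, save_file

def weighted_sum(a, b, alpha):
    # "Weight Sum": out = (1 - alpha) * A + alpha * B, key by key
    return {k: (1 - alpha) * a[k] + alpha * b[k] for k in a if k in b}

def sum_twice(a, b, c, alpha, beta):
    # "Sum Twice": merge A with B first, then merge the result with C
    return weighted_sum(weighted_sum(a, b, alpha), c, beta)

def add_difference(a, b, c, multiplier=1.0):
    # "Add diff": out = A + (B - C) * multiplier, key by key
    return {k: a[k] + (b[k] - c[k]) * multiplier
            for k in a if k in b and k in c}

# e.g. the "0.57A - 0.32B" step above, reading A/B as the alpha/beta sliders
# (file names here are placeholders, not the actual checkpoints)
a = load_file("photopedia2.safetensors")
b = load_file("pp1.safetensors")
c = load_file("realvisionxl.safetensors")
save_file(sum_twice(a, b, c, alpha=0.57, beta=0.32), "pp2.safetensors")
```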

4️⃣ Step 4 - ✔️ Manual MBW

  • PhotoPedia 3 + JuggXL5 = PE1 ( Weight Sum / cosineA / MBW )

    • alpha,beta : ( 0.8, 0.25 )

      • weights_alpha : [ 0.7, 0.8, 0.8, 0.4, 0.2, 0.9, 0.2, 0.4, 0.5, 0.5, 0.9, 0.7, 0.5, 0.6, 0.3, 0.9, 0.2, 0.4, 0.9, 0.8, 0.2, 0.3, 0.4, 0.6, 0.7 ]

  • PE1 + RealVisionXL + PE1 = PE2 ( Sum Twice / cosineA / MBW )

    • alpha,beta : ( 0.8, 0.25 )

      • weights_alpha : [ 0.7, 0.8, 0.8, 0.4, 0.2, 0.9, 0.2, 0.4, 0.5, 0.5, 0.9, 0.7, 0.5, 0.6, 0.3, 0.9, 0.2, 0.4, 0.9, 0.8, 0.2, 0.3, 0.4, 0.6, 0.7 ]

      • weights_beta : [ 0.5, 0.4, 0.6, 0.6, 0.4, 0.4, 0.6, 0.5, 0.4, 0.5, 0.5, 0.6, 0.4, 0.6, 0.6, 0.3, 0.7, 0.6, 0.4, 0.6, 0.4, 0.6, 0.4, 0.4, 0.6, 0.7 ]

  • PE2 + JuggXL5 + PE2 = PE3 ( Sum Twice / cosineA / MBW )

    • alpha,beta : ( 0.8, 0.25 )

      • weights_alpha : [ 0.7, 0.8, 0.8, 0.4, 0.2, 0.9, 0.2, 0.4, 0.5, 0.5, 0.9, 0.7, 0.5, 0.6, 0.3, 0.9, 0.2, 0.4, 0.9, 0.8, 0.2, 0.3, 0.4, 0.6, 0.7 ]

      • weights_beta : [ 0.5, 0.4, 0.6, 0.6, 0.4, 0.4, 0.6, 0.5, 0.4, 0.5, 0.5, 0.6, 0.4, 0.6, 0.6, 0.3, 0.7, 0.6, 0.4, 0.6, 0.4, 0.6, 0.4, 0.4, 0.6, 0.7 ]

  • PE3 + RealVisionXL = PhotoPedia XL Type 4 ( Weight Sum / cosineA / MBW )

    • MBW - RING08_5

      • weights : [ 0.2, 0, 0, 0, 0.3, 0.3, 0.4, 1, 1, 1, 0.3, 0.4, 0.4, 1, 0.8, 1, 1, 0.4, 0.2, 0.2 ]

  • Initially, I wanted to test AutoMBW for this step, but I decided to leave it for another time; it takes way too long. Instead, I went with manual block editing.

  • If the weights look silly, they probably are x) As long as it improves the overall quality!
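For anyone wondering what MBW actually does with weight lists like the ones above: instead of one global alpha, every UNet block gets its own alpha. The sketch below is illustrative only; the key-to-block mapping and list length depend on the merging tool, and the layout assumed here (IN00-IN08, M00, OUT00-OUT08 for SDXL) is mine, not necessarily the one used in this recipe.

```python
# Illustrative sketch of block-weighted merging (MBW); the key-to-index
# mapping below is a simplification, not any specific tool's exact layout.
import re
from safetensors.torch import load_file, save_file

def block_alpha(key, weights, base_alpha):
    """Pick the per-block alpha for one tensor key (hypothetical SDXL layout:
    weights[0:9] = IN00-IN08, weights[9] = M00, weights[10:19] = OUT00-OUT08)."""
    if not key.startswith("model.diffusion_model."):
        return base_alpha                        # text encoders, VAE, etc.
    if ".middle_block." in key:
        return weights[9]
    m = re.search(r"\.(input_blocks|output_blocks)\.(\d+)\.", key)
    if m is None:
        return base_alpha                        # time embedding, final out, etc.
    idx = int(m.group(2))
    return weights[idx] if m.group(1) == "input_blocks" else weights[10 + idx]

def mbw_merge(a, b, weights, base_alpha=0.5):
    out = {}
    for k, v in a.items():
        if k in b:
            w = block_alpha(k, weights, base_alpha)
            out[k] = (1 - w) * v + w * b[k]
    return out

# Usage (placeholder files; a 19-entry weight list matching this sketch's layout)
a = load_file("modelA.safetensors")
b = load_file("modelB.safetensors")
weights = [0.2, 0, 0, 0, 0.3, 0.3, 0.4, 1, 1, 1, 0.3, 0.4, 0.4, 1, 0.8, 1, 1, 0.4, 0.2]
save_file(mbw_merge(a, b, weights), "merged_mbw.safetensors")
```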

5️⃣ Step 5 - WIP / Unreleased / Will get to it eventually

  • PhotoPedia XL Type 4 + LCM / Turbo ?


6️⃣ Step 6 - Unreleased / No ETA

Further down the line - possible fine-tuning on own dataset