
Conceptual Photography

Updated: Jul 17, 2025

Type: LoRA (style)

Verified: SafeTensor


Published: Jul 11, 2025

Base Model: Flux.1 D

Training

Steps: 5,330
Epochs: 103

Usage Tips

Clip Skip: 1
Strength: 0.9

Trigger Words: ohwx

Hash (AutoV2): 84EA34FBD0

The FLUX.1 [dev] Model is licensed by Black Forest Labs, Inc. under the FLUX.1 [dev] Non-Commercial License. Copyright Black Forest Labs, Inc.

IN NO EVENT SHALL BLACK FOREST LABS, INC. BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH USE OF THIS MODEL.

NOTE:

All pics were tested and generated on Forge UI; not sure how well this will work in ComfyUI or the on-site generator here.

TRAINED ON REAL PHOTOGRAPHY! EXAMPLES ARE PRESENTED AS NON-CHERRYPICKED GRIDS TO SHOW SEED VARIETY AND GLITCH ODDS :)

These are big loras! I found 64-96 dim to be a good compromise between overriding Flux's default style and keeping a sane lora size. Trained and tested on SFW only. These are style loras trained on a wide variety of images and are not meant to reproduce any likeness.

NEW!

V6 - GHOST

Though it is a little overtrained and suffers from sameface, this is the one out of all the trainings that feels 'truest' to my personal photography style so far: more desaturated colors, gritty texture, bounce-flash, tighter shots, lots of variation in angles (relative to default Flux), and subtle light trails. Despite being overtrained, it still has prompt adherence, SEED variety and creativity.

I personally use Heun/Beta for these since it gives a more textured/gritty feel to the images, which I think works well for the style--but Euler/Simple will also work (Heun/Beta will add to generation times considerably). Best settings: 5 CFG, 896x1344, Heun/Beta.

Trained at 96 dim and 128 alpha; this is 2 different epochs from the same training merged in kohya. Fingers, limbs and faces may glitch out, especially at high strengths. I have presented most images as-is; if I have fixed them with inpainting it will be noted! For inpainting I use the VERY EXCELLENT Juggernaut XL inpaint model. This, like all my other loras, is tested on vanilla Flux dev fp8--I haven't found any Flux finetunes these work on, so just use vanilla.


V2.1 (Angular v2)

I've been kind of obsessed with making a lora *using only my own pics* that does ANGLES well, since Flux tends to make everything so straight-on. This is my best one so far, better than the first Angular. It's unique and has good seed variety. Since this one hit a little early in training, I run it at 1.2 strength--this gives a tendency to glitch out in 1 of 4 seeds, but the others turn out great. You always pay a little price for fighting against Flux biases. It tends to do HIGH ANGLES better than low angles, but it will do low angles as well.

V5.1 (junktopia v2)

All tested on Forge at 5 CFG or thereabouts and Euler/Simple

After going through literally HUNDREDS of epochs for another lora, testing them, and getting an idea of the training 'sweet spot', I decided to come back and look at this one, and have determined that the first Junktopia is ... a bit overtrained!

Like I said--I'm learning.

Junktopia v2 is from the same training as Junktopia but sits right in that spot between learning and overtraining, which I feel I've gotten better at recognizing since the first version.

All the same settings and trigger words apply. It's trained at a high dim, which is the only way I've found to wash away the 'Flux'-iness (the plastic skin, extreme bokeh, etc.), so the file size is pretty large.

I am keeping Junktopia v1 up for posterity--I feel it has more 'knowledge' and detail than v2; it just has a tendency toward 'sameness' and lacks the SEED VARIETY and creativity of v2.

V5 (junktopia)

-Strength: 0.8-1.0; strength 1 will be more creative, but your odds of glitches increase

-Distilled CFG: 0.5-10, regular CFG: 1-5. Use Euler/Simple for speed, or DPM2/Beta and Heun/Beta for quality. YOU CAN USE REAL CFG WITH THIS LORA. I keep it around 1.5-3 and distilled from 1-10, depending on how much contrast you want. Be warned though, using real CFG will slow gens down significantly.

TRIGGER WORD: ohwx

-Trained on the same dataset as v4, not necessarily 'better' than v4, just different

-Better at darker scenes than v4, and has more color tones in the lighting

-Great at complex prompts (though still may disobey certain parts!)

-Darker blacks than v4

-Almost no bokeh (since none of my photos have this), and no plastic skin

-Not as good at light trails as v4, use special prompt list below to get it to do light trails

-Faces, fingers and text will be slightly glitchier on this one than v4, so you'll need to inpaint/use adetailer, or simply lower the strength.

-Similar faces due to small dataset size, faces are a generalized mix of about 20 different people, male and female. This lora is not intended to recreate a likeness. I have done extensive testing and none of the gens resemble anyone in particular in my dataset.

-YOU CAN USE VERY HIGH DISTILLED CFG for this one! Many of the examples go up to 10.

-This is the best epoch of this training, but since this training had so many good epochs, I've posted more epochs from it here: (google doc link to other epochs). If I post any images here generated with these, I will note it in the prompt!

~LIGHT TRAIL PROMPTING!~

This training did not pick up the light trails as strongly as v4, but they are there! You can push it by putting any of these (or all!) at the end of the prompt:

colorful ohwx, light trails in front of subject, long-exposure streaks, neon streaks, luminous ribbons, glowing filaments, kinetic light painting, electric light swirls, motion-blur lines, streaked light beams, comet-like tracers, vivid light smears, radiant arcs, laser-like ribbons, incandescent squiggles, flowing light paths, dynamic light ribbons, photographic light streaks, illuminated bands, bright trail lines, vibrant light scribbles, light streaks, motion blurred lights, neon pink light trails, neon cyan streaks, neon magenta ribbons, neon lime filaments, neon aqua swirls, neon orange light beams, neon violet smears, neon turquoise arcs, neon yellow laser ribbons, neon blue light paths, neon green dynamic ribbons, neon fuchsia streaks, neon amber tracers, neon electric-blue scribbles, ohwx colorful, high angle, overhead view, neon blue and pink light trails, swirly pastel striped beams of light, off center framing, iridescent aura, multicolor glow, prismatic halo, rainbow radiance, chromatic burst, celestial shimmer, radiant spectrum, glowing corona, ethereal light ring, vibrant luminescence, technicolor shine, pastel glow swirl, neon halo burst, divine chroma aura, luminous gradient haze, spectral illumination, opalescent shine, kaleidoscopic light rays, vivid light bloom, saturated light flare, glowing neon rings, pulsing auroras, divine flare bursts, light wave halos, synthetic aurora arcs, radiant energy loops, soft-spectrum beams, dreamlike light tendrils, ultraviolet shimmer bands, ambient light whorls, optical light bloom, angelic glow effects, spectral bands, surreal flash aura, iridescent energy bursts, light-based motion halos, divine interference patterns, digital halo crown, pastel spectrum arcs, holy-glow backlight, luminous ripple waves, celestial light fans, glowing spray arcs, neon-lit energy blooms, kinetic aura spill, synthetic chroma flare, psychedelic radiance rings, rainbow smear halos, pop-religious light effects, aura-glow perimeter, 
fluorescent blast rings, radiant anointing rings, heaven-light pulse bloom, retro worship glow, neon-blessed shimmer trails, lightwave burst curtains, refracted spectrum drapes, theatrical lens flares, LED glow turbulence, divine studio backlight, luminescent halo scatter
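If you'd rather script the "put a few of these at the end of the prompt" step, here is a minimal Python sketch. The helper name `add_light_trails` is hypothetical, and the phrase list below is truncated; paste in as many entries from the full list above as you like.

```python
import random

# A handful of phrases from the light-trail list above; extend as desired.
LIGHT_TRAIL_PHRASES = [
    "light trails in front of subject",
    "long-exposure streaks",
    "neon streaks",
    "kinetic light painting",
    "motion-blur lines",
]

def add_light_trails(prompt, n=3, seed=None):
    """Append n randomly chosen light-trail phrases to the end of the prompt."""
    rng = random.Random(seed)  # seed for reproducible picks
    picks = rng.sample(LIGHT_TRAIL_PHRASES, k=min(n, len(LIGHT_TRAIL_PHRASES)))
    return prompt + ", " + ", ".join(picks)

print(add_light_trails("ohwx, a woman on a rooftop at dusk", n=2, seed=0))
```

Passing a fixed `seed` makes the picks repeatable across runs, which is handy when comparing seeds for the image itself.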

Training settings:

{
  "engine": "kohya",
  "unetLR": 0.00032,
  "clipSkip": 1,
  "loraType": "lora",
  "keepTokens": 0,
  "networkDim": 96,
  "numRepeats": 1,
  "resolution": 512,
  "lrScheduler": "cosine",
  "minSnrGamma": 2,
  "noiseOffset": 0.08,
  "targetSteps": 6500,
  "enableBucket": true,
  "networkAlpha": 128,
  "optimizerType": "Prodigy",
  "textEncoderLR": 0,
  "maxTrainEpochs": 260,
  "shuffleCaption": false,
  "trainBatchSize": 4,
  "flipAugmentation": true,
  "lrSchedulerNumCycles": 2
}
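The `targetSteps` and `maxTrainEpochs` values above line up with kohya's usual step arithmetic. A quick sketch, assuming the ~100-image dataset mentioned for v4/v3 below (`kohya_total_steps` is a hypothetical helper, not part of any tool):

```python
import math

def kohya_total_steps(num_images, num_repeats, batch_size, epochs):
    """Steps per epoch = ceil(images * repeats / batch); total = that * epochs."""
    steps_per_epoch = math.ceil(num_images * num_repeats / batch_size)
    return steps_per_epoch * epochs

# 100 images, 1 repeat, batch 4, 260 epochs -> 25 steps/epoch, 6500 total,
# matching "targetSteps": 6500 in the settings above.
print(kohya_total_steps(100, 1, 4, 260))  # -> 6500
```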

V4 (monsterbeam)

-TRIGGER WORD: ohwx (I put this in all of them, for specific trigger words see info panel)

-Suggested CFG: 3.5-10 (higher has more contrast; the higher-dim trainings I've been doing seem to be able to handle high CFG without getting that weird deepfried Flux look)

-Suggested scheduler/sampler: DPM2 + Beta or Heun + Beta (slow but way higher quality), or just regular Euler + Simple

-Use adetailer/inpainting on faces (they will be muddy in wide shots)

-Tested with Forge + Flux Dev fp8

This one, along with v5, is my favorite so far. After spending way too much on training and testing, I finally have one I feel captures the look of my early photography as much as Flux possibly can (or as much as my skill level allows right now). Epoch 183 of this run seems to retain generalization while having just enough memorization; it seems you can only push it so far before it breaks down and becomes over-memorized and unpromptable. For V4 I used PRODIGY instead of AdamW8bit--Prodigy seems more capable at conceptualizing my dataset, which has a LOT of variety and wildly different styles. Every image trained had detailed captions, some written with the help of an LLM, but all manually reviewed for errors/omissions.

I tried to solve the 'sameface' issue of v3 by increasing the batch size to 4 and lowering the resolution to 512px so it doesn't fixate on the faces. This seems to have somewhat worked, at the expense of muddying the faces a little in wide shots (you can use adetailer/inpainting to fix this). The issue is still present, however; I don't think there's any getting around it with my 100-pic dataset. Not trained or tested on NSFW, so ymmv there.

Things this model does well:

-Complex prompts

-Light beams

-Scuffing up surfaces/surface imperfections/dirty windows

-High angle shots

-Wide angle/full body shots

-Low angle shots

-Multiple subjects

-High cfg

-Glowing

-See the example images for prompts that work well. The prompts will have <lora:conceptual_photography_v4_12-000183:1> as the lora; this is the epoch and version I used, so just replace it with the one here. In most cases I used an LLM to improve my prompts in the examples. Flux seems to respond well to this, and I would highly suggest everyone do the same if it's not doing what you want :). This one does well with lengthy prompts; in some cases the lora did BETTER than default Flux at multiple subjects (see the pink trailer/robot example).
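Swapping the example images' lora tag for your local filename and strength can be done with a one-line regex. A minimal sketch; `swap_lora_tag` is a hypothetical helper, and it assumes the standard A1111/Forge `<lora:name:weight>` tag syntax:

```python
import re

def swap_lora_tag(prompt, new_name, strength=1.0):
    """Replace any <lora:name:weight> tag in an A1111/Forge-style prompt
    with <lora:new_name:strength>."""
    return re.sub(r"<lora:[^:>]+:[^>]+>",
                  f"<lora:{new_name}:{strength}>", prompt)

p = "photo of ohwx <lora:conceptual_photography_v4_12-000183:1>"
print(swap_lora_tag(p, "my_local_lora", 0.9))
# -> photo of ohwx <lora:my_local_lora:0.9>
```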

If you are using Forge (which is where all the example images are from), just drag and drop any of the example images into the PNG Info tab and hit 'Send to txt2img' to send the settings over.

Weaknesses:

-Big lora size!

-Similar faces

-Dark scenes! For the life of me I can't get it to do a dark scene, like a scene at night or a dark room where there is only one small light source.

-Mutated limbs and hands may appear every once in a while; in my testing, running at strength 1 gave a mutation in about 1 out of 6 seeds. If this is a problem, just turn down the strength--though the trade-off is that it will be less creative in the other seeds. You can also switch to Heun/Beta or DPM2/Beta--slower, but it seems to have lower odds of glitching.

-Feel free to leave feedback on any other weaknesses this may have. I'm not sure I will be training on this particular dataset again, as I feel like I've tried every single setting imaginable and spent way too much time and money on it.

I may upload another version or epoch from this training or another if something really grabs me! Stay tuned!

Training settings:

{
  "unetLR": 0.00028,
  "clipSkip": 1,
  "loraType": "lora",
  "keepTokens": 0,
  "networkDim": 96,
  "numRepeats": 1,
  "resolution": 512,
  "lrScheduler": "cosine",
  "minSnrGamma": 2,
  "noiseOffset": 0.08,
  "targetSteps": 6500,
  "enableBucket": true,
  "networkAlpha": 96,
  "optimizerType": "Prodigy",
  "textEncoderLR": 0,
  "maxTrainEpochs": 260,
  "shuffleCaption": false,
  "trainBatchSize": 4,
  "flipAugmentation": true,
  "lrSchedulerNumCycles": 2
}

V3 (neon darkness)-

This one goes a lot harder than V2. It is good at light trails, negative-space framing and creating unique compositions; each seed will be very different from the last, compared to default Flux. It also has a mid-'00s-to-'10s flash photography feel.

Tested on Flux dev fp8 in Forge UI, Nunchaku, and ComfyUI. If you don't see "NODES" on some of the example images here, it's because I used a workflow that Civitai won't read for some reason--but the metadata is still in the images; just save them and drag/drop them into Comfy.

-Suggested distilled CFG: 4.5-6 (you can use higher regular CFG and distilled CFG without deepfrying! I'm not sure why this works). Suggested regular CFG: 1-2.

Scheduler/Sampler: Euler/Simple or Heun/Beta for Forge/Comfy

-Suggested strength: 0.75-1 for Forge and regular Comfy workflows

NUNCHAKU USERS: Best lora strength is 0.5-0.7; any higher and it starts breaking down. You can increase it up to 0.9-1 if you increase the Flux Guidance up to 12-15, but your odds of glitches increase and it will lean harder into the trained images. The best scheduler/sampler is DEIS/DDIM Uniform. You also still need lots of steps for it to come through at all, around 45-50. I know that ruins the spirit of Nunchaku, but it is what it is. You'll still get a 50-step image in 5x less time. lol. From my tests, the Flux Guidance node on Nunchaku can go all the way up to 13 or so without deepfrying/breaking down.
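The Nunchaku advice above (strength 0.5-0.7 at ordinary guidance, or up to 0.9-1 if Flux Guidance is pushed to ~12-15) can be written down as a rule of thumb. This is only a sketch of the recommendation in this note, not anything from Nunchaku itself; the function name and the 3.5 "ordinary guidance" default are assumptions:

```python
def nunchaku_guidance_for_strength(strength):
    """Rule of thumb from the notes above: strengths up to ~0.7 work at the
    usual Flux guidance; higher strengths need guidance pushed to ~13."""
    if strength <= 0.7:
        return 3.5   # assumed ordinary Flux dev guidance
    if strength <= 1.0:
        return 13.0  # per the note: 12-15 range, ~13 avoids deepfrying
    raise ValueError("strengths above 1.0 start breaking down in Nunchaku")

print(nunchaku_guidance_for_strength(0.6))   # -> 3.5
print(nunchaku_guidance_for_strength(0.95))  # -> 13.0
```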

This lora is trained on REAL photos that I took, with tilted views, negative space, strange/surreal lighting and composition, and multiple unique, real-life subjects. I used a dataset of 100 of my favorites from my photography portfolio, all manually captioned with detailed captions (see some of the terms I used below). The people generated with this lora will have that mid-aughts to early-'10s style hair/clothing and feel--it's a mix of about 15 different people, mostly female, so it will have bias there, but it is able to generate males of course. Also, there's no NSFW in the dataset and I haven't tested it in that regard, so ymmv. There are a few photos with multiple subjects in them, so an extra person might pop up in generations when you didn't ask. Not sure how it plays with other loras; you'll have to experiment. It's pretty much the same dataset as v2, but I added a bunch of tilted/dutch-angle/overhead-view shots and more with interesting compositions. I trained with a HUGE dim and alpha (128/128), thinking I could scale it down with kohya with imperceptible loss--but alas, the scaled-down version loses much of the unique angles/compositions that were the whole point of this lora, so making it worse just defeats the point. So I'm posting the monster lora. I am sorry for the size.

Another thing: I have almost NO bokeh in any of the trained photos, and most of them are full-body shots--so this can be used at lower strengths to reduce unwanted extreme bokeh effects as well as to get more zoomed-out/full-body shots.

At higher strengths it is more creative, but it may create extra limbs/fingers that need fixing with inpainting. If you get deformities, lower the strength, and/or increase the steps, and/or increase the CFG a little (it seems to work well up to 5-6 without getting that oversaturated Flux look; you can even increase regular CFG, which gives you access to the negative prompt). If all else fails you can always inpaint :)

Terms I put in the captions; as with all things Flux, these MAY work SOME percentage of the time:

  • ohwx (in all of them)

  • High angle / Dynamic angle

  • Dynamic perspective / Tilted perspective

  • Wide angle

  • Overhead view

  • Crooked framing

  • Top View / Looking down

  • Negative space in the top/bottom/left/right of the frame 

  • Conceptual photo/picture

  • Y337 (I tagged all the very YELLOW pictures with this rare token; seems to have a slight influence)

  • Tilted view

  • Negative space (on the left/right)

  • Zoomed out

  • Exaggerated

  • Off-center/Asymmetrical

  • Satellite view

  • Dutch angle

  • Low angle

  • Worms eye view

  • Dramatic low angle

  • Top-down

  • Horizontal striped beams of light 

  • Neon light trails

  • Decaying film stock / Aged film stock

  • Extreme closeup

  • Mirrored composition

  • Surreal

  • Colorful ohwx (the very colorful ones)

This is a work in progress, download at your own risk!

TRAINING INFO:

100 images, detailed captions

Epoch: 141 out of 200 total (141 is clearly the best of the bunch, perfect mix of generalization vs learned traits)

Dim/Alpha: 128/128

Steps: 7,340

LR: 0.00015

Optimizer/LR Scheduler: AdamW8bit/cosine

Num Repeats: 1

Noise Offset: 0.05

MinSnrGamma: 0 (was told this doesn't matter for Flux)

Enable Bucket: true (almost none were cropped to square)

Flip Augmentation: true

Train Batch Size: 2

Resolution: 1024
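For consistency with the v4/v5 sections, the V3 settings listed above can be collected into the same JSON shape. This is a reconstruction from the listed values only; fields not mentioned for V3 (clipSkip, targetSteps, lrSchedulerNumCycles, etc.) are deliberately left out.

{
  "unetLR": 0.00015,
  "networkDim": 128,
  "networkAlpha": 128,
  "numRepeats": 1,
  "resolution": 1024,
  "lrScheduler": "cosine",
  "minSnrGamma": 0,
  "noiseOffset": 0.05,
  "enableBucket": true,
  "optimizerType": "AdamW8bit",
  "trainBatchSize": 2,
  "flipAugmentation": true
}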

V2 (angular)

This is the 'lightest' of the three and has fewer images trained (80). V2 is trained on 80 of my favorites from my old photography portfolio--they have a distinct 2000s feel/style; think Myspace-era emo kid meets David Lynch. Lots of surreal, conceptual and straight-up weird photography.

Trained on Civitai's trainer on Flux Dev. Flux can only go so far mimicking the style in my trainings so far, but I will be doing another training run with a few more images to see if it can pick up the style better, so stay tuned!

I know the file sizes are large... I meant this for personal use, and large-dimension loras seem to work best for me when training this style--hence the large size. If you don't like big files then steer clear! I did not train or test this on NSFW, so ymmv there. Most of the photos in the trained dataset were of females, since that was most of my portfolio back then.

I prefer to use Heun/Beta with high steps, 35 or more, but you can definitely use the typical 20 steps with Euler/Beta; as with all generations, it just won't be as good.

See comparisons in the gallery!