This article serves triple duty: a shoutout, some personal insights, and a breakdown tutorial for a picture 😉
Fine art bounty
First, a shoutout and a call for my followers to check out this bounty: https://civitai.com/bounties/9257/fine-art-bounty-a-canvas-of-meaning
I know I have quite a few VERY talented creators among my followers and I think this bounty is right up their alley 🥰
My own entry (part of which is the cover image), how I made it, and what it means are the two other goals of this article.
My current status
Between health issues and some difficulty balancing my work/life with my family, I had been using doing whatever I felt like here as a way to numb myself. I need to get back to working on those issues, so I'll be on the website a bit less than ALL THE TIME. Three weeks in Japan away from my computers helped, even if, in the end, I still made and released UnNamedIXL V3 during the trip 😅
I work remote full time and have started feeling like I only exist on the internet. My bosses keep asking for more and more "stupid" stuff that I (as an older guy than most people in my company) feel is counter-productive. This has led me to start lashing out too much, overthinking, and breaking apart...
I needed to get this out of my system and thought: why not as an entry for the aforementioned bounty?
Bounty entry: how and why
I have a ready-made prompt for a character that kinda looks like me IRL (he's just a bit more muscular, ha ha).
So, I used it to generate a first ControlNet reference picture:
1boy,
(long hair, ponytail, black hair, grey beard, facial hair, rectangular glasses, black hair, black eyes, 30 years old, white shirt, grey waistcoat, purple tie), grey pants,
upper body, feet out of frame, three quarter view, looking at viewer,
simple background, grey background
Negative prompt: IllusN3g
Steps: 60, Sampler: DPM++ 2M SDE Heun, Schedule type: SGM Uniform, CFG scale: 5.5, Seed: 100666001, Size: 768x1344, Model hash: 6c2c4a9bba, Model: AnBan_ShinV1, Clip skip: 2, RNG: NV, ADetailer model: yolov11m-face.pt, ADetailer confidence: 0.6, ADetailer method to decide top k masks: Area, ADetailer mask only top k: 1, ADetailer dilate erode: 4, ADetailer mask blur: 4, ADetailer denoising strength: 0.2, ADetailer inpaint only masked: True, ADetailer inpaint padding: 32, ADetailer use separate steps: True, ADetailer steps: 60, ADetailer use separate VAE: True, ADetailer VAE: None, ADetailer version: 25.3.0, TI: "IllusN3g, IllusN3g", Discard penultimate sigma: True, Version: f2.0.1v1.10.1-previous-665-gae278f79, Diffusion in Low Bits: bnb-nf4 (fp16 LoRA), Module 1: sdxl_vae_fp16_fix, Hashes: {"embed:IllusN3g": "dbecff3f60", "model": "6c2c4a9bba"}, Source Identifier: Stable Diffusion web UI
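Side note: those long settings lines follow the A1111 "infotext" convention (comma-separated key: value pairs, with quotes around values that themselves contain commas). If you ever want to pull specific settings out of one programmatically, here is a rough sketch; the regex is my own approximation of the format, not the web UI's actual parser:

```python
import re

# Rough parser for an A1111-style "infotext" settings line (the long
# "Steps: 60, Sampler: ..." strings in this article). It handles plain
# values, quoted values that contain commas, and simple {...} hash blocks.
PARAM_RE = re.compile(
    r'\s*([\w ./+-]+):\s*'         # key, e.g. "Model hash" or "ADetailer steps"
    r'("(?:\\.|[^"\\])*"'          # quoted value (commas/colons allowed inside)
    r'|\{[^}]*\}'                  # brace-delimited value, e.g. Hashes: {...}
    r'|[^,]*)'                     # plain value, up to the next comma
    r'(?:,|$)'
)

def parse_infotext(line: str) -> dict:
    """Split one settings line into a {key: value} dict of strings."""
    return {key.strip(): value.strip().strip('"')
            for key, value in PARAM_RE.findall(line)}

settings = parse_infotext(
    'Steps: 60, Sampler: DPM++ 2M SDE Heun, CFG scale: 5.5, '
    'TI: "IllusN3g, IllusN3g", Seed: 100666001'
)
print(settings["Sampler"])  # DPM++ 2M SDE Heun
print(settings["TI"])       # IllusN3g, IllusN3g
```

This covers the fields shown in this article; deeply nested values (like a quoted ControlNet block) come back as one raw string you can parse again with the same function.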
Using this picture, I wanted to work in two concepts:
My "virtual life"
My sense of overwhelm
To do so, I used two separate LoRAs, with this first picture as the ControlNet input via the classic CN-anytest_v4-marged.safetensors
<lora:Low_poly_IL:1> low poly art, geometric shapes, low poly face, low poly hair, geometric pattern, subtle shading, flat color, detailed geometric polygonal texture, detailed polygonal patterns on skin,
1boy,
(long hair, ponytail, black hair, grey beard, facial hair, rectangular glasses, black hair, black eyes, 30 years old, white shirt, grey waistcoat, purple tie), grey pants,
upper body, feet out of frame, three quarter view, looking at viewer,
simple background, grey background
Negative prompt: IllusN3g
Steps: 60, Sampler: DPM++ 2M SDE Heun, Schedule type: SGM Uniform, CFG scale: 5.5, Seed: 100666001, Size: 768x1344, Model hash: 6c2c4a9bba, Model: AnBan_ShinV1, Clip skip: 2, RNG: NV, ControlNet 0: "Module: None, Model: CN-anytest_v4-marged [4bb64990], Weight: 0.5, Resize Mode: Just Resize, Processor Res: 512, Threshold A: 0.5, Threshold B: 0.5, Guidance Start: 0.0, Guidance End: 1.0, Pixel Perfect: False, Control Mode: Balanced, Hr Option: Both", ADetailer model: yolov11m-face.pt, ADetailer confidence: 0.6, ADetailer method to decide top k masks: Area, ADetailer mask only top k: 1, ADetailer dilate erode: 4, ADetailer mask blur: 4, ADetailer denoising strength: 0.2, ADetailer inpaint only masked: True, ADetailer inpaint padding: 32, ADetailer use separate steps: True, ADetailer steps: 60, ADetailer use separate VAE: True, ADetailer VAE: None, ADetailer version: 25.3.0, Lora hashes: "Low_poly_IL: 675516355794", TI: "IllusN3g, IllusN3g", Discard penultimate sigma: True, Version: f2.0.1v1.10.1-previous-665-gae278f79, Diffusion in Low Bits: bnb-nf4 (fp16 LoRA), Module 1: sdxl_vae_fp16_fix, Hashes: {"embed:IllusN3g": "dbecff3f60", "model": "6c2c4a9bba"}, Source Identifier: Stable Diffusion web UI
psyhe1d, surreal vector, fluid abstract facial distortion, cosmic dreamcore, ethereal surrealism, smoke, oval, <lora:psyhe1d:1>
1boy,
(white shirt, grey waistcoat, purple tie), grey pants,
upper body, feet out of frame, three quarter view
simple background, grey background
Negative prompt: IllusN3g
Steps: 60, Sampler: DPM++ 2M SDE Heun, Schedule type: SGM Uniform, CFG scale: 5.5, Seed: 100666001, Size: 768x1344, Model hash: 6c2c4a9bba, Model: AnBan_ShinV1, Clip skip: 2, RNG: NV, ControlNet 0: "Module: None, Model: CN-anytest_v4-marged [4bb64990], Weight: 0.2, Resize Mode: Just Resize, Processor Res: 512, Threshold A: 0.5, Threshold B: 0.5, Guidance Start: 0.0, Guidance End: 1.0, Pixel Perfect: False, Control Mode: Balanced, Hr Option: Both", ADetailer model: yolov11m-face.pt, ADetailer confidence: 0.6, ADetailer method to decide top k masks: Area, ADetailer mask only top k: 1, ADetailer dilate erode: 4, ADetailer mask blur: 4, ADetailer denoising strength: 0.2, ADetailer inpaint only masked: True, ADetailer inpaint padding: 32, ADetailer use separate steps: True, ADetailer steps: 60, ADetailer use separate VAE: True, ADetailer VAE: None, ADetailer version: 25.3.0, Lora hashes: "psyhe1d: 9f4fcb27f7dc", TI: "IllusN3g, IllusN3g", Discard penultimate sigma: True, Version: f2.0.1v1.10.1-previous-665-gae278f79, Diffusion in Low Bits: bnb-nf4 (fp16 LoRA), Module 1: sdxl_vae_fp16_fix, Hashes: {"embed:IllusN3g": "dbecff3f60", "model": "6c2c4a9bba"}, Source Identifier: Stable Diffusion web UI
Time to hit GIMP. Using those images, a lot of eraser work, and some transform tools, I made this second ControlNet picture:


I kept a bit of the effect on my face (kinda like Ziggy Stardust), not yet knowing what to do with it.
I fed this image into ChatGPT, asking for an SDXL prompt describing the picture using Danbooru tags in around 300 tokens, and got this:
1boy, solo, upper body, facing viewer, serious expression, confident, glasses, rectangular glasses, black hair, long hair, hair over shoulder, ponytail, loose strands, grey streaks in hair, beard, stubble, muscular build, rolled-up sleeves, white dress shirt, grey waistcoat, buttoned vest, slim fit vest, grey trousers, hands in pockets, purple necktie, tie clip, folded pocket square, formal attire, businesswear, smart casual, realistic shading, semi-realistic, detailed face, sharp jawline, intense gaze, furrowed brows, slightly tilted head, light skin, mature male, dignified, sophisticated, professional, standing pose, upper arm muscles visible through shirt, form-fitting clothes, layered clothing, purple and white color palette accents, galaxy-like energy, magical aura, swirling smoke, smoke trail, purple glow, black and purple gradient energy, cosmic effect, ethereal effect, hair blending with magical smoke, energy emerging from head, fantasy element, surreal, abstract background, textured background, monochrome background, dynamic lighting, dramatic shadows, high contrast, rim lighting on hair, realistic fabric folds, subtle wrinkles on shirt, soft light on face, artistic rendering, polygonal shading effect, stylized art, crisp lineart, high detail, ultra-detailed, cinematic composition, mid-shot, portrait orientation, centered composition, dynamic focus
Using the picture as ControlNet input and this prompt, I did a new txt2img run and got this new picture:

This was not exactly what I was looking for, but it still gave interesting effects, which I added (again using GIMP) to my ControlNet picture, which now looked like this:

BTW, the weird background came from a test picture I did:
futscribe, ftsc, futuristic city skyline, flat color, limited palette, colorful <lora:futscribe:0.5> <lora:Colored_Lineart_on_Black_-_Illustrious:0.7>
Negative prompt: IllusN3g
Steps: 60, Sampler: Euler a, Schedule type: Beta, CFG scale: 3, Seed: 100666001, Size: 1344x768, Model hash: 6c2c4a9bba, Model: AnBan_ShinV1, Clip skip: 2, RNG: NV, ADetailer model: yolov11m-face.pt, ADetailer confidence: 0.6, ADetailer method to decide top k masks: Area, ADetailer mask only top k: 1, ADetailer dilate erode: 4, ADetailer mask blur: 4, ADetailer denoising strength: 0.2, ADetailer inpaint only masked: True, ADetailer inpaint padding: 32, ADetailer use separate steps: True, ADetailer steps: 60, ADetailer use separate VAE: True, ADetailer VAE: None, ADetailer version: 25.3.0, Detail Daemon: "mode:uncond,amount:0.2,st:0.2,ed:0.8,bias:0.5,exp:1,st_offset:0,ed_offset:0,fade:0,smooth:1", Lora hashes: "futscribe: e8f74a89305d, Colored_Lineart_on_Black_-_Illustrious: 8ebc6b80690a", TI: "IllusN3g, IllusN3g", freeu_enabled: True, freeu_b1: 1.01, freeu_b2: 1.02, freeu_s1: 0.99, freeu_s2: 0.95, freeu_start: 0, freeu_end: 1, Discard penultimate sigma: True, Beta schedule alpha: 0.6, Beta schedule beta: 0.6, Version: f2.0.1v1.10.1-previous-665-gae278f79, Diffusion in Low Bits: bnb-nf4 (fp16 LoRA), Module 1: sdxl_vae_fp16_fix, Hashes: {"embed:IllusN3g": "dbecff3f60", "lora:Colored_Lineart_on_Black_-_Illustrious": "f1a3c21e02", "model": "6c2c4a9bba"}, Source Identifier: Stable Diffusion web UI
It was time to hit img2img. By altering the initial prompt (adding the LoRA activation tokens) and inpainting, I was able to get the effects smoothed out. In the meantime, I decided what to do about "my" face and went for a kintsugi effect thanks to another LoRA:

After a while and some more GIMP, I got this picture:

I did a new round of ChatGPT for a final prompt, ran it one last time through img2img (with the LoRAs added at weight 0 to get them into the metadata), and here is the final result!

All pictures and associated models will be uploaded and linked here ASAP, and I'll submit my entry with this article as the "how I made it".
Thanks for reading! 🥰
Update: complete list of models used
Checkpoint:
AnBan Shin V1: https://civitai.com/models/1058624?modelVersionId=1844765
LoRAs:
Polygonized: https://civitai.com/models/1337621
Kintsugi: https://civitai.com/models/1306929
Colored Lineart on Black: https://civitai.com/models/1202481
Futscribe: https://civitai.com/models/294229/futscribe
ControlNet:
CN-anytest_v4-marged
Bonus: Why purple?
I just love the color 💜 Both my "at home" and "travel" fountain pens are filled with purple ink. I own two purple ties, three purple bags, and even my shoelaces are purple. Purple is an ambiguous color, neither the red of passion nor the blue of peace. Here, having purple flow out of the character symbolizes both aspects leaking out (and starting to go missing from my own sense of self).

