
Recreating an Artist Style - What goes wrong - it'll really go wrong.



This is just a quick note about what's been going on while trying to make VERY clean LoRAs for artist styles and finding out they're not working. Do I have a fix? For some, no, and I'll explain why in the next parts.

Comics VS Manga

We grew up on comics; we didn't get into anime until closer to our teenage years, and even then it was whatever was available at the comic shops. Anime was dubbed, put on local TV, and strangely chopped, censored and edited for American audiences.

Manga, anime, and even their video game counterparts are FAR easier to train on, depending on the style.

Clearly, style LoRAs are about a concise STYLE when you're trying to capture an artist or a series.

Manga especially, in the likes of the Inomata style, Yuu Watase, or even someone maybe doing a Naoko Takeuchi one: those are FAR easier to train, even on SD 1.5 or AnyLoRA, possibly because of biases in the way Stable Diffusion itself was trained.

The plethora of Asian media on the internet over the last ten years makes trying to make a comic book LoRA that much more difficult sometimes.

How do you define success in training a style?

We don't have a defined answer on this.

What we do know is that if it looks good, looks CLOSE ENOUGH to the style, and has traits that don't fall apart when you're testing it, then it's good.

The JoeMad style LoRA falls off a little depending on the model, but that could just be the dataset. Overall it was A-OK in testing and has pleased people.

Our Comics V1 and V2 ones? Those weren't direct "STYLES"; they were mashups of Midjourney and other comics data.

They were nods to things we enjoy, and never meant to recreate a 1:1 style.

Data and Artists

Artists that MAY go wrong depending on the data

We're afraid to start the Byrne style LoRA for the same reason we tried Salvador Larroca's style twice and it failed.

If an artist's style is NOT STRONG enough, or it's hard to find enough GOOD upscaled data (and I'm not even talking about editing here), then it's best you just make a mashup style LoRA.

Larroca's LoRA is still in our HF repo, but no matter what we did, no matter what MODEL or strength, even on SeaArt and A1111, it just produced off-putting faces, poor anatomy and, worse: artifacts, text and more.

Trying to find textless covers and trying to EDIT data can take far more time than making a darn good LoRA with anime content does.

So what do the test outputs look like?

Not at all what I'd expect for a LoRA trained on his style, but keep in mind that Larroca's style changes ever so slightly depending on the decade, the TITLE, and the company he's working for. He's clearly worked for others besides Marvel, but he's largely worked for Marvel on the X-Men and Avengers titles. He's not like Rob Liefeld or Joe Mad, whose styles are SUPER STRONG! He was known for a softer, very illustrative style, and sometimes got mocked for quasi-tracing in a time when DEVILISH deadlines were pressed on artists (and these days they're only getting worse).

The problem becomes: when you're using it on ANY model and it just looks like the top 10 comic models on Civitai, then your LoRA isn't worth much. At least FOR ME, there may be a little bit of a style in there, but when testing it on A1111 I never even kept the outputs (or can't find them at this stage), because they were worse than the above and below.

Keep in mind: the first version was a base SD 1.5 train, and the second one (neither is up on Civitai) was on AnyLoRA.

While its colors are CLEAR sometimes, again, the issue is that trying to train across 23+ years of data means very different things.

We'd opted to add both the 2000s X-Treme X-Men run, with LIQUID! on colors, and some Star Wars data with him on pencils. Yet nothing's consistent enough, even with multiple years of covers and pencil data.

Basically: on an ANIME or comic model, this LoRA just adds a tiny extra flair if you're doing basic prompts.

It could be that we didn't tag it properly, but we used the usual notebook - it's not Holo's fault we have poor data.

How to fix, and what's up with no Byrne?

We'll TRY doing Byrne, but his style is older and may not play ball as well with how deep and rich a lot of newer models are. And yes, some people will whine and go "but he's in the data". Well, by now, trying to get SD 1.5 to even do Joe Mad or Andy Kubert is like asking it what an apple is: it isn't quite sure, so it just guesses.

Layman's science for LoRAs is this: shove it in a folder, tag it, train it, test it.

It doesn't work? Rinse and repeat, or cry in a corner.
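For anyone curious what the "shove it in a folder, tag it" half of that loop looks like in practice, here's a minimal sketch of the dataset prep step, assuming the kohya-style layout (folders named `repeats_conceptname`, with one sidecar `.txt` caption per image). The function name, the trigger word, and the tags are all hypothetical, not our actual pipeline:

```python
# Hypothetical sketch of caption prep for a kohya-style LoRA dataset.
# Assumes folders like "10_larocca_style" (10 = repeats per epoch) full of images.
from pathlib import Path

IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".webp"}

def write_captions(dataset_dir: str, trigger: str, extra_tags: list[str]) -> int:
    """Write one sidecar .txt caption per image so the trainer can read tags.

    Returns the number of caption files written.
    """
    root = Path(dataset_dir)
    caption = ", ".join([trigger, *extra_tags])
    count = 0
    for img in sorted(root.rglob("*")):
        if img.suffix.lower() not in IMAGE_EXTS:
            continue  # skip non-image files and directories
        # Caption file sits next to the image: cover01.png -> cover01.txt
        img.with_suffix(".txt").write_text(caption, encoding="utf-8")
        count += 1
    return count
```

In a real run you'd use an auto-tagger (like the usual notebook does) instead of one flat caption, but the folder-plus-sidecar shape is the same; then you point the trainer at the folder, train, and test.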

Aight, well - you've whined enough, now what?

Easy, lol - just patience. We're still slowed a little, and we have a backlog a mile high from our own content and new requests.


If you've got requests or concerns, we're still looking for beta testers: JOIN THE DISCORD AND DEMAND THINGS OF US:

Lora Request Form:


Listen to the music that we've made that goes with our art:

We stream a lot of our testing on twitch:

Pre-Release Models, Lora Backups - Send us pizza!