Type | |
Stats | 571 510 |
Reviews | (74) |
Published | Dec 28, 2023 |
Base Model | |
Usage Tips | Clip Skip: 2 |
Hash | AutoV2 73377B118A |
Want to make anime/manga style images but aren't a master prompt sculptor? AnimEasy's been tweaked to try and still generate images with decent colouring and lineart no matter what the prompt (even no prompt at all). See the showreel above for some examples of what you can get with just a single word.
For best results use:
Sampler: DPM++ 2M Karras (they're all pretty similar - if your images come out garbled, either ditch the LoRA or try "DPM Adaptive", which is more robust but slow)
Steps: start with 20 (less than 10 gets kinda fuzzy, more than 30 is often a waste)
Clip skip: 2 (1 can sometimes work better for rare and unusual prompts)
CFG scale: 7 for most things, 11 to follow the prompt really closely and 3 or less for rare or unusual stuff
LoRAs - if you start seeing speckles/spots/inkblots, turn down the LoRA strength
VAE - There's already one baked into the model, although you can use your own
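If you run the model through the AUTOMATIC1111 web UI's HTTP API rather than the browser, the settings above map onto a txt2img payload roughly like this. This is a minimal sketch: the endpoint and field names follow the stock `/sdapi/v1/txt2img` API, while the prompt and the LoRA name (`animeasy_style`) are placeholders, not real files.

```python
import json
import urllib.request

# Sketch of a txt2img request carrying the recommended settings.
payload = {
    "prompt": "1girl, cityscape <lora:animeasy_style:0.8>",  # lower :0.8 if you see speckles/inkblots
    "negative_prompt": "",
    "sampler_name": "DPM++ 2M Karras",  # try "DPM Adaptive" if outputs come out garbled
    "steps": 20,       # fewer than 10 gets fuzzy, more than 30 is often a waste
    "cfg_scale": 7,    # 11 = follow the prompt closely, 3 or less for rare/unusual stuff
    "width": 512,
    "height": 512,
    # Clip skip is a settings override rather than a top-level field:
    "override_settings": {"CLIP_stop_at_last_layers": 2},
}

def submit(url="http://127.0.0.1:7860/sdapi/v1/txt2img"):
    """Send the request to a locally running web UI instance."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)
```

The VAE needs no field here since it's baked into the model; the web UI picks it up automatically unless you override it in settings.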
To upscale images use:
Upscaler: R-ESRGAN 4x+ Anime6B (Latent doesn't seem to like anime faces as much)
Denoise strength: 0.5 (higher changes the scene more but may add more details)
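In API terms, the upscaling pass above corresponds to the hires-fix fields of the web UI's txt2img payload. A sketch, assuming the stock AUTOMATIC1111 field names:

```python
# Hires-fix fields for an AUTOMATIC1111 txt2img payload (sketch).
hires_fields = {
    "enable_hr": True,
    "hr_upscaler": "R-ESRGAN 4x+ Anime6B",  # Latent tends to mangle anime faces
    "hr_scale": 2,                          # upscale factor relative to base resolution
    "denoising_strength": 0.5,              # higher changes the scene more, may add details
}
```

These get merged into the same request body as the base settings; the web UI then renders at base resolution and upscales in a second pass.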
Recommended (but not required) plugins
ADetailer - fixes up faces, use face_yolov8 models for best effect
ControlNet - for when you want to use a reference image
Set the preprocessor to: lineart_anime (2d) or lineart_realistic (3d or real life)
Set the model to: control_v11p_sd15s2_lineart_anime
Start with a control weight of 0.5, higher = output matches input more closely
Set control mode to: ControlNet is more important (follows input more closely)
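Over the API, the ControlNet extension takes these settings as an `alwayson_scripts` entry in the txt2img payload. A sketch based on my reading of the sd-webui-controlnet extension's API (the integer encoding of `control_mode` and the placeholder image string are assumptions, not from the source):

```python
# One ControlNet unit for the sd-webui-controlnet extension's API (sketch).
# "reference_image_b64" is a placeholder for your base64-encoded input image.
controlnet_unit = {
    "module": "lineart_anime",  # or "lineart_realistic" for 3D / real-life references
    "model": "control_v11p_sd15s2_lineart_anime",
    "weight": 0.5,              # higher = output matches the input more closely
    "control_mode": 2,          # assumed: 2 = "ControlNet is more important"
    "image": "reference_image_b64",
}
alwayson_scripts = {"controlnet": {"args": [controlnet_unit]}}
```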
As a challenge, try a one-word (or even no-word) prompt with your favorite model. I find it can tell you a lot about how robust and generalist a model is, and what kinds of images were present in its training data.
Below are some examples (50 steps, hi-res on, prompt in caption)
(ok, ok, I'll stop with the memes now)
This model is primarily built from BetterThanNothing's "Toonify" and BigBeanBoiler's "Flat-2D-Animerge" with a touch of CyberAlchemist's "Anime Lineart" and "Thicker Lines" LoRAs. The main motivation was to create a model that could still produce good images with really limited prompts. About 30 of the most popular anime/manga models on Civitai were tested against around 300 single-word prompts and scored on the quality of their images.
Across the 8500 total images, Toonify and Animerge both consistently scored the highest. Many of the other models either produced a lot of garbled gibberish or just spat out blatant real-world stock photos. This test wasn't about which models can produce good images (they all do that), but about which models would still produce good images with extremely vague and limited prompts.
Animerge on its own produced a decent variety of images but sometimes struggled with some of the prompts I tested, occasionally generating fuzzy, noisy images. Toonify had the most consistent linework of any model but had a habit of trying to stick little miss redhead into pretty much every image (often without her clothes on!).
AnimEasy is intended to be the middle ground between the two: good colouring and linework pretty much all the time, a wide and varied set of generated images, and fewer surprise NSFW outputs (you'll still get them if you deliberately ask for them, it just won't throw them up out of the blue as often).