(Just look at this picture, I'm so proud of my art skills 🥰)
Before we dive into all the behind-the-scenes mess and lessons from making my first comic… if you haven’t actually seen it yet, here’s the link: https://civitai.com/articles/25869
Go take a quick look (it’s short!), then come back — it’ll make the workflow stuff hit way better. 😄
In this article, I want to share what I’ve learned while creating my first comic. I also want to ask the community to check my ideas and think together about how we can make things better and faster. I’ll try to keep this useful for you and focus more on the process rather than just my personal story. 😊
You can use the table of contents to find the specific information you need.
Plot
Let’s start with how I used chatbots for brainstorming the plot and reviewing the text.
I tried ChatGPT, Claude, Gemini, and Grok. Since my comic has a comedy style, I found that only Grok really understood the jokes and was able to help generate new ones.
Here is how I used Grok:
Learning the Basics: First, I asked for best practices on comic creation, like the right pace, how many frames to put on a page, and how many pages should be in a chapter.
Drafting: Next, we brainstormed the pages and how to split the story. I got a good draft here, though it changed quite a bit once I actually started creating.
Panel Brainstorming: Then we went through the panels page by page. I gave Grok my vision, and he suggested different ideas. Grok tends to be very detailed—maybe too detailed for what I actually built—but after reading some manga, I realized he was right and I probably need to slow down the pace and use more frames.
Text Review: Finally, I showed him my almost-ready panels, and Grok improved the language. Even if you are good at making jokes, Grok can help make the wording much funnier.
Lora for Characters
This is where I struggled, and I really need your help! 🆘
I’ll share what I know so far. Big thanks to MisterMR for the articles and whity00 for the tips in my DMs, and of course Grok for the research.
I used the WAI-illustrious-SDXL v14 model. One mistake I made was training the Lora on the base Illustrious model. While this makes it work with more models, it’s usually better to train it on the exact model you plan to use for generation.
There are two ways I know to do this:
The Iterative Way: Create a low-quality Lora from just one image. Use that to generate more images, pick the best ones, and use them to train a better Lora. You will usually need to fix some details manually. Grok also suggested using ControlNet for specific angles, but I’m not sure how well that works yet. 🤔
Using Image Edit Models: You can use editing models to make more images for your training set.
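The "iterative way" above is really just a train → generate → curate → retrain loop. Here is a minimal sketch of that control flow in Python; every function (`train`, `generate`, `pick_best`) is a toy stand-in, since the real steps are a Lora trainer, an image model, and your own eyes:

```python
def bootstrap_lora(seed_images, train, generate, pick_best, rounds=2):
    """Iteratively grow a training set: train -> generate -> curate -> retrain."""
    dataset = list(seed_images)
    lora = train(dataset)                  # low-quality Lora from the seed image(s)
    for _ in range(rounds):
        candidates = generate(lora, n=20)  # generate new images with the current Lora
        dataset += pick_best(candidates)   # keep only the good ones (manual curation)
        lora = train(dataset)              # retrain on the bigger, better dataset
    return lora, dataset

# Toy stand-ins just to show the control flow:
train = lambda ds: f"lora_v{len(ds)}"
generate = lambda lora, n: [f"{lora}_img{i}" for i in range(n)]
pick_best = lambda cands: cands[:3]        # pretend we curated 3 keepers per round

lora, dataset = bootstrap_lora(["seed.png"], train, generate, pick_best)
print(lora, len(dataset))  # -> lora_v7 7
```

In practice each `train` call is a full Lora training run, so two or three rounds is usually the realistic limit.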
License considerations: Be careful with licenses! Major cloud models like Nano Banana or OpenAI's prohibit “developing machine learning models that compete” with them, and both Grok and Gemini say this includes Loras. I think these rules are mostly aimed at competitors, so they probably won't sue you, but I wanted to warn you anyway. ⚠️
Open-weights models like Flux Kontext or Flux Klein 9b have similar license issues. However, models like Qwen Edit or Flux Klein 4b use the Apache 2.0 license, so they should be safe to use.
If anyone wants to help me with Loras, I’m happy to pay "yellow buzz" for the training and the work! 💰
The First Page (Throne and Harem)
This page took a long time because of all the inpainting, but the steps are simple. I generated the first image on Civitai using the Group Hug Lora.

Then I upscaled it in two steps. The first upscale used a high denoise (around 0.7) and the second was much lower (around 0.2).
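The two-pass idea can be written down as a small recipe. This is illustrative pseudo-config, not any real tool's format; the scale factors are placeholders, only the denoise values match what I actually used:

```yaml
upscale:
  - pass: 1          # repaint pass: adds new detail
    scale: 2.0       # placeholder, e.g. 1024x1024 -> 2048x2048
    denoise: 0.7     # high denoise: the model redraws a lot
  - pass: 2          # refinement pass: cleans up without changing composition
    scale: 1.5       # placeholder
    denoise: 0.2     # low denoise: mostly sharpening and fixing seams
```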
After that, I did a lot of inpainting—over 1,000 generations! 🤯
Lessons learned:
Start Naked: I recommend generating characters without clothes first, just like traditional artists do. 🌶️ It is much easier to paint a dress over a body than it is to try and turn a green hoodie into a blue yukata. Plus, bulky clothes can leave trash in the background.
Work Back to Front: Start inpainting objects in the far background first. This helps you hide messy parts with objects in the foreground later. This is actually how the redhead catgirl in the green dress appeared—I used her to hide some weird background errors! 🐱
Denoise Control: I kept my inpainting denoise around 0.3. This keeps things consistent, but it means the model can't make big changes (like changing colors). You have to "sketch" what you want first. To prove it, look at my amazing drawing skills 😅:

Final Polish: As a last step, I ran a low denoise (0.05) img2img. This fixed small shadows and color issues. Just be careful, as it can change tiny details like eye color, which I had to fix manually in Krita using layers. ✨
Here is a short video showing the progress:
Backgrounds
Keeping backgrounds consistent is one of the hardest parts of making a comic. I want my work to look professional, so I put a lot of effort here. 🏘️
For the first chapter, I made a static background for the bedroom and the UnstableAI data center. For other scenes, I decided to keep the pace fast, so it is fine if the background changes every frame.
Here is my process for backgrounds:
1. Generate a base image
I started with this picture:

Gemini was surprisingly bad at generating the room from this picture; the main problem was that it refused to use the right viewing angle.

I ended up using img2img on Civitai with the WAI-illustrious-SDXL v14 model. The angle was perfect, but there were still artifacts.

2. Fixing "trash"
Every model creates weird artifacts you don't want. You have two ways to fix this:
Standard Editing: In Krita, the “Smart Patch Tool” is great for replacing areas automatically. It doesn't work every time, though, so I also had to learn to use the eraser, gradient, and blur tools.
AI Editing Models: Flux Klein 4b and Qwen Edit are free to use, even for commercial purposes. However, the quality is not always great: Qwen Edit changes the style too much, and Flux Klein 4b is not very "smart." I usually ran the same prompt many times and picked the best result.
You can also use Gemini's Nano Banana or ChatGPT, but they have usage limits and were slower than I wanted.
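The "run the same prompt many times and pick the best" step is a generic best-of-N loop. Here is a sketch; `generate` and `score` are stand-ins (in my case "score" was just me eyeballing the results, but an automatic aesthetic scorer could fill that role):

```python
import random
from typing import Callable, TypeVar

T = TypeVar("T")

def best_of_n(generate: Callable[[int], T],
              score: Callable[[T], float],
              n: int = 8) -> T:
    """Run the same generation n times (different seeds) and keep the best one."""
    candidates = [generate(seed) for seed in range(n)]
    return max(candidates, key=score)

# Toy demo: "generation" is a seeded random number, "score" prefers larger values.
def fake_generate(seed: int) -> float:
    return random.Random(seed).random()

best = best_of_n(fake_generate, score=lambda x: x, n=8)
print(best)
```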
Here is the result after cleaning it up:

3. Fixing Scale
I didn't notice the scale was wrong until I tried to put my character on the bed. The bed was smaller than the desk, so my character looked like a child!
AI models really struggle with scale. Even powerful ones like Nano Banana Pro couldn't fix it. I had to go back to Krita and manually resize things—making the bed bigger and the monitor smaller.
It looked ugly at first:

Then I used img2img with 0.3 denoise to smooth everything out. Here is the final result:

4. Adding more objects (optional)
Adding extra items makes the world feel alive. There are three options for this:
Inpainting with Image Edit models: This can be the simplest solution. I used it to add fire to the machine.
Generate and Insert: You can generate items separately, then add them manually in Krita. This is how I added prints and action figures. I generated them on Civitai, cropped them, and placed them in the scene. You have to handle shadows and perspective yourself, but it works well!
Inpainting with generation models: This is similar to how I added the posters on the wall.
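For the "generate and insert" option, the only real math behind pasting a cropped item is alpha compositing (the "over" operator). Krita's normal-blend paste does this for you, but here is a minimal per-pixel sketch in pure Python (RGBA tuples, channels 0..255) to show what is happening under the hood:

```python
def over(fg, bg):
    """Composite one RGBA pixel over another (the 'over' operator).
    Pixels are (r, g, b, a) tuples with channels in 0..255."""
    fr, fg_, fb, fa = [c / 255 for c in fg]
    br, bg_, bb, ba = [c / 255 for c in bg]
    a = fa + ba * (1 - fa)                      # resulting alpha
    if a == 0:
        return (0, 0, 0, 0)                     # fully transparent result
    blend = lambda f, b: (f * fa + b * ba * (1 - fa)) / a
    r, g, b = blend(fr, br), blend(fg_, bg_), blend(fb, bb)
    return tuple(round(c * 255) for c in (r, g, b, a))

# A half-transparent red sticker over an opaque white background:
print(over((255, 0, 0, 128), (255, 255, 255, 255)))  # -> (255, 127, 127, 255)
```

This is also why soft (partially transparent) edges on your cropped item matter: they are what blends it into the background instead of leaving a hard cut-out outline.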
Here is the result:

Composition
There are several ways to combine everything together for each panel:
1. The simple way
If you have only one character on the panel and you just want to show their emotion, you can generate the whole scene at once. I learned after chapter 1 that even professional comics and manga don’t always use backgrounds for reaction shots. Sometimes a character on a white background is enough!

2. Background + Inpaint
If you already have a background, you can just inpaint the character into it. This works very well if you have a Lora for that character.

3. Background + manual insert + inpaint
It is easy to generate a pose you like, but very hard to get the exact pose you want. One way to get an exact pose is ControlNet, but I am still learning how to use it. I found this article, but haven't tested it yet:
https://civitai.com/articles/19231/my-process-of-creating-ai-generated-images
PoseMyArt + img2img + inpaint
I used the PoseMyArt website to set up the exact composition and perspective I wanted. Running img2img directly on its output failed, but the render was still useful for getting the background into the correct perspective.



Gemini + inpaint
By the end of the chapter, I used Gemini to generate the base, made some manual changes, and then finished with inpainting. I think this is clear from the video, but if not, please leave a comment and I will explain!
Generative AI for Krita extension
It is worth mentioning the "Generative AI for Krita" extension (GitHub link). It provides a simple interface for generation and inpainting and supports most modern models. I used it on the first pages to remove artifacts, but I eventually stopped: ComfyUI is more flexible and let me work faster. For some people, though, this extension might be more convenient.
Boring stuff (Krita)
I used Krita to put everything together into finished pages.
Panels
Making panels is easier than you think—just use a manga template!
File -> New… -> Comic Templates -> Manga template

Then, draw your panels in a mask layer. I used to think there were strict rules for this, but it seems like everyone just places them how they like. So, just do what feels right! Once you have the mask, everything you insert will be cut to fit the panels. Don't forget to use these buttons:

Speech bubbles
I researched many tools for making manga, especially for speech bubbles. It turns out Krita has a very simple way to do this already!
1. In Krita 5.2.2, activate the Symbol Libraries: Settings -> Dockers -> Symbol Libraries.
2. You will see a large library of balloons for speech, thoughts, and yelling.

3. Select the vector layer where you added the bubble. In the top menu, click Layer -> Layer Style, enable “Stroke,” and set it up however you like.

Fonts
Even though modern models support adding text to images, the simplest solution is still adding it manually in Krita.
You will need a single font that you can reuse across all chapters.
https://www.dafont.com/theme.php?cat=102&page=6&l[]=1
Here you can find fonts that you can use for free, even for commercial purposes.
Just don’t forget to tick the “100% Free” checkbox.

Let's talk about it
I really want to hear what you think about this workflow. Does it make sense to you, or are there better ways to do this?
I would love your advice on a few things. Do you know a faster way to handle backgrounds or keep characters consistent? If you have any good articles or guides that could help me improve my speed or quality, please share them.
I am still looking for someone to help me with training and working with Loras. I am ready to pay in yellow buzz for both the training and the production work. Let's build something cool together!


