I own an RTX 3060 myself and do everything on it. I couldn’t find any good guides on the Internet on how to set everything up for video cards below a 4090, so I decided to write my own. During generation it takes up 7.7 GB of dedicated GPU memory, 22.9 GB of RAM and almost all of the shared GPU memory, so I tried not to touch the computer while the video was being made. A 512x768 video of 16 frames at 8 fps takes me about an hour with a checkpoint, VAE and upscaler. If you don’t have enough memory, I advise you to follow the guide exactly, because it fixed ALL of my mistakes when creating the video. I wrote this article because people limit themselves to creating pictures, and nobody could help me with my problem, so I had to google everything myself. This article collects the knowledge from several sites that helped me.
Let's start by setting up webui-user.bat
You need to open the file as text (to do this, you can change the extension from .bat to .txt), delete everything and paste this text:
@echo off
git pull
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--xformers --upcast-sampling --lowvram
set CUDA_MODULE_LOADING=LAZY
set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:32
call webui.bat
I’ll briefly explain what each of these does.
--xformers - reduces memory consumption and increases generation speed
--lowvram - further reduces GPU memory usage, but greatly reduces generation speed. Only what is needed for the current step is kept in GPU memory; between steps the data is moved to RAM. Not recommended unless absolutely necessary.
--upcast-sampling - fixes the occasional "Upcast cross attention layer to float32" error when using xformers and generally increases generation speed
The penultimate line configures the PyTorch memory allocator: garbage_collection_threshold:0.6 makes it start reclaiming cached memory once about 60% of VRAM is in use, and max_split_size_mb:32 stops the allocator splitting larger blocks, which reduces fragmentation. Together this helps avoid out-of-memory errors during generation.
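The same allocator setting works outside the webui as well; here is a minimal standalone sketch (it assumes PyTorch with CUDA is installed) just to show where the variable has to be set:

import os

# The allocator reads this variable when it initializes, so set it before importing torch
# (webui-user.bat does the same thing with the "set" line above).
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "garbage_collection_threshold:0.6,max_split_size_mb:32"

import torch

x = torch.zeros((2048, 2048), device="cuda")        # force a CUDA allocation
print(torch.cuda.memory_summary(abbreviated=True))  # overview of allocated/cached memory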
If you have more than 6 GB of VRAM, you can change --lowvram to --medvram once you are done generating video, so the line becomes: set COMMANDLINE_ARGS=--xformers --upcast-sampling --medvram
Change the file extension back to .bat and run it.
Installing AnimateDiff
Go to the Extensions tab, then to the Available subsection and click the Load from button. This will open a list of VERY many extensions that we can install for our neural network.
Enter "AnimateDiff" into the search bar and install what is shown in the screenshot
After installation, go to the adjacent "Installed" tab and click "Apply and restart UI". Once the UI reloads, shut the program down completely. Before running it again, we need to download the models for this plugin. You can download them here, or pick something of your own. You can also download models for beautiful camera movement, but I didn’t find anything interesting for myself, so I don’t care.
Once you have downloaded a model (I use mm_sd_v15_v2.ckpt), move it to this path - StableDiffusion Directory\extensions\sd-webui-animatediff\model - and launch the web UI.
If you have installed everything successfully, then under the basic settings for generation you will have a new section "AnimateDiff". It's not that difficult to figure out, since there are very few settings.
Let's look at just some of the settings, since you're clearly not reading this guide because you're a neural network genius, and I'm too lazy :gigachad:
Motion Module is our model for character movement. In general, v14 is recommended since it is more stable; however, 15 is a bigger number, which means it is cooler.
Save Format - the format the generation result will be saved in. I advise picking webp/png, since the rest either do not save properly or simply do not play. Before saving in an image format, think a hundred times, because every frame will be saved separately and you will have to put them together yourself.
Number of frames - the total number of frames in your video; I advise setting it to a multiple of two. FPS - the number of frames per second. 16 frames at 8 fps = 2 seconds of video.
Closed loop - controls looping of your video. R-P - the video will NOT loop, R+P - the video WILL loop, A - makes the last frame as similar to the first as possible.
Using R+P is better if your total number of frames is greater than or equal to Context batch size * 2. In other words, if Context batch size is 16, the total number of frames should be at least 32.
Set these however suits you and leave the rest as it was. The more frames you ask for, the longer the video will take to create; there is a small sketch of this arithmetic just below.
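To save the mental arithmetic, here is a tiny sketch in plain Python (the function names are mine, not part of AnimateDiff) that computes the clip length and checks the R+P rule described above:

def clip_duration_seconds(num_frames: int, fps: int) -> float:
    # How long the finished clip will play, before any interpolation.
    return num_frames / fps

def loop_r_plus_p_ok(num_frames: int, context_batch_size: int = 16) -> bool:
    # R+P looping works best when the frame count is at least twice the context batch size.
    return num_frames >= 2 * context_batch_size

print(clip_duration_seconds(16, 8))   # 2.0 seconds, as in the example above
print(loop_r_plus_p_ok(32))           # True: 32 frames with the default context of 16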
The result will be saved in this directory - StableDiffusion\outputs\txt2img-images\AnimateDiff
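If clips start piling up in that folder, here is a quick sketch for grabbing the newest one (adjust the path to wherever your webui is actually installed):

import pathlib

# Assumed install location - change this to your real StableDiffusion directory.
out_dir = pathlib.Path(r"C:\StableDiffusion\outputs\txt2img-images\AnimateDiff")

# Pick the most recently modified result, whatever format it was saved in.
results = [p for p in out_dir.rglob("*") if p.suffix in (".webp", ".png", ".gif", ".mp4")]
print("Newest result:", max(results, key=lambda p: p.stat().st_mtime))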
Fill in the prompt fields if you have not already done so, tick the "Enable AnimateDiff" box and click the "Generate" button.
Please note that AnimateDiff conflicts with ADetailer, so I recommend disabling it while generating the video.
How to increase video length?
If you've looped the video, this shouldn't be a problem, although zooming in manually isn't a good idea either.
You can use the Deforum plugin for this. Installation is exactly the same as for AnimateDiff.
After installation, you again need to restart webui-user.bat, only then you will be able to use this extension.
If you still want to use it, go to the AnimateDiff section and change the Frame Interpolation parameter there from "Off" to "FILM", and enter "10" in the Interp X field.
This slows the video down 10 times: FILM inserts extra in-between frames, so 30 generated frames played at 30 fps will last 10 seconds instead of one.
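The same arithmetic as a minimal sketch (the function name is mine):

def interpolated_duration_seconds(num_frames: int, fps: int, interp_x: int) -> float:
    # Each generated frame becomes interp_x output frames, played back at the same fps.
    return num_frames * interp_x / fps

print(interpolated_duration_seconds(30, 30, 10))   # 10.0 seconds instead of 1
print(interpolated_duration_seconds(16, 8, 10))    # 20.0 seconds out of the 2-second example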
How to make a video in which the character will change?
To do this you need to use "Prompt Travel". It is installed in the same way as the other add-ons.
After installation, restart webui-user.bat again.
How to use it?
Usage is as simple as possible. In the positive prompt field, enter keyframes in the form frame number: prompt, one per line:
frame_number_1: what_it_starts_as
frame_number_16: what_it_will_turn_into
For example:
1: 1girl, flat_chest
8: 1girl, medium_breasts
16: 1girl, big_breasts
In this example, the first frame will show a girl with a flat chest, by the middle of the video the breasts will grow to medium, and by the end they will become big
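If you build these keyframe lists often, a throwaway helper like this (my own, not part of the extension) saves a bit of typing:

def build_prompt_travel(keyframes: dict) -> str:
    # Turns {frame: prompt} into lines ready to paste into the positive prompt field.
    return "\n".join(f"{frame}: {prompt}" for frame, prompt in sorted(keyframes.items()))

print(build_prompt_travel({
    1: "1girl, flat_chest",
    8: "1girl, medium_breasts",
    16: "1girl, big_breasts",
}))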
Optimization and problem solving
In addition to what I mentioned at the beginning of the article, there are several more ways to optimize image generation, although you will have to sacrifice some generation speed. Go to Settings (1), then to the "Optimization" subsection (2).
Uncheck Batch cond/uncond (3) (if you have little VRAM - 8 GB was not enough for me, so I had to take such measures). This option speeds generation up by running the conditional and unconditional prompts through the GPU in one batch, at the cost of extra VRAM.
Next, enable Pad prompt/negative prompt to be same length.
Let's move on to point 5, Cross attention optimization. If your video card is from the 1xxx series, choose xformers; if it is 2xxx or newer, enable sdp. If you don't need to save memory, pick sdp-no-mem.
We return to the main generation page and look for Batch size there. It increases speed by generating several images at the same time; I recommend setting the maximum value. The Nvidia 536.40 drivers released in the summer let the GPU spill over into shared GPU memory: whereas previously the model had to fit entirely into the GPU's dedicated memory, now it can also occupy the GPU's shared memory (which is part of the computer's RAM).
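To see how much dedicated memory you actually have before cranking Batch size up, you can run a quick check from any Python console with PyTorch installed (a sketch, not part of the webui):

import torch

# Total dedicated VRAM of the first GPU and how much of it is currently in use.
props = torch.cuda.get_device_properties(0)
print(f"{props.name}: {props.total_memory / 1024**3:.1f} GB dedicated VRAM")
print(f"currently allocated: {torch.cuda.memory_allocated(0) / 1024**3:.2f} GB")
print(f"reserved by the allocator: {torch.cuda.memory_reserved(0) / 1024**3:.2f} GB")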
Disabling hardware acceleration in the browser
The browser uses the GPU for faster page rendering and nice animations. We do not need this while working with the neural network, so it is recommended to turn it off. It is disabled in the browser's settings.
Other
Be sure to close all applications that may use the GPU, such as Telegram, Steam, Discord, and so on.
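If you are not sure what is still sitting on the GPU, nvidia-smi (it ships with the Nvidia driver) lists every process and how much VRAM it holds; calling it from Python is trivial:

import subprocess

# Prints GPU utilisation plus a per-process VRAM table, so a forgotten browser or game is easy to spot.
subprocess.run(["nvidia-smi"], check=True)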
Do not generate images larger than 512x768; this may lead to errors no matter how much VRAM you have.
Do not write huge prompts, including negative ones. It is better to replace them with a compact version packed into a single file, for example an embedding.
Graphics settings menu
Go to Windows settings, find the "Graphics settings" section there and disable hardware acceleration (the Hardware-accelerated GPU scheduling toggle). It improves gaming performance, but can get in the way when working with the neural network. Disabling this option will help noticeably increase the speed of video generation.