
Control the Camera: Open Source Video Tips + Tools (Creative Community Stream)


Intro

This week, I’m teaming up with Jacob over at ThinkDiffusion for the Creative Community Stream this Friday at 3PM PST to give you an introductory rundown of tools and techniques to help with producing videos in Wan and Hunyuan.

We’re going to cover how you can get the best results out of these models on both platforms – whether through prompting and LoRA magic on Civitai, or through ThinkDiffusion's new Wan & Hunyuan Starter Kit, a collection of easy-to-use, highly customizable prebuilt workflows for image-to-video, video-to-video, start-and-end-frame video, and more.

Whether you’re just getting started, have some experience and want to take video control to the next level, or have a particular contest in mind and want the best possible results, you won’t want to miss it.

Recording:

Top Tips

For Hunyuan / Wan 2.1 prompting, the top tip (aside from checking the Education Hub prompting resource) is to use an LLM like ChatGPT, DeepSeek, or Claude 3.7 to help transform your text ideas or images into a video prompt.

All three of these LLMs can already produce usable video prompts if you ask something like:

Please turn the attached image into a Hunyuan video prompt - focusing on [action]

or

Please make a Wan 2.1 t2v prompt for a scene where [characters / actions / camera movement] happens

You can also give the LLM a system prompt like the one below, which tells it to reply with a properly formatted motion prompt regardless of which model you're targeting:

You are an assistant that writes short, motion-focused prompts for animating images.

When the user sends an image, respond with a single, concise prompt describing visual motion (such as human activity, moving objects, or camera movements). Focus only on how the scene could come alive and become dynamic using brief phrases.

Larger and more dynamic motions (like dancing, jumping, running, etc.) are preferred over smaller or more subtle ones (like standing still, sitting, etc.).

Describe subject, then motion, then other things. For example: "The girl dances gracefully, with clear movements, full of charm."

If there is something that can dance (like a man, girl, robot, etc.), then prefer to describe it as dancing.

Stay in a loop: one image in, one motion prompt out. Do not explain, ask questions, or generate multiple options.

Check out the Civitai ChatGPT FramePack Motion Prompter, which is already set up with this system prompt and works well for Hunyuan/Wan 2.1 prompting.
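
If you'd like to automate this pattern yourself, here's a minimal sketch of a helper that sends an image plus the system prompt above to a vision-capable LLM and returns a single motion prompt. It assumes the OpenAI Python SDK with an OPENAI_API_KEY set in your environment; the model name (gpt-4o) and the image path are illustrative placeholders, and any vision-capable chat model should work.

import base64
from openai import OpenAI

# Condensed version of the system prompt described above.
SYSTEM_PROMPT = (
    "You are an assistant that writes short, motion-focused prompts "
    "for animating images. When the user sends an image, respond with "
    "a single, concise prompt describing visual motion. Prefer larger, "
    "more dynamic motions. Describe subject, then motion, then other "
    "things. One image in, one motion prompt out."
)

def image_to_motion_prompt(image_path: str) -> str:
    # Encode the image as a base64 data URL so it can be sent inline.
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: swap in any vision-capable model
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": [
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ]},
        ],
    )
    # The reply is the motion prompt itself, ready to paste into a
    # Hunyuan or Wan 2.1 image-to-video workflow.
    return response.choices[0].message.content.strip()

print(image_to_motion_prompt("my_image.png"))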

About ThinkDiffusion

ThinkDiffusion is your Open Source Gen AI Workspace in the cloud, offering private, dedicated, turbocharged machine instances fully loaded with Automatic1111, ComfyUI, Kohya, and other cutting-edge open-source apps. Simply pay the hourly rate to launch a machine and use any models & extensions with no restrictions!

ThinkDiffusion was one of Project Odyssey Season 2’s Gold Sponsors and has been a huge proponent of open-source gen AI video production.

🎬 Lights, Camera, AI-ction Contest 🎥

As part of our official launch of Hunyuan and Wan 2.1 here on Civitai, we’re kicking things off with the action-packed Lights, Camera, AI-ction Contest!

With prizes of up to 100,000 buzz available and video generations starting at only 100 buzz, there's never been a better time to get started with AI video generation! 🎞️
