
EditAnything

Updated: Apr 18, 2026

Tags: concept, anything, add, convert, edit, replace

Verified:

SafeTensor


Type

LoRA

Stats

580

0

Reviews

Published

Apr 18, 2026

Base Model

LTXV 2.3

Training

Steps: 9,000

Usage Tips

Strength: 1

Trigger Words

Add a/an [subject/object] with [attributes], [location in the scene].
Remove the [subject/object] [location or identifying description].
Replace the [original subject/object] [location] with a/an [new subject/object] with [attributes].
Convert the video into a [style name] style.

Hash

AutoV2
36721B3988

NRDX

This model was trained on 8,000 video pairs, and training is still ongoing for a few thousand more steps. It is still experimental, was not trained with a fully professional production target, and may be updated unexpectedly as new checkpoints become available.

The current goal is not final polished production quality, but to explore:

  • edit-anything behavior

  • prompt-following

  • inference tradeoffs

  • synthetic dataset building, especially for style data

The model was trained around four main prompt patterns:

Add
Add a/an [subject/object] with [clear visual attributes], [precise location in the scene].

Remove
Remove the [subject/object] [location or identifying description].

Replace
Replace the [original subject/object] [location] with a/an [new subject/object] with [clear visual attributes].

Convert / Style
Convert the video into a [style name] style.
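The four patterns above are just string templates, so they are easy to generate programmatically. The helpers below are a minimal sketch (not part of any official tooling) that fills in each template; the article choice ("a" vs. "an") is left to the caller via the subject string.

```python
# Illustrative helpers for composing the four edit-prompt patterns
# described above. All function names are hypothetical.

def add_prompt(subject, attributes, location):
    """Add a/an [subject] with [attributes], [location]."""
    return f"Add a {subject} with {attributes}, {location}."

def remove_prompt(subject, description):
    """Remove the [subject] [location or identifying description]."""
    return f"Remove the {subject} {description}."

def replace_prompt(original, location, new_subject, attributes):
    """Replace the [original] [location] with a/an [new subject] with [attributes]."""
    return f"Replace the {original} {location} with a {new_subject} with {attributes}."

def convert_prompt(style):
    """Convert the video into a [style name] style."""
    return f"Convert the video into a {style} style."

print(remove_prompt("black robot", "sitting at the table"))
# Remove the black robot sitting at the table.
```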

Workflow URL: https://huggingface.co/Alissonerdx/LTX-LoRAs/blob/main/workflows/ltx23_edit_anything_v1.json

One important thing during inference is CFG.

A good starting point is a distilled setup with CFG = 1. If the edit feels too weak or the model does not follow the prompt closely enough, increasing CFG can help. In some cases, raising the distill LoRA strength to around 1.2 also helps.

The workflow is also not fully optimized yet. It still needs more testing to find the best combination of:

  • CFG

  • LoRA strength

  • number of steps

  • model combinations
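One way to explore those settings systematically is a simple grid sweep. The sketch below only builds the parameter grid; the value ranges are illustrative guesses seeded from the tips above (CFG starting at 1, LoRA strength up to 1.2), not recommended settings.

```python
# Minimal sketch: enumerate inference-setting combinations to test.
# The dictionary keys are placeholders, not a real ComfyUI/diffusers API.
from itertools import product

cfg_values = [1.0, 2.0, 3.0]    # start at 1 for the distilled setup
lora_strengths = [1.0, 1.2]     # 1.2 can help when edits feel too weak
step_counts = [8, 12, 20]       # illustrative step counts

grid = [
    {"cfg": c, "lora_strength": s, "steps": n}
    for c, s, n in product(cfg_values, lora_strengths, step_counts)
]
print(len(grid))  # 18 combinations to try
```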

It may also be interesting to combine this model with other models and see what kinds of results emerge.

If you can test it, please share your findings. Feedback on prompt behavior, edit strength, consistency, style transfer, and failure cases would be very helpful while training is still in progress.

Another very important point: Remove prompts should give a clear direction, either a location in the scene or an identifying description, so the model knows exactly which object to remove.

Examples:

Remove the black robot sitting at the table.

Remove the person riding the electric scooter on the left.

Remove the person with glasses and the microphone in the foreground.

Remove the image of the green trees on the top left.

Remove the woman and the smoking bottle.

For example, if the object is in front, say "in the foreground"; other useful location terms are background, left, right, top, and bottom.

If this model was helpful to you in any way, please consider helping me continue creating more models for the price of a coffee.