Okay, so I thought this process would be hard, and at first it was, but it's actually very easy!!

First, let's talk about the Bad

When I researched training AnimateDiff Motion LoRAs, everything pointed to a repository called AnimateDiff-MotionDirector. The repo is about a year old, with the last commit made about 4 months ago (as of this post). The installation steps look like those of any diffusion-based app: do a git clone, create a new env (conda create), and pip install the requirements.txt.
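For reference, here's a minimal sketch of those standard steps (the GitHub path and env name are my assumptions, not copied from the repo's README):

```
# Hypothetical install steps for AnimateDiff-MotionDirector
# (repo path and env name are assumptions)
git clone https://github.com/ExponentialML/AnimateDiff-MotionDirector.git
cd AnimateDiff-MotionDirector

# Create and activate a fresh conda environment
conda create -n animatediff-md python=3.10 -y
conda activate animatediff-md

# Install the pinned dependencies
pip install -r requirements.txt
```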

Well that’s where it all turns to SHIT!!

AnimateDiff MotionDirector

The installation does not run correctly: there are numerous errors, and the repo requires a whole 40GB clone of https://huggingface.co/runwayml/stable-diffusion-v1-5, which is never-ending. It doesn't specify whether you need the .bin or the .safetensors files, so you are already doubling the download (and the time to download it), because the repo contains both versions of each file.
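If you do attempt it anyway, you can at least avoid pulling both formats. Here's a sketch using the huggingface_hub CLI's include patterns (exactly which files the repo actually needs is an assumption on my part):

```
# Hypothetical: download only the .safetensors weights plus configs,
# skipping the duplicate .bin files (the pattern list is an assumption)
pip install -U "huggingface_hub[cli]"
huggingface-cli download runwayml/stable-diffusion-v1-5 \
    --include "*.safetensors" "*.json" "*.txt" \
    --local-dir ./stable-diffusion-v1-5
```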

So basically, after burning some 3-4 hours on RunPod and a day trying locally on my computer, I gave up on this repo and method.

The Good and Very Easy

I started searching online and found this wonderful video. After watching about 2 minutes, I could have kicked myself. The process couldn't be easier, and it's all inside ComfyUI, which I've already been running successfully.

The only Tutorial you need to watch IMHO

The video's description links to a ComfyUI workflow you can download. Import it into your ComfyUI install and install any missing nodes using ComfyUI Manager.

Watch this quick 3-minute video on how to fix any missing nodes.

ComfyUI Manager – Install Missing Nodes

The workflow looks like this; it's actually very easy to use, and the defaults work pretty damn well.

You need the model and LoRA shown below:

Make sure you "Refresh" your workflow so that ComfyUI can see the models, which you will select next.

Training your Motion LoRA

You need a video file to train the Motion LoRA on, so upload that to the Load Video node.

In the ADMD_AdditionalModelSelect node, make sure you select the model names as highlighted. In the ADMD_CheckpointLoader node, you can select the SD1.5 model of your choice.

Provide a prompt that describes what your video is about (shown below in red), and also provide a prompt for AnimateDiff to use when generating a resulting video with the Motion LoRA (shown below in green). Finally, give your Motion LoRA a name, e.g. morpheousC, as indicated by the orange arrow.

Queue up your workflow and let ComfyUI do its thing. The default workflow is designed to train on 16 frames, but on my computer I was able to push it to 24 frames; at 32 frames my GPU ran out of memory (CUDA out of memory). Training times on my local machine were as follows:

  • 16-frame training – 20 minutes on an RTX 4080
  • 24-frame training – 25 minutes on an RTX 4080

When execution is done, you will find the Motion LoRA under \models\animatediff_motion_lora, in a folder created as <Date>\<Time>\<Lora_name>. There you will find the .safetensors files, one saved every 50 training steps.
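The output folder ends up looking roughly like this (the date, time, and filenames below are illustrative, not actual output):

```
# Hypothetical output of `tree` on the LoRA output folder
$ tree ComfyUI/models/animatediff_motion_lora
ComfyUI/models/animatediff_motion_lora
└── 2024-05-01
    └── 14-30-12
        └── morpheousC
            ├── morpheousC_50.safetensors
            ├── morpheousC_100.safetensors
            └── morpheousC_150.safetensors
```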

The workflow also produces a comparison video, which you can review to decide which Motion LoRA you are going to keep and use. With the workflow's default settings, each LoRA file should be 128MB.

Thanks to this method, I'm able to train Motion LoRAs for AnimateDiff within minutes, and the results are super good. Check out my socials (blog and personal) for some sample results, as well as my store if you want to get my Motion LoRAs.

If you'd like to support our site, please consider buying us a Ko-fi, grabbing a product, or subscribing. Need a faster GPU? Get access to the fastest GPUs for less than $1 per hour with RunPod.io.