Scott Howard
09/19/2025, 12:16 AM
Fine Tuning Friday
We’re working with Qwen & WAN2.2: multi-character video generation!
We’re building a fine-tuning pipeline (rough sketch after the list) that will:
1) Segment the characters in the scene (using DINOv3)
2) Train a Qwen-Image-Edit LoRA per mask color, so each color-coded mask maps to the right character in the correct location
3) Use WAN Image2Video to turn the still frame into a video
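Here’s a minimal sketch of the shape of the pipeline, assuming diffusers-style wrappers. The class names, repo ids, LoRA path, and placeholder masks below are assumptions for illustration, not the exact code we’ll run on stream:
```python
import torch
from PIL import Image
from diffusers import QwenImageEditPipeline, WanImageToVideoPipeline  # assumed class names
from diffusers.utils import export_to_video

SIZE = (1280, 720)
PALETTE = [(255, 0, 0), (0, 0, 255)]  # one color per character

# 1) Placeholder for the DINOv3 segmentation step: one binary ("L") mask
#    per character. In the real pipeline these come from the segmenter.
character_masks = [Image.new("L", SIZE, 0) for _ in PALETTE]

def color_mask(masks, size):
    """Paint each character's binary mask onto one canvas in its assigned color."""
    canvas = Image.new("RGB", size, (0, 0, 0))
    for mask, color in zip(masks, PALETTE):
        canvas.paste(Image.new("RGB", size, color), mask=mask)
    return canvas

# 2) Mask -> still frame via Qwen-Image-Edit plus a per-color LoRA.
edit = QwenImageEditPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit", torch_dtype=torch.bfloat16
).to("cuda")
edit.load_lora_weights("path/to/red-character-lora")  # hypothetical LoRA path

frame = edit(
    image=color_mask(character_masks, SIZE),
    prompt="Put the red character in the red region, facing the blue character",
).images[0]

# 3) Still frame -> video via WAN Image2Video.
i2v = WanImageToVideoPipeline.from_pretrained(
    "Wan-AI/Wan2.2-I2V-A14B-Diffusers",  # assumed repo id
    torch_dtype=torch.bfloat16,
).to("cuda")
video = i2v(image=frame, prompt="A talk-show interview, two people chatting").frames[0]
export_to_video(video, "interview.mp4", fps=16)
```
The per-color LoRAs are the interesting bit: each one should learn to replace its flat color region with a specific character, so the mask doubles as both a layout spec and a character selector.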
Join us if you want to see if we can get consistent generations of Conan O’Brien interviewing Will Smith :)
Have you been fine-tuning video/image models recently? If so, join! We’d love to see your work.
https://luma.com/fine-tuning-friday-8