Kiwi-Edit: Versatile Video Editing via Instruction and Reference Guidance
Kiwi-Edit is a versatile video editing framework built on an MLLM encoder and a video Diffusion Transformer (DiT). It supports both editing driven by natural-language instructions alone and editing guided jointly by a reference image and an instruction.
[Paper] [Project Page] [GitHub]
Introduction
Instruction-based video editing has progressed rapidly, yet current methods often struggle with precise visual control. Kiwi-Edit introduces a unified editing architecture that combines learnable queries with latent visual features to provide reference semantic guidance. Leveraging a scalable data generation pipeline and the RefVIE dataset, the model achieves significant gains in instruction following and reference fidelity, establishing a new state of the art in controllable video editing.
Quick Start
Installation (Diffusers Environment)
# Create conda environment
conda create -n diffusers python=3.10 -y
conda activate diffusers
# Install PyTorch 2.7 built for CUDA 12.8
pip install torch==2.7.0 torchvision==0.22.0 torchaudio==2.7.0 --index-url https://download.pytorch.org/whl/cu128
# Install Diffusers and the remaining inference dependencies
pip install diffusers decord einops accelerate transformers==4.57.0 opencv-python av
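After installation, a quick sanity check confirms that PyTorch sees the GPU and the key libraries import cleanly. This check is a minimal sketch, not part of the official repository:

```python
# check_env.py — quick sanity check for the Diffusers environment
# (not part of the official Kiwi-Edit repository)
import torch
import diffusers
import transformers

print(f"torch {torch.__version__}, CUDA available: {torch.cuda.is_available()}")
print(f"diffusers {diffusers.__version__}, transformers {transformers.__version__}")
```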
Inference Sample
You can run a quick test on a demo video using the script provided in the official repository:
python diffusers_demo.py \
    --video_path ./demo_data/video/source/0005e4ad9f49814db1d3f2296b911abf.mp4 \
    --prompt "Remove the monkey." \
    --save_path output.mp4 \
    --model_path linyq/kiwi-edit-5b-instruct-only-diffusers
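If you prefer calling the model directly from Python, the sketch below follows the generic Diffusers pattern. The concrete pipeline class, the argument names (`video`, `prompt`), and the output attribute (`frames`) are assumptions based on common Diffusers video-pipeline conventions; consult `diffusers_demo.py` in the official repository for the exact interface:

```python
# Minimal sketch of direct pipeline usage; argument names and the output
# attribute below are assumptions — check diffusers_demo.py for the real API.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video, load_video

# from_pretrained resolves the concrete pipeline class from the repo's
# model_index.json; custom pipelines may require trust_remote_code=True.
pipe = DiffusionPipeline.from_pretrained(
    "linyq/kiwi-edit-5b-instruct-only-diffusers",
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

# Load the source clip as a list of PIL frames
frames = load_video("./demo_data/video/source/0005e4ad9f49814db1d3f2296b911abf.mp4")

# Edit the clip according to the instruction (keyword names are assumptions)
result = pipe(video=frames, prompt="Remove the monkey.")
export_to_video(result.frames[0], "output.mp4", fps=16)
```

Using `DiffusionPipeline.from_pretrained` lets Diffusers pick the pipeline class from the repository metadata, so the sketch does not need to hard-code a class name.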
Citation
If you use Kiwi-Edit in your research, please cite the following paper:
@misc{kiwiedit,
      title={Kiwi-Edit: Versatile Video Editing via Instruction and Reference Guidance},
      author={Yiqi Lin and Guoqiang Liang and Ziyun Zeng and Zechen Bai and Yanzhe Chen and Mike Zheng Shou},
      year={2026},
      eprint={2603.02175},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2603.02175},
}