Towards Unified Keyframe Propagation Models

Publication
Patrick Esser¹, Peter Michael¹,², Soumyadip Sengupta²
Abstract
Many video editing tasks such as rotoscoping or object removal require the propagation of context across frames. While transformers and other attention-based approaches that aggregate features globally have demonstrated great success at propagating object masks from keyframes to the whole video, they struggle to propagate high-frequency details such as textures faithfully. We hypothesize that this is due to an inherent bias of global attention towards low-frequency features. To overcome this limitation, we present a two-stream approach, where high-frequency features interact locally and low-frequency features interact globally. The global interaction stream remains robust in difficult situations such as large camera motions, where explicit alignment fails. The local interaction stream propagates high-frequency details through deformable feature aggregation and, informed by the global interaction stream, learns to detect and correct errors of the deformation field. We evaluate our two-stream approach on inpainting tasks, where experiments show that it improves both the propagation of features within a single frame, as required for image inpainting, and their propagation from keyframes to target frames. Applied to video inpainting, our approach leads to 44% and 26% improvements in FID and LPIPS scores.
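The two-stream idea can be illustrated with a minimal sketch: a global stream attends from every target position to every keyframe position (low-frequency, alignment-free), while a local stream gathers high-frequency keyframe features along a per-pixel deformation field. All function names, shapes, and the nearest-neighbour gather below are illustrative stand-ins, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_stream(queries, keys, values):
    # Low-frequency stream: full attention of target positions over all
    # keyframe positions; robust when explicit alignment would fail.
    attn = softmax(queries @ keys.T / np.sqrt(queries.shape[-1]))
    return attn @ values

def local_stream(key_hf, offsets):
    # High-frequency stream: gather keyframe features along a per-pixel
    # deformation field (toy nearest-neighbour stand-in for deformable
    # feature aggregation).
    H, W, _ = key_hf.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    y = np.clip(ys + offsets[..., 0], 0, H - 1)
    x = np.clip(xs + offsets[..., 1], 0, W - 1)
    return key_hf[y, x]

rng = np.random.default_rng(0)
H, W, C = 8, 8, 16
target_lf = rng.standard_normal((H * W, C))    # low-freq target features
key_lf = rng.standard_normal((H * W, C))       # low-freq keyframe features
key_hf = rng.standard_normal((H, W, C))        # high-freq keyframe features
offsets = rng.integers(-2, 3, size=(H, W, 2))  # toy deformation field

g = global_stream(target_lf, key_lf, key_lf)   # (H*W, C) global context
l = local_stream(key_hf, offsets)              # (H, W, C) warped details
fused = g.reshape(H, W, C) + l                 # simple additive fusion
```

In the actual model the offsets would be predicted and corrected with guidance from the global stream; here they are random only to show the data flow of the two interacting streams.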
Guided Inpainting
Figure 1. Single-image inpainting approaches such as LaMa [27] (third col.) cannot propagate context from keyframes (second col.) to a target frame (first col.). By aggregating features globally across frames, transformer-based approaches (fourth col.) can propagate coarse context about a blue object that is visible in the keyframes but not in the target frame. However, they fail to propagate high-frequency details about object locations and textures, resulting in repetitive patterns and artifacts. By modeling both local and global interactions within and across frames, our approach (last col.) successfully propagates high-frequency context and accurately reconstructs the background.
Video Results