
Context-Aware Input Orchestration for Video Inpainting

25 November 2024
Hoyoung Kim
Azimbek Khudoyberdiev
Seonghwan Jeong
Jihoon Ryoo
Abstract

Traditional neural-network-driven inpainting methods struggle to deliver high-quality results within the processing-power and memory constraints of mobile devices. Our research introduces an approach that optimizes memory usage by altering the composition of the input data. Video inpainting typically relies on a predetermined set of input frames, such as neighboring and reference frames, often limited to five-frame sets. We examine how varying the proportion of these input frames affects the quality of the inpainted video. By dynamically adjusting the input frame composition based on optical flow and changes in the mask, we observe improved quality across diverse content, including scenes with rapid visual context changes.
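The adaptive composition described in the abstract could be sketched as a simple heuristic: fast motion and changing masks favor neighboring frames for temporal detail, while static scenes favor distant reference frames for global context. The function, its parameters, and the normalization thresholds below are illustrative assumptions, not the paper's actual method:

```python
def choose_frame_composition(flow_mag, mask_change, total_frames=5):
    """Hypothetical heuristic for splitting a fixed input-frame budget
    between neighboring frames and reference frames.

    flow_mag: mean optical-flow magnitude (pixels/frame, assumed scale).
    mask_change: fraction of mask pixels that changed between frames (0..1).
    """
    # Crude "scene dynamics" score in [0, 1], equally weighting motion
    # and mask change; the 10.0 normalization constant is an assumption.
    dynamics = 0.5 * min(flow_mag / 10.0, 1.0) + 0.5 * min(mask_change, 1.0)
    # More dynamics -> more neighboring frames; always keep at least one
    # of each kind so both local detail and global context are present.
    n_neighbors = max(1, min(total_frames - 1,
                             round(1 + dynamics * (total_frames - 2))))
    n_references = total_frames - n_neighbors
    return n_neighbors, n_references
```

For a static scene (`flow_mag=0.0, mask_change=0.0`) this yields one neighboring frame and four reference frames; for a highly dynamic one (`flow_mag=20.0, mask_change=1.0`) the split inverts to four neighbors and one reference, keeping the total budget fixed at five frames.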

@article{kim2025_2411.16926,
  title={Context-Aware Input Orchestration for Video Inpainting},
  author={Hoyoung Kim and Azimbek Khudoyberdiev and Seonghwan Jeong and Jihoon Ryoo},
  journal={arXiv preprint arXiv:2411.16926},
  year={2025}
}