MOWA: Multiple-in-One Image Warping Model

16 April 2024
Kang Liao, Zongsheng Yue, Zhonghua Wu, Chen Change Loy
Abstract

While recent image warping approaches have achieved remarkable success on existing benchmarks, they still require training separate models for each specific task and cannot generalize well to different camera models or customized manipulations. To address diverse types of warping in practice, we propose a Multiple-in-One image WArping model (named MOWA) in this work. Specifically, we mitigate the difficulty of multi-task learning by disentangling the motion estimation at both the region level and the pixel level. To further enable dynamic task-aware image warping, we introduce a lightweight point-based classifier that predicts the task type, serving as prompts to modulate the feature maps for more accurate estimation. To our knowledge, this is the first work that solves multiple practical warping tasks in a single model. Extensive experiments demonstrate that our MOWA, which is trained on six tasks for multiple-in-one image warping, outperforms state-of-the-art task-specific models across most tasks. Moreover, MOWA also exhibits promising potential to generalize to unseen scenes, as evidenced by cross-domain and zero-shot evaluations. The code and more visual results can be found on the project page: this https URL.
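The task-aware modulation described in the abstract (a lightweight point-based classifier whose prediction serves as a prompt to modulate feature maps) can be illustrated roughly as follows. This is a minimal PyTorch-style sketch under our own assumptions, not the paper's implementation; all names (TaskPromptModulation, to_scale_shift, the FiLM-style scale/shift design, tensor shapes) are hypothetical.

import torch
import torch.nn as nn

class TaskPromptModulation(nn.Module):
    """Illustrative module: predict a task type from control points and use it
    as a prompt to modulate a feature map. Not the official MOWA code."""

    def __init__(self, feat_channels: int, num_tasks: int = 6, prompt_dim: int = 64):
        super().__init__()
        # Lightweight point-based classifier: maps (B, N, 2) control points
        # to per-point features, then pools them into task logits.
        self.task_classifier = nn.Sequential(
            nn.Linear(2, prompt_dim),
            nn.ReLU(inplace=True),
            nn.Linear(prompt_dim, num_tasks),
        )
        # Learnable prompt embedding per task type.
        self.task_prompts = nn.Embedding(num_tasks, prompt_dim)
        # Project the prompt to a per-channel scale and shift (FiLM-style).
        self.to_scale_shift = nn.Linear(prompt_dim, 2 * feat_channels)

    def forward(self, points: torch.Tensor, feats: torch.Tensor):
        # points: (B, N, 2) region-level control points; feats: (B, C, H, W).
        logits = self.task_classifier(points).mean(dim=1)   # (B, num_tasks)
        probs = logits.softmax(dim=-1)                       # soft task weights
        prompt = probs @ self.task_prompts.weight            # (B, prompt_dim)
        scale, shift = self.to_scale_shift(prompt).chunk(2, dim=-1)
        scale = scale[:, :, None, None]
        shift = shift[:, :, None, None]
        return feats * (1 + scale) + shift, logits

# Usage: modulate an encoder feature map with the predicted task prompt.
mod = TaskPromptModulation(feat_channels=128, num_tasks=6)
pts = torch.randn(4, 32, 2)          # predicted control points
fmap = torch.randn(4, 128, 64, 64)   # intermediate feature map
out, task_logits = mod(pts, fmap)

The soft weighting over task prompts (rather than a hard argmax) is one plausible way to keep the modulation differentiable end to end; the paper's actual mechanism may differ.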

@article{liao2025_2404.10716,
  title={MOWA: Multiple-in-One Image Warping Model},
  author={Kang Liao and Zongsheng Yue and Zhonghua Wu and Chen Change Loy},
  journal={arXiv preprint arXiv:2404.10716},
  year={2025}
}