UFM: Unified Feature Matching Pre-training with Multi-Modal Image Assistants

26 March 2025
Yide Di
Yun Liao
Hao Zhou
Kaijun Zhu
Qing Duan
Junhui Liu
Mingyu Lu
Abstract

Image feature matching, a foundational task in computer vision, remains challenging for multimodal image applications and often requires intricate training on specific datasets. In this paper, we introduce a Unified Feature Matching pre-trained model (UFM) designed to address feature matching challenges across a wide spectrum of image modalities. We present Multimodal Image Assistant (MIA) transformers, finely tunable structures adept at handling diverse feature matching problems. UFM handles both feature matching within a single modality and matching across different modalities. Additionally, we propose a data augmentation algorithm and a staged pre-training strategy to effectively tackle challenges arising from sparse data in specific modalities and from imbalanced modality datasets. Experimental results demonstrate that UFM excels in generalization and performance across various feature matching tasks. The code will be released at: this https URL.
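The abstract does not specify the MIA architecture, but the general idea it names, a shared pre-trained backbone with small, finely tunable per-modality components serving a unified matcher, can be illustrated with a minimal PyTorch sketch. Everything below (the names MIAAdapter and UnifiedMatcher, the modality keys, and the cosine-similarity scoring) is a hypothetical illustration under those assumptions, not the paper's actual design.

# Illustrative sketch only: a shared transformer encoder with lightweight
# per-modality "assistant" adapters. All class names and modality keys are
# hypothetical, not the paper's API.
import torch
import torch.nn as nn


class MIAAdapter(nn.Module):
    """Hypothetical residual bottleneck adapter, tuned per image modality."""

    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, hidden)
        self.up = nn.Linear(hidden, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The shared backbone can stay fixed; only these small
        # adapter weights specialize to a given modality.
        return x + self.up(torch.relu(self.down(x)))


class UnifiedMatcher(nn.Module):
    """Shared encoder plus per-modality adapters; returns a similarity
    matrix between two sets of local descriptors."""

    def __init__(self, dim: int = 256, modalities=("rgb", "ir", "sar")):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.adapters = nn.ModuleDict({m: MIAAdapter(dim)
                                       for m in modalities})

    def encode(self, feats: torch.Tensor, modality: str) -> torch.Tensor:
        return self.adapters[modality](self.encoder(feats))

    def forward(self, feats_a, mod_a, feats_b, mod_b):
        a = self.encode(feats_a, mod_a)          # (B, Na, D)
        b = self.encode(feats_b, mod_b)          # (B, Nb, D)
        # Cosine-similarity matching scores between descriptor sets.
        a = nn.functional.normalize(a, dim=-1)
        b = nn.functional.normalize(b, dim=-1)
        return a @ b.transpose(1, 2)             # (B, Na, Nb)


# Example: match 128 visible-light descriptors against 96 infrared ones.
matcher = UnifiedMatcher()
scores = matcher(torch.randn(1, 128, 256), "rgb",
                 torch.randn(1, 96, 256), "ir")
print(scores.shape)  # torch.Size([1, 128, 96])

Because the same encoder serves every modality pair, such a design supports both same-modality and cross-modality matching with one set of backbone weights, which is the property the abstract attributes to UFM.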

@article{di2025_2503.21820,
  title={UFM: Unified Feature Matching Pre-training with Multi-Modal Image Assistants},
  author={Yide Di and Yun Liao and Hao Zhou and Kaijun Zhu and Qing Duan and Junhui Liu and Mingyu Lu},
  journal={arXiv preprint arXiv:2503.21820},
  year={2025}
}