
UFO: A Unified Approach to Fine-grained Visual Perception via Open-ended Language Interface

Abstract

Generalist models have achieved remarkable success in both language and vision-language tasks, showcasing the potential of unified modeling. However, effectively integrating fine-grained perception tasks like detection and segmentation into these models remains a significant challenge. This is primarily because these tasks often rely heavily on task-specific designs and architectures that can complicate the modeling process. To address this challenge, we present UFO, a framework that Unifies Fine-grained visual perception tasks through an Open-ended language interface. By transforming all perception targets into the language space, UFO unifies object-level detection, pixel-level segmentation, and image-level vision-language tasks into a single model. Additionally, we introduce a novel embedding retrieval approach that relies solely on the language interface to support segmentation tasks. Our framework bridges the gap between fine-grained perception and vision-language tasks, significantly simplifying architectural design and training strategies while achieving comparable or superior performance to methods with intricate task-specific designs. After multi-task training on five standard visual perception datasets, UFO outperforms the previous state-of-the-art generalist models by 12.3 mAP on COCO instance segmentation and 3.3 mIoU on ADE20K semantic segmentation. Furthermore, our method seamlessly integrates with existing MLLMs, effectively combining fine-grained perception capabilities with their advanced language abilities, thereby enabling more challenging tasks such as reasoning segmentation. Code and models are available at this https URL.
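To make the embedding retrieval idea concrete, here is a minimal sketch of how a segmentation mask could be recovered purely through the language interface: the model emits a mask-related token, and that token's hidden state is matched against dense image features by dot-product similarity. The function name, tensor shapes, and thresholding below are illustrative assumptions for exposition, not the paper's actual implementation.

import torch

def retrieve_mask(token_embedding: torch.Tensor,
                  image_features: torch.Tensor,
                  threshold: float = 0.5) -> torch.Tensor:
    """Turn one retrieved text-side embedding into a binary mask (sketch).

    token_embedding: (C,)      hidden state of the mask-related token
    image_features:  (C, H, W) dense visual features from the vision encoder
    """
    # Similarity between the token embedding and every pixel feature.
    logits = torch.einsum("c,chw->hw", token_embedding, image_features)
    probs = logits.sigmoid()            # per-pixel foreground probability
    return (probs > threshold).float()  # binary segmentation mask

# Toy usage with random tensors (shapes chosen only for illustration).
emb = torch.randn(256)
feats = torch.randn(256, 64, 64)
mask = retrieve_mask(emb, feats)
print(mask.shape)  # torch.Size([64, 64])

Framing segmentation this way keeps the output space entirely within the language interface, which is what lets detection, segmentation, and vision-language tasks share one model without task-specific decoder heads.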

@article{tang2025_2503.01342,
  title={UFO: A Unified Approach to Fine-grained Visual Perception via Open-ended Language Interface},
  author={Hao Tang and Chenwei Xie and Haiyang Wang and Xiaoyi Bao and Tingyu Weng and Pandeng Li and Yun Zheng and Liwei Wang},
  journal={arXiv preprint arXiv:2503.01342},
  year={2025}
}