SEED-X: Multimodal Models with Unified Multi-granularity Comprehension and Generation

22 April 2024
Yuying Ge, Sijie Zhao, Jinguo Zhu, Yixiao Ge, Kun Yi, Lin Song, Chen Li, Xiaohan Ding, Ying Shan
Topics: VLM
arXiv: https://arxiv.org/abs/2404.14396 (PDF and HTML available)
Abstract

The rapid evolution of multimodal foundation models has demonstrated significant progress in vision-language understanding and generation, e.g., our previous work SEED-LLaMA. However, a gap remains between their capabilities and real-world applicability, primarily due to models' limited capacity to effectively respond to various user instructions and interact with diverse visual data. In this work, we focus on bridging this gap by integrating two enhanced features: (1) comprehending images of arbitrary sizes and aspect ratios, and (2) enabling multi-granularity image generation. We present a unified and versatile foundation model, SEED-X, which is able to model multi-granularity visual semantics for comprehension and generation tasks. Beyond competitive results on public benchmarks, SEED-X demonstrates its effectiveness in handling real-world applications across various domains after instruction tuning. We hope that our work will inspire future research into what versatile multimodal foundation models can achieve in real-world applications. The models, code, and datasets are released at this https URL.
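
A common way to let a vision-language model comprehend images of arbitrary sizes and aspect ratios is to tile the input into fixed-resolution crops that match the vision encoder, alongside a downscaled global view. The Python sketch below illustrates that generic preprocessing pattern only; it is not claimed to be SEED-X's actual pipeline, and TILE_SIZE, MAX_TILES, and the function name are assumptions made for illustration.

# Minimal sketch of arbitrary-resolution image preprocessing via tiling.
# Illustrates the generic "grid of fixed-size crops + global thumbnail"
# pattern used by several VLMs; NOT SEED-X's exact pipeline.
# TILE_SIZE and MAX_TILES are assumed values for illustration.
from PIL import Image

TILE_SIZE = 448   # assumed vision-encoder input resolution
MAX_TILES = 9     # assumed budget of local crops per image

def preprocess(image: Image.Image):
    """Return a global thumbnail plus fixed-size local tiles."""
    w, h = image.size
    # Global view: whole image resized to the encoder resolution,
    # preserving coarse layout at the cost of fine detail.
    global_view = image.resize((TILE_SIZE, TILE_SIZE))

    # Local views: pick a grid close to the image's aspect ratio,
    # resize to a multiple of TILE_SIZE, then cut non-overlapping crops.
    cols = max(1, round(w / TILE_SIZE))
    rows = max(1, round(h / TILE_SIZE))
    while cols * rows > MAX_TILES:  # respect the tile budget
        if cols >= rows:
            cols -= 1
        else:
            rows -= 1
    resized = image.resize((cols * TILE_SIZE, rows * TILE_SIZE))
    tiles = [
        resized.crop((c * TILE_SIZE, r * TILE_SIZE,
                      (c + 1) * TILE_SIZE, (r + 1) * TILE_SIZE))
        for r in range(rows) for c in range(cols)
    ]
    return global_view, tiles

In such a scheme, the global view preserves overall layout while the tiles retain local detail, so a downstream model can attend over both levels at once, which is the spirit of the multi-granularity comprehension described in the abstract.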

@article{ge2025_2404.14396,
  title={SEED-X: Multimodal Models with Unified Multi-granularity Comprehension and Generation},
  author={Yuying Ge and Sijie Zhao and Jinguo Zhu and Yixiao Ge and Kun Yi and Lin Song and Chen Li and Xiaohan Ding and Ying Shan},
  journal={arXiv preprint arXiv:2404.14396},
  year={2025}
}