Advancing General Multimodal Capability of Vision-language Models with Pyramid-descent Visual Position Encoding

19 January 2025
Zhanpeng Chen
Mingxiao Li
Ziyang Chen
Nan Du
Xiaolong Li
Yuexian Zou
Abstract

Vision-language Models (VLMs) have shown remarkable capabilities in advancing general artificial intelligence, yet irrational encoding of visual positions continues to inhibit the models' comprehensive perception across different levels of granularity. In this work, we propose Pyramid-descent Visual Position Encoding (PyPE), a novel approach designed to enhance the perception of visual tokens within VLMs. By assigning visual position indexes from the periphery to the center and expanding the central receptive field incrementally, PyPE addresses the limitations of traditional raster-scan methods and mitigates the long-term decay effects induced by Rotary Position Embedding (RoPE). Our method reduces the relative distance between interrelated visual elements and instruction tokens, promoting a more rational allocation of attention weights, enabling multi-granularity perception of visual elements, and countering the over-reliance on anchor tokens. Extensive experimental evaluations demonstrate that PyPE consistently improves the general capabilities of VLMs across various sizes. Code is available at this https URL.
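The periphery-to-center assignment described in the abstract can be illustrated with a short sketch. The following Python/PyTorch snippet is a minimal, hypothetical illustration of that general idea (all tokens on the same concentric ring of the visual token grid share one position index, outermost ring first), not the paper's exact PyPE formulation; the incremental expansion of the central receptive field and the integration with RoPE are omitted.

import torch

def periphery_to_center_position_ids(h: int, w: int) -> torch.Tensor:
    """Assign one position index per concentric ring, outermost ring first.

    Central tokens receive the largest indices, so they sit closest (in RoPE
    distance) to the instruction tokens that follow the image. This is a
    sketch of the abstract's idea, not the paper's exact PyPE scheme.
    """
    rows = torch.arange(h).unsqueeze(1).expand(h, w)
    cols = torch.arange(w).unsqueeze(0).expand(h, w)
    # Ring index: 0 on the outermost border, increasing toward the center.
    ring = torch.minimum(torch.minimum(rows, cols),
                         torch.minimum(h - 1 - rows, w - 1 - cols))
    return ring.flatten()  # shape (h*w,), values in [0, ceil(min(h, w)/2) - 1]

# Example: a 6x6 visual token grid uses only 3 distinct position indices,
# so the image spans far fewer RoPE steps than a raster scan would.
print(periphery_to_center_position_ids(6, 6).reshape(6, 6))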

@article{chen2025_2501.10967,
  title={Advancing General Multimodal Capability of Vision-language Models with Pyramid-descent Visual Position Encoding},
  author={Zhanpeng Chen and Mingxiao Li and Ziyang Chen and Nan Du and Xiaolong Li and Yuexian Zou},
  journal={arXiv preprint arXiv:2501.10967},
  year={2025}
}