DVHGNN: Multi-Scale Dilated Vision HGNN for Efficient Vision Recognition

19 March 2025
Caoshuo Li, Tanzhe Li, Xiaobin Hu, Donghao Luo, Taisong Jin
Abstract

Recently, the Vision Graph Neural Network (ViG) has gained considerable attention in computer vision. Despite its groundbreaking innovation, ViG faces two key issues: the quadratic computational complexity caused by its K-Nearest Neighbor (KNN) graph construction, and the limited expressiveness of the pairwise relations in ordinary graphs. To address these challenges, we propose a novel vision architecture, termed Dilated Vision HyperGraph Neural Network (DVHGNN), which leverages multi-scale hypergraphs to efficiently capture high-order correlations among objects. Specifically, the proposed method tailors Clustering and Dilated HyperGraph Construction (DHGC) to adaptively capture multi-scale dependencies among data samples. Furthermore, a dynamic hypergraph convolution mechanism is proposed to facilitate adaptive feature exchange and fusion at the hypergraph level. Extensive qualitative and quantitative evaluations on benchmark image datasets demonstrate that DVHGNN significantly outperforms state-of-the-art vision backbones. For instance, our DVHGNN-S achieves an impressive top-1 accuracy of 83.1% on ImageNet-1K, surpassing ViG-S by +1.0% and ViHGNN-S by +0.6%.
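
The abstract outlines two ingredients: hyperedge construction by clustering (with dilation for multi-scale coverage) and a convolution that exchanges features at the hypergraph level. The sketch below is not the authors' implementation; it is a minimal PyTorch illustration, assuming hard k-means assignments as hyperedges, two cluster granularities as a rough stand-in for multi-scale dilation, and a simple vertex-to-hyperedge-to-vertex message-passing step. The function names (cluster_hyperedges, hypergraph_conv) are hypothetical.

# Hypothetical sketch, not the authors' code: hyperedges via k-means over
# token features, followed by a simple two-stage hypergraph convolution.
import torch

def cluster_hyperedges(x, num_edges, iters=10):
    """Assign each of N token features (N, C) to one of `num_edges` clusters
    (hyperedges) with plain k-means; returns an (N, num_edges) incidence matrix."""
    n, c = x.shape
    centers = x[torch.randperm(n)[:num_edges]].clone()
    for _ in range(iters):
        dist = torch.cdist(x, centers)            # (N, E) token-to-center distances
        assign = dist.argmin(dim=1)               # nearest hyperedge per token
        for e in range(num_edges):
            mask = assign == e
            if mask.any():
                centers[e] = x[mask].mean(dim=0)
    h = torch.zeros(n, num_edges)
    h[torch.arange(n), assign] = 1.0              # hard incidence matrix H
    return h

def hypergraph_conv(x, h, w):
    """Aggregate vertex features into hyperedge features, then scatter them
    back to the vertices, normalizing by degree, and apply a projection."""
    edge_deg = h.sum(dim=0).clamp(min=1.0)        # vertices per hyperedge
    edge_feat = (h.t() @ x) / edge_deg[:, None]   # hyperedge features (E, C)
    vert_deg = h.sum(dim=1).clamp(min=1.0)        # hyperedges per vertex
    out = (h @ edge_feat) / vert_deg[:, None]     # back to vertices (N, C)
    return out @ w                                # learnable projection

# Toy usage: 196 tokens (14x14 patches) with 64-dim features; a fine and a
# coarse hyperedge set serve as an illustrative "multi-scale" pair.
tokens = torch.randn(196, 64)
w = torch.randn(64, 64) * 0.02
h_fine = cluster_hyperedges(tokens, num_edges=49)
h_coarse = cluster_hyperedges(tokens, num_edges=8)
out = hypergraph_conv(tokens, h_fine, w) + hypergraph_conv(tokens, h_coarse, w)
print(out.shape)  # torch.Size([196, 64])

One sentence on the efficiency angle: assigning N tokens to E cluster centers costs O(N·E) per iteration, rather than the O(N^2) comparisons of dense KNN graph construction, which is the complexity concern the abstract raises about ViG.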

View on arXiv: https://arxiv.org/abs/2503.14867
@article{li2025_2503.14867,
  title={DVHGNN: Multi-Scale Dilated Vision HGNN for Efficient Vision Recognition},
  author={Caoshuo Li and Tanzhe Li and Xiaobin Hu and Donghao Luo and Taisong Jin},
  journal={arXiv preprint arXiv:2503.14867},
  year={2025}
}