Multi-modal Knowledge Graph Generation with Semantics-enriched Prompts

18 April 2025
Yajing Xu
Zhiqiang Liu
Jiaoyan Chen
Mingchen Tu
Zhuo Chen
Jeff Z. Pan
Yichi Zhang
Yushan Zhu
Wen Zhang
Huajun Chen
Abstract

Multi-modal Knowledge Graphs (MMKGs) have been widely applied across various domains for knowledge representation. However, existing MMKGs remain far fewer than needed, and their construction faces numerous challenges, particularly in selecting high-quality, contextually relevant images for knowledge graph enrichment. To address these challenges, we present a framework for constructing MMKGs from conventional KGs. Furthermore, to generate higher-quality images that are more relevant to the context of the given knowledge graph, we design a neighbor selection method called Visualizable Structural Neighbor Selection (VSNS). This method consists of two modules: Visualizable Neighbor Selection (VNS) and Structural Neighbor Selection (SNS). The VNS module filters out relations that are difficult to visualize, while the SNS module selects the neighbors that most effectively capture the structural characteristics of the entity. To assess the quality of the generated images, we performed qualitative and quantitative evaluations on two datasets, MKG-Y and DB15K. The experimental results indicate that selecting neighbors with VSNS yields higher-quality images that are more relevant to the knowledge graph.
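The two-stage selection described in the abstract can be pictured as a small pipeline over KG triples. The Python sketch below is only an illustration of that idea under stated assumptions: the relation blocklist, the frequency-based notion of "structural" importance, and the prompt template are hypothetical stand-ins, not the authors' implementation.

# Hypothetical sketch of the VSNS pipeline described in the abstract.
# Relation names, scoring heuristics, and the prompt format are
# illustrative assumptions, not the paper's actual method.
from collections import Counter

# A triple is (head entity, relation, tail entity) from a conventional KG.
Triple = tuple[str, str, str]

# VNS: relations assumed hard to visualize (abstract attributes, dates, IDs)
# are filtered out so prompts mention only depictable facts.
NON_VISUALIZABLE = {"birthDate", "population", "isbn", "areaCode"}

def visualizable_neighbor_selection(triples: list[Triple]) -> list[Triple]:
    """Keep only triples whose relation is plausibly depictable in an image."""
    return [t for t in triples if t[1] not in NON_VISUALIZABLE]

def structural_neighbor_selection(entity: str, triples: list[Triple],
                                  k: int = 3) -> list[Triple]:
    """Pick up to k neighbors that best reflect the entity's structure.
    'Structural' is approximated here by relation frequency around the
    entity; the paper's actual criterion may differ."""
    local = [t for t in triples if t[0] == entity]
    freq = Counter(r for _, r, _ in local)
    local.sort(key=lambda t: freq[t[1]], reverse=True)
    return local[:k]

def build_prompt(entity: str, triples: list[Triple]) -> str:
    """Compose a semantics-enriched prompt for a text-to-image model."""
    selected = structural_neighbor_selection(
        entity, visualizable_neighbor_selection(triples))
    facts = "; ".join(f"{h} {r} {t}" for h, r, t in selected)
    return f"A photo of {entity}. Context: {facts}."

if __name__ == "__main__":
    kg = [
        ("Eiffel_Tower", "locatedIn", "Paris"),
        ("Eiffel_Tower", "architecturalStyle", "Wrought-iron lattice"),
        ("Eiffel_Tower", "birthDate", "1889"),  # filtered out by VNS
    ]
    print(build_prompt("Eiffel_Tower", kg))

In this reading, VNS acts as a coarse filter on relation types and SNS as a ranking step over the surviving neighbors; the scoring the authors actually use may well differ.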

@article{xu2025_2504.13631,
  title={Multi-modal Knowledge Graph Generation with Semantics-enriched Prompts},
  author={Yajing Xu and Zhiqiang Liu and Jiaoyan Chen and Mingchen Tu and Zhuo Chen and Jeff Z. Pan and Yichi Zhang and Yushan Zhu and Wen Zhang and Huajun Chen},
  journal={arXiv preprint arXiv:2504.13631},
  year={2025}
}