Multimodal graph representation learning for website generation based on visual sketch

25 April 2025
Tung D. Vu
Chung Hoang
Truong-Son Hy
Abstract

The Design2Code problem, which involves converting digital designs into functional source code, is a significant challenge in software development due to its complexity and time-consuming nature. Traditional approaches often struggle to accurately interpret the intricate visual details and structural relationships inherent in webpage designs, limiting automation and efficiency. In this paper, we propose a novel method that leverages multimodal graph representation learning to address these challenges. By integrating both visual and structural information from design sketches, our approach enhances the accuracy and efficiency of code generation, particularly in producing semantically correct and structurally sound HTML code. Extensive evaluation demonstrates significant improvements of multimodal graph learning over existing techniques in both accuracy and efficiency, highlighting the potential of our method to revolutionize design-to-code automation. Code available at this https URL
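
The abstract does not specify the architecture, but the core idea it describes, fusing per-region visual features of a sketch with a graph over the layout structure and predicting HTML elements per node, can be sketched minimally. The PyTorch snippet below is an illustrative assumption under that reading, not the authors' implementation; every name (Sketch2HTML, FusionGNNLayer), dimension, and the particular message-passing scheme are hypothetical.

# A minimal, illustrative sketch: fuse per-region visual features of the
# sketch with a graph over the layout structure, then predict an HTML tag
# for each node. All names, dimensions, and the message-passing scheme
# are assumptions for illustration, not the authors' code.
import torch
import torch.nn as nn

class FusionGNNLayer(nn.Module):
    """One round of message passing over the layout graph."""
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(dim, dim)
        self.upd = nn.GRUCell(dim, dim)

    def forward(self, h, adj):
        # adj: (N, N) dense adjacency of the layout graph, assumed to be
        # derived from containment/adjacency of detected sketch regions.
        m = adj @ self.msg(h)   # aggregate messages from neighbors
        return self.upd(m, h)   # gated update of node states

class Sketch2HTML(nn.Module):
    def __init__(self, vis_dim=512, geo_dim=4, dim=256, n_tags=32, n_layers=3):
        super().__init__()
        # Fuse visual features (e.g. CNN embeddings of region crops)
        # with geometric features (normalized bounding boxes).
        self.fuse = nn.Linear(vis_dim + geo_dim, dim)
        self.gnn = nn.ModuleList([FusionGNNLayer(dim) for _ in range(n_layers)])
        self.tag_head = nn.Linear(dim, n_tags)  # per-node HTML tag logits

    def forward(self, vis_feats, boxes, adj):
        h = torch.relu(self.fuse(torch.cat([vis_feats, boxes], dim=-1)))
        for layer in self.gnn:
            h = layer(h, adj)
        return self.tag_head(h)

# Toy usage: 5 detected regions with random features and a trivial graph.
N = 5
model = Sketch2HTML()
logits = model(torch.randn(N, 512), torch.rand(N, 4), torch.eye(N))
print(logits.shape)  # torch.Size([5, 32])

Turning per-node tag predictions into nested HTML would additionally require decoding the tree structure (e.g. from containment edges), which this sketch omits.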

@article{vu2025_2504.18729,
  title={Multimodal graph representation learning for website generation based on visual sketch},
  author={Tung D. Vu and Chung Hoang and Truong-Son Hy},
  journal={arXiv preprint arXiv:2504.18729},
  year={2025}
}