ViSketch-GPT: Collaborative Multi-Scale Feature Extraction for Sketch Recognition and Generation

28 March 2025
Giulio Federico
Giuseppe Amato
Fabio Carrara
Claudio Gennaro
Marco Di Benedetto
Abstract

Understanding the nature of human sketches is challenging because of the wide variation in how they are created. Recognizing complex structural patterns improves both the accuracy of sketch recognition and the fidelity of generated sketches. In this work, we introduce ViSketch-GPT, a novel algorithm that addresses these challenges through a multi-scale context extraction approach. The model captures intricate details at multiple scales and combines them through an ensemble-like mechanism, in which the extracted features work collaboratively to enhance the recognition and generation of key details crucial for classification and generation tasks.

The effectiveness of ViSketch-GPT is validated through extensive experiments on the QuickDraw dataset. Our model establishes a new benchmark, significantly outperforming existing methods in both classification and generation tasks, with substantial improvements in accuracy and in the fidelity of generated sketches.

The proposed algorithm offers a robust framework for understanding complex structures: it extracts features that collaborate to recognize intricate details, improving the understanding of structures such as sketches and making it a versatile tool for a variety of applications in computer vision and machine learning.
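To make the general idea concrete, here is a minimal, hypothetical PyTorch sketch of multi-scale feature extraction with an ensemble-like fusion for sketch classification. It is not the authors' ViSketch-GPT implementation; the module name, the choice of scales, the feature dimension, and the learned-weight fusion are all illustrative assumptions (the 345-class default only reflects the QuickDraw category count).

# Hedged sketch (not the authors' implementation): a toy multi-scale encoder
# that pools a rasterized sketch at several scales, extracts per-scale
# features, and fuses them with learned ensemble weights before classifying.
import torch
import torch.nn as nn


class MultiScaleSketchEncoder(nn.Module):
    """Hypothetical multi-scale encoder with ensemble-like feature fusion."""

    def __init__(self, num_classes: int = 345, scales=(1, 2, 4), dim: int = 128):
        super().__init__()
        # One small convolutional branch per scale; coarser scales see a
        # downsampled copy of the input sketch.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.AvgPool2d(s) if s > 1 else nn.Identity(),
                nn.Conv2d(1, dim, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),  # global pooling -> (B, dim, 1, 1)
                nn.Flatten(),             # -> (B, dim)
            )
            for s in scales
        ])
        # Ensemble-like fusion: learnable weights over the per-scale features.
        self.scale_logits = nn.Parameter(torch.zeros(len(scales)))
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, 1, H, W) rasterized sketch
        feats = torch.stack([branch(x) for branch in self.branches], dim=1)  # (B, S, dim)
        weights = torch.softmax(self.scale_logits, dim=0)                    # (S,)
        fused = (weights.view(1, -1, 1) * feats).sum(dim=1)                  # (B, dim)
        return self.classifier(fused)


if __name__ == "__main__":
    model = MultiScaleSketchEncoder()
    logits = model(torch.randn(2, 1, 64, 64))
    print(logits.shape)  # torch.Size([2, 345])

The point of the example is only the collaboration pattern: each scale contributes its own feature vector, and a learned softmax weighting combines them, rather than relying on a single-resolution representation.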

@article{federico2025_2503.22374,
  title={ViSketch-GPT: Collaborative Multi-Scale Feature Extraction for Sketch Recognition and Generation},
  author={Giulio Federico and Giuseppe Amato and Fabio Carrara and Claudio Gennaro and Marco Di Benedetto},
  journal={arXiv preprint arXiv:2503.22374},
  year={2025}
}