ResearchTrend.AI
Enhanced Multi-Class Classification of Gastrointestinal Endoscopic Images with Interpretable Deep Learning Model

2 March 2025
Astitva Kamble
Vani Bandodkar
Saakshi Dharmadhikary
Veena Anand
Pradyut Kumar Sanki
Mei X. Wu
Biswabandhu Jana
Abstract

Endoscopy serves as an essential procedure for evaluating the gastrointestinal (GI) tract and plays a pivotal role in identifying GI-related disorders. Recent advances in deep learning have demonstrated substantial progress in detecting abnormalities through intricate models and data augmentation. This research introduces a novel approach to enhance classification accuracy using 8,000 labeled endoscopic images from the Kvasir dataset, categorized into eight distinct classes. Leveraging EfficientNetB3 as the backbone, the proposed architecture eliminates reliance on data augmentation while preserving moderate model complexity. The model achieves a test accuracy of 94.25%, alongside precision and recall of 94.29% and 94.24%, respectively. Furthermore, Local Interpretable Model-agnostic Explanations (LIME) saliency maps are employed to enhance interpretability by identifying the critical regions in the images that influenced model predictions. Overall, this work highlights the importance of AI in advancing medical imaging by combining high classification accuracy with interpretability.

@article{kamble2025_2503.00780,
  title={Enhanced Multi-Class Classification of Gastrointestinal Endoscopic Images with Interpretable Deep Learning Model},
  author={Astitva Kamble and Vani Bandodkar and Saakshi Dharmadhikary and Veena Anand and Pradyut Kumar Sanki and Mei X. Wu and Biswabandhu Jana},
  journal={arXiv preprint arXiv:2503.00780},
  year={2025}
}