Integration of Explainable AI Techniques with Large Language Models for Enhanced Interpretability for Sentiment Analysis

15 March 2025
Thivya Thogesan
A. Nugaliyadde
K. Wong
Abstract

Interpretability remains a key difficulty in sentiment analysis with Large Language Models (LLMs), particularly in high-stakes applications where it is crucial to understand the rationale behind predictions. This research addresses the problem by introducing a technique that applies SHAP (Shapley Additive Explanations) to LLMs decomposed into components such as the embedding layer, encoder, decoder, and attention layer, providing a layer-by-layer view of how sentiment predictions are formed. By breaking the model into these parts, the approach offers a clearer picture of how it interprets and categorises sentiment. The method is evaluated on the Stanford Sentiment Treebank (SST-2) dataset, showing how different sentences affect different layers. Experimental evaluations demonstrate the effectiveness of layer-wise SHAP analysis in clarifying sentiment-specific token attributions, offering a notable improvement over current whole-model explainability techniques. These results highlight how the proposed approach could improve the reliability and transparency of LLM-based sentiment analysis in critical applications.
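
For context, the following is a minimal sketch (not the authors' implementation) of the whole-model SHAP baseline that the paper's layer-wise decomposition improves upon: token-level SHAP attributions for an SST-2 sentiment classifier, using the public shap and transformers libraries. The model checkpoint and example sentence are assumptions chosen for illustration; the paper's method would additionally attribute the embedding, encoder, decoder, and attention layers separately.

# Sketch only: whole-model SHAP token attributions for an SST-2 classifier.
# The checkpoint name and example sentence below are illustrative assumptions,
# not taken from the paper.
import shap
from transformers import pipeline

# Any SST-2 fine-tuned sentiment model can be substituted here.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    top_k=None,  # return scores for both POSITIVE and NEGATIVE classes
)

# shap.Explainer wraps the text pipeline with a token-masking explainer.
explainer = shap.Explainer(classifier)

sentences = ["The film is bland, but the ending is surprisingly moving."]
shap_values = explainer(sentences)

# Per-token contributions toward the POSITIVE class for the first sentence.
positive = shap_values[0, :, "POSITIVE"]
for token, value in zip(positive.data, positive.values):
    print(f"{token!r:>20}  {value:+.4f}")

Running this prints a signed contribution per token, which is the whole-model view; the layer-wise analysis described in the abstract would repeat such attributions at each decomposed component to show where in the network sentiment-relevant evidence is picked up.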

@article{thogesan2025_2503.11948,
  title={Integration of Explainable AI Techniques with Large Language Models for Enhanced Interpretability for Sentiment Analysis},
  author={Thivya Thogesan and Anupiya Nugaliyadde and Kok Wai Wong},
  journal={arXiv preprint arXiv:2503.11948},
  year={2025}
}