SEER: Self-Explainability Enhancement of Large Language Models' Representations

7 February 2025
Guanxu Chen
Dongrui Liu
Tao Luo
Jing Shao
Abstract

Explaining the hidden representations of Large Language Models (LLMs) offers a window into their underlying inference logic and a way to improve their reliability in application scenarios. However, previous methods introduce external "black-box" modules to explain "black-box" LLMs, which adds uncertainty and fails to provide faithful explanations. In this paper, we propose SEER, a self-explaining method that enhances LLMs' explainability by aggregating representations of the same concept and disentangling representations of different concepts in the representation space. In this way, SEER provides faithful explanations, carried by the representations themselves, synchronously with the LLM's output. Additionally, we showcase applications of SEER on trustworthiness-related tasks (e.g., safety-risk classification and detoxification), where self-explained LLMs achieve consistent improvements in both explainability and performance. More crucially, we theoretically analyze SEER's improvement of LLMs' generalization ability through optimal transport theory.
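The abstract's core mechanism, pulling representations of the same concept together while pushing representations of different concepts apart, reads naturally as a supervised contrastive objective over hidden states. The sketch below is a minimal illustration under that reading, not the paper's actual implementation; the function name, tensor shapes, and temperature are assumptions.

# Hypothetical sketch: a contrastive loss that aggregates same-concept
# representations and disentangles different-concept ones. Illustrative only;
# names, shapes, and hyperparameters are assumptions, not from the paper.
import torch
import torch.nn.functional as F

def concept_contrastive_loss(hidden, concept_ids, temperature=0.1):
    """hidden: (N, d) representations; concept_ids: (N,) integer concept labels."""
    z = F.normalize(hidden, dim=-1)              # unit-norm, so dot product = cosine
    sim = z @ z.t() / temperature                # (N, N) pairwise similarities
    n = hidden.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=hidden.device)
    pos = (concept_ids.unsqueeze(0) == concept_ids.unsqueeze(1)) & ~eye
    sim = sim.masked_fill(eye, float("-inf"))    # drop self-similarity
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    # average log-likelihood of same-concept ("positive") pairs, per anchor
    pos_counts = pos.sum(1).clamp(min=1)
    return -(log_prob.masked_fill(~pos, 0.0).sum(1) / pos_counts).mean()

if __name__ == "__main__":
    h = torch.randn(8, 16)                       # 8 toy hidden states, d=16
    c = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])   # two samples per concept
    print(concept_contrastive_loss(h, c))

Minimizing this loss raises the softmax probability of same-concept pairs relative to all others in the batch, which simultaneously clusters (aggregates) each concept and separates (disentangles) distinct concepts.

The abstract also invokes optimal transport to analyze generalization. As a purely illustrative companion, the following plain-torch sketch computes an entropic (Sinkhorn) optimal-transport cost between two batches of representations, e.g. train- versus test-time hidden states; the epsilon, iteration count, and cost normalization are assumptions chosen for numerical stability, not values from the paper.

# Hypothetical sketch of the kind of OT quantity such an analysis might bound.
import torch

def sinkhorn_distance(x, y, eps=0.1, iters=100):
    """Entropic OT cost between point clouds x: (n, d) and y: (m, d)."""
    cost = torch.cdist(x, y) ** 2                # squared Euclidean cost matrix
    cost = cost / cost.max()                     # normalize for numerical stability
    a = torch.full((x.size(0),), 1.0 / x.size(0))  # uniform source weights
    b = torch.full((y.size(0),), 1.0 / y.size(0))  # uniform target weights
    K = torch.exp(-cost / eps)                   # Gibbs kernel
    u = torch.ones_like(a)
    for _ in range(iters):                       # Sinkhorn fixed-point updates
        v = b / (K.t() @ u)
        u = a / (K @ v)
    plan = u.unsqueeze(1) * K * v.unsqueeze(0)   # entropic transport plan
    return (plan * cost).sum()

print(sinkhorn_distance(torch.randn(32, 16), torch.randn(48, 16)))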

@article{chen2025_2502.05242,
  title={SEER: Self-Explainability Enhancement of Large Language Models' Representations},
  author={Guanxu Chen and Dongrui Liu and Tao Luo and Jing Shao},
  journal={arXiv preprint arXiv:2502.05242},
  year={2025}
}