Enhancing ML Model Interpretability: Leveraging Fine-Tuned Large Language Models for Better Understanding of AI

2 May 2025
Jonas Bokstaller
Julia Altheimer
Julian Dormehl
Alina Buss
Jasper Wiltfang
Johannes Schneider
Maximilian Röglinger
Abstract

Across various sectors, applications of eXplainable AI (XAI) have gained momentum as the black-box nature of prevailing Machine Learning (ML) models has become increasingly apparent. In parallel, Large Language Models (LLMs) have advanced significantly in their ability to understand human language and complex patterns. Combining both, this paper presents a novel reference architecture for interpreting XAI outputs through an interactive chatbot powered by a fine-tuned LLM. We instantiate the reference architecture in the context of State-of-Health (SoH) prediction for batteries and validate its design in multiple evaluation and demonstration rounds. The evaluation indicates that the implemented prototype enhances the human interpretability of ML, especially for users with less XAI experience.
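The abstract's core idea, feeding XAI outputs to an LLM-powered chatbot for plain-language explanation, can be sketched minimally. The function below formats a SoH prediction and its feature attributions (e.g., SHAP-style values) into a prompt that such a chatbot could answer; all names, feature labels, and the prompt template are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: turn XAI feature attributions for a battery
# State-of-Health (SoH) prediction into a prompt for an LLM chatbot.
# The template and feature names are assumptions for illustration only.

def build_explanation_prompt(prediction: float, attributions: dict) -> str:
    """Format a prediction and its feature attributions as an LLM prompt."""
    # Rank features by absolute attribution so the most influential come first.
    ranked = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = [f"- {name}: {value:+.3f}" for name, value in ranked]
    return (
        f"The ML model predicts a battery State-of-Health of {prediction:.1%}.\n"
        "Feature attributions (positive values push SoH up):\n"
        + "\n".join(lines)
        + "\nExplain this prediction to a user unfamiliar with XAI."
    )

prompt = build_explanation_prompt(
    0.87,
    {"cycle_count": -0.12, "avg_temperature": -0.05, "charge_rate": 0.03},
)
print(prompt)
```

In the paper's architecture this prompt would be sent to the fine-tuned LLM, which returns a conversational explanation; the fine-tuning step itself is beyond this sketch.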

@article{bokstaller2025_2505.02859,
  title={Enhancing ML Model Interpretability: Leveraging Fine-Tuned Large Language Models for Better Understanding of AI},
  author={Jonas Bokstaller and Julia Altheimer and Julian Dormehl and Alina Buss and Jasper Wiltfang and Johannes Schneider and Maximilian Röglinger},
  journal={arXiv preprint arXiv:2505.02859},
  year={2025}
}