
Probing Information Distribution in Transformer Architectures through Entropy Analysis

Main: 9 pages, 5 figures. Bibliography: 2 pages.
Abstract

This work explores entropy analysis as a tool for probing information distribution within Transformer-based architectures. By quantifying token-level uncertainty and examining entropy patterns across different stages of processing, we investigate how information is managed and transformed within these models. As a case study, we apply the methodology to a GPT-based large language model, illustrating its potential to reveal insights into model behavior and internal representations. This approach may contribute to the development of interpretability and evaluation frameworks for Transformer-based models.
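To make the core quantity concrete, the sketch below computes the Shannon entropy of a causal language model's next-token distribution at each position. GPT-2 via the Hugging Face transformers library is used as a stand-in for the unspecified GPT-based model, and the prompt is illustrative; neither reflects the paper's actual setup.

```python
# Minimal sketch: token-level entropy of a causal LM's next-token
# distributions. GPT-2 and the prompt are assumptions, not the
# paper's actual model or data.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "Entropy analysis probes information flow in Transformers."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Shannon entropy H = -sum_v p(v) log p(v) over the vocabulary,
# computed independently at each sequence position.
probs = torch.softmax(logits, dim=-1)
entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=-1).squeeze(0)

for token_id, h in zip(inputs["input_ids"][0], entropy):
    print(f"{tokenizer.decode(token_id)!r:>15}  H = {h:.3f} nats")
```

High entropy at a position indicates the model is uncertain about the next token; tracking how this quantity varies across tokens (and, with hidden-state readouts, across layers) is the kind of probe the abstract describes.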
