
Interpret the Internal States of Recommendation Model with Sparse Autoencoder

8 pages main text, 2 pages bibliography, 6 figures, 9 tables
Abstract

Recommendation model interpretation aims to reveal a model's computation process, enhancing transparency, interpretability, and trustworthiness by clarifying the relationships between inputs, internal activations, and outputs. However, the complex and often opaque nature of deep learning models complicates interpretation, and most existing methods are tailored to specific model architectures, limiting their generalizability across different types of recommendation models. To address these challenges, we propose RecSAE, an automated and generalizable probing framework that interprets Recommenders with a Sparse AutoEncoder. It extracts interpretable latents from the internal states of recommendation models and links them to semantic concepts. RecSAE leaves the original model unaltered during interpretation and also enables targeted de-biasing of the model based on the interpreted results. Specifically, RecSAE operates in three steps. First, it probes activations before the prediction layer to capture internal representations. Second, the RecSAE module is trained on these activations with a larger latent space and sparsity constraints, making its latents more mono-semantic than the original model activations. Third, RecSAE uses a language model to construct concept descriptions with confidence scores, based on the relationships between latent activations and recommendation outputs. Experiments on three types of models (general, graph-based, and sequential) across three widely used datasets demonstrate the effectiveness and generalizability of the RecSAE framework. The interpreted concepts are further validated by human experts, showing strong alignment with human perception. Overall, RecSAE provides model-level interpretation for various types of recommenders without affecting their functions, while also offering the potential for targeted model tuning.
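
The abstract's three-step pipeline can be pictured with a minimal sketch, which is not the authors' implementation: an overcomplete sparse autoencoder is trained on activations captured via a forward hook placed before a recommender's prediction layer. The toy recommender, dimensions (HIDDEN_DIM, LATENT_DIM), and L1 penalty coefficient below are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

HIDDEN_DIM, LATENT_DIM = 64, 512   # assumed sizes; latent space larger than hidden state
L1_COEF = 1e-3                     # assumed sparsity penalty weight

class SparseAutoencoder(nn.Module):
    def __init__(self, d_in: int, d_latent: int):
        super().__init__()
        self.encoder = nn.Linear(d_in, d_latent)
        self.decoder = nn.Linear(d_latent, d_in)

    def forward(self, x: torch.Tensor):
        z = torch.relu(self.encoder(x))  # non-negative, sparse latents
        return self.decoder(z), z

# Toy stand-in for a frozen recommender; only its pre-prediction activations matter here.
recommender = nn.Sequential(nn.Linear(32, HIDDEN_DIM), nn.ReLU(), nn.Linear(HIDDEN_DIM, 1))
captured = []
recommender[1].register_forward_hook(lambda mod, inp, out: captured.append(out.detach()))

# Step 1: probe the recommender to collect internal states (model is not modified).
with torch.no_grad():
    recommender(torch.randn(256, 32))
activations = torch.cat(captured)

# Step 2: train the SAE to reconstruct those activations under an L1 sparsity constraint.
sae = SparseAutoencoder(HIDDEN_DIM, LATENT_DIM)
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
for step in range(200):
    recon, z = sae(activations)
    loss = nn.functional.mse_loss(recon, activations) + L1_COEF * z.abs().mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Step 3 (not shown): top-activating inputs per latent would be passed to a language
# model to draft concept descriptions with confidence scores.
```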
