Multi-Dimensional Explanation of Ratings from Reviews
Automated predictions require explanations to be interpretable by humans. However, neural methods generally offer little transparency, and interpretability often comes at the cost of performance. In this paper, we consider explaining multi-aspect sentiments with text snippets from reviews that suffice to make the prediction. Earlier work used attention mechanisms to find words that predict the sentiment toward a specific aspect and to improve recommendation or summarization models. In our work, we propose a neural model that generates, in an unsupervised manner, probabilistic multi-dimensional masks that are interpretable and predict multi-aspect sentiment ratings. We show how multi-task learning improves both interpretability and F1 scores. Our evaluation shows that on two datasets in different domains, our model outperforms strong baselines and generates masks that are strong feature predictors and have a meaningful interpretation.
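As a rough illustration of the core idea, not the paper's actual architecture, a probabilistic per-aspect mask over review tokens can be sketched as follows. All names, shapes, and the softmax-based masking here are assumptions for the sketch; in the paper the mask parameters would be learned jointly with the sentiment heads:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)

T, D, A = 6, 4, 3                   # tokens, embedding dim, aspects (hypothetical sizes)
H = rng.normal(size=(T, D))         # token representations from some encoder
W_mask = rng.normal(size=(A, D))    # per-aspect mask-scoring vectors (learned in practice)
W_out = rng.normal(size=(A, D))     # per-aspect sentiment heads (learned in practice)

# Probabilistic masks: one distribution over the review's tokens per aspect.
masks = softmax(H @ W_mask.T, axis=0)    # shape (T, A); each column sums to 1

# Mask-weighted pooling yields one context vector per aspect.
contexts = masks.T @ H                   # shape (A, D)

# Per-aspect sentiment scores; a multi-task loss over all aspect ratings
# would train the encoder, masks, and heads jointly.
scores = (contexts * W_out).sum(axis=1)  # shape (A,)
```

Each column of `masks` is a distribution over tokens for one aspect, so the mask itself identifies the text snippet that explains the corresponding rating.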