Local feature-based explanations are a key component of the XAI toolkit. These explanations compute feature importance values relative to an "interpretable" feature representation; in tabular data, the feature values themselves are often considered interpretable. This paper examines the impact of data engineering choices on local feature-based explanations. We demonstrate that simple, common data engineering techniques, such as binning age into a histogram or choosing a particular encoding for race, can manipulate the feature importance values assigned by popular methods like SHAP. Notably, this sensitivity of explanations to feature representation can be exploited by adversaries to obscure issues such as discrimination. While the intuition behind these results is straightforward, their systematic exploration has been lacking. Previous work has focused on adversarial attacks on feature-based explainers that bias the data or manipulate the model. To the best of our knowledge, this is the first study demonstrating that explainers can be misled by standard, seemingly innocuous data engineering techniques.
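The sketch below is a minimal illustration of the kind of effect described above, not the paper's experimental setup: the synthetic dataset, the gradient-boosting model, the bin edges, and the use of the shap and scikit-learn libraries are all illustrative assumptions. It trains the same kind of model on two representations of an "age" feature (raw values vs. coarse histogram bins) and compares the mean absolute SHAP values each representation yields.

```python
# Illustrative sketch: how a feature's representation can shift the
# importance SHAP assigns to it. All data and modeling choices here
# are assumptions for demonstration purposes only.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 2000

# Synthetic tabular data: the outcome depends on both age and income.
age = rng.integers(18, 90, size=n)
income = rng.normal(50_000, 15_000, size=n)
y = ((age > 60).astype(int) + (income > 55_000).astype(int) >= 1).astype(int)


def mean_abs_shap(X: pd.DataFrame, y: np.ndarray) -> pd.Series:
    """Train a model on X and return mean |SHAP value| per feature."""
    model = GradientBoostingClassifier(random_state=0).fit(X, y)
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)
    return pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)


# Representation 1: age kept as raw integer values.
X_raw = pd.DataFrame({"age": age, "income": income})

# Representation 2: age coarsely binned (histogram-style encoding);
# the bin edges are arbitrary illustrative choices.
X_binned = X_raw.copy()
X_binned["age"] = pd.cut(age, bins=[17, 40, 65, 90], labels=False)

print("Raw age representation:\n", mean_abs_shap(X_raw, y), sep="")
print("Binned age representation:\n", mean_abs_shap(X_binned, y), sep="")
# The importance attributed to "age" relative to "income" can shift
# between the two runs, even though the underlying signal is similar.
```

In this toy setting the shift is benign, but the paper's point is that an adversary who controls the feature representation could use the same mechanism to downplay the apparent importance of a sensitive attribute.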
@article{hwang2025_2505.08345,
  title   = {SHAP-based Explanations are Sensitive to Feature Representation},
  author  = {Hyunseung Hwang and Andrew Bell and Joao Fonseca and Venetia Pliatsika and Julia Stoyanovich and Steven Euijong Whang},
  journal = {arXiv preprint arXiv:2505.08345},
  year    = {2025}
}