Explainable Artificial Intelligence techniques for interpretation of food datasets: a review

Abstract

Artificial Intelligence (AI) has become essential for analyzing complex data and solving highly challenging tasks. It is being applied across numerous disciplines beyond computer science, including Food Engineering, where there is a growing demand for accurate and trustworthy predictions to meet stringent food quality standards. However, this requires increasingly complex AI models, raising reliability concerns. In response, eXplainable AI (XAI) has emerged to provide insights into AI decision-making, aiding model interpretation by developers and users. Nevertheless, XAI remains underutilized in Food Engineering, leaving these reliability concerns unaddressed. For instance, in food quality control, AI models using spectral imaging can detect contaminants or assess freshness levels, but their opaque decision-making process hinders adoption. XAI techniques such as SHAP (SHapley Additive exPlanations) and Grad-CAM (Gradient-weighted Class Activation Mapping) can pinpoint which spectral wavelengths or image regions contribute most to a prediction, enhancing transparency and aiding quality control inspectors in verifying AI-generated assessments. This survey presents a taxonomy for classifying food quality research that applies XAI techniques, organized by data type and explanation method, to guide researchers in choosing suitable approaches. We also highlight trends, challenges, and opportunities to encourage the adoption of XAI in Food Engineering.
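
As a minimal illustration of the wavelength-level explanations described above, the sketch below applies SHAP to a model trained on synthetic spectra and ranks wavelengths by mean absolute Shapley value. The data, the NIR wavelength grid, and the use of a RandomForestRegressor as the predictive model are illustrative assumptions, not taken from the paper.

# Minimal sketch: ranking spectral wavelengths by SHAP importance
# for a food-quality prediction. All data here are synthetic and the
# wavelength range (900-1700 nm) is a hypothetical NIR band.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
wavelengths = np.linspace(900, 1700, 50)   # hypothetical NIR grid (nm)
X = rng.normal(size=(200, 50))             # 200 spectra x 50 wavelengths
# Synthetic "freshness" score driven by two wavelengths
y = 2.0 * X[:, 10] + X[:, 30] + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Mean absolute SHAP value per wavelength serves as a global importance score
importance = np.abs(shap_values).mean(axis=0)
for i in np.argsort(importance)[::-1][:5]:
    print(f"{wavelengths[i]:.0f} nm: mean |SHAP| = {importance[i]:.3f}")

In this toy setup the two wavelengths that actually drive the synthetic score surface at the top of the ranking, which is the kind of transparency the abstract attributes to SHAP-based explanations of spectral models.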

@article{arrighi2025_2504.10527,
  title={Explainable Artificial Intelligence techniques for interpretation of food datasets: a review},
  author={Leonardo Arrighi and Ingrid Alves de Moraes and Marco Zullich and Michele Simonato and Douglas Fernandes Barbin and Sylvio Barbon Junior},
  journal={arXiv preprint arXiv:2504.10527},
  year={2025}
}