One of the main barriers to adoption of Machine Learning (ML) is that ML models can fail unexpectedly. In this work, we aim to provide practitioners with a guide to better understand why ML models fail and to equip them with techniques they can use to reason about failure. Specifically, we discuss failure as being caused either by lack of reliability or by lack of robustness. Differentiating the causes of failure in this way allows us to formally define why models fail from first principles and to tie these definitions to engineering concepts and real-world deployment settings. Throughout the document we provide 1) a summary of important theoretical concepts in reliability and robustness, 2) a sampling of current techniques that practitioners can utilize to reason about ML model reliability and robustness, and 3) examples that show how these concepts and techniques can apply to real-world settings.
@article{heim2025_2503.00563,
  title={A Guide to Failure in Machine Learning: Reliability and Robustness from Foundations to Practice},
  author={Eric Heim and Oren Wright and David Shriver},
  journal={arXiv preprint arXiv:2503.00563},
  year={2025}
}