
The Gauss-Markov Adjunction: Categorical Semantics of Residuals in Supervised Learning

Moto Kamiura
Main: 13 pages
Bibliography: 3 pages
Abstract

Enhancing the intelligibility and interpretability of machine learning is a crucial task in responding to the demand for Explicability as an AI principle, and in promoting better social implementation of AI. The aim of our research is to contribute to this improvement by reformulating machine learning models through the lens of category theory, thereby developing a semantic framework for structuring and understanding AI systems. The categorical modeling in this paper clarifies and formalizes the structural interplay between residuals and parameters in supervised learning. We focus on the multiple linear regression model, the most basic form of supervised learning. By defining two concrete categories corresponding to parameters and data, together with an adjoint pair of functors between them, we introduce our categorical formulation of supervised learning. We show that the essential structure of this framework is captured by what we call the Gauss-Markov Adjunction. Within this setting, the dual flow of information can be explicitly described as a correspondence between variations in parameters and variations in residuals. The ordinary least squares estimator of the parameters and the minimum residual are related via the preservation of limits by the right adjoint functor. Furthermore, we position this formulation as an instance of extended denotational semantics for supervised learning, and propose applying a semantic perspective developed in theoretical computer science as a formal foundation for Explicability in AI.
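
For concreteness, the classical objects the abstract refers to can be written in standard notation. The display below is a minimal sketch of the ordinary least squares background (not the paper's categorical construction), assuming the usual model $y = X\beta + \varepsilon$ with a design matrix $X$ of full column rank:

\[
\hat{\beta} \;=\; \operatorname*{arg\,min}_{\beta}\, \lVert y - X\beta \rVert^{2}
\;=\; (X^{\top}X)^{-1}X^{\top}y,
\qquad
\hat{\varepsilon} \;=\; y - X\hat{\beta}.
\]

Here $\hat{\beta}$ is the OLS estimator of the parameters and $\hat{\varepsilon}$ is the minimum residual, i.e. the component of $y$ orthogonal to the column space of $X$; these are the two quantities the Gauss-Markov Adjunction relates via limit preservation by the right adjoint functor.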

@article{kamiura2025_2507.02442,
  title={The Gauss-Markov Adjunction: Categorical Semantics of Residuals in Supervised Learning},
  author={Moto Kamiura},
  journal={arXiv preprint arXiv:2507.02442},
  year={2025}
}