Addressing common misinterpretations of KART and UAT in neural network literature
Neural Networks (NN), 2024
Main: 11 pages
Tables: 1
Appendix: 5 pages
Abstract
This note addresses the Kolmogorov-Arnold Representation Theorem (KART) and the Universal Approximation Theorem (UAT), focusing on misinterpretations of them that appear frequently in the neural network literature. Our remarks aim to support a more accurate understanding of KART and UAT among neural network specialists. In addition, we examine the minimal number of neurons required for universal approximation, showing that the number of neurons needed for the exact representation of functions in KART-based networks also suffices for approximation by standard multilayer perceptrons.
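For reference, standard statements of the two theorems, as they are commonly given in the literature (the precise formulations discussed in the note may differ), are the following.

Kolmogorov-Arnold Representation Theorem: every continuous function $f \colon [0,1]^n \to \mathbb{R}$ admits the exact representation
$$ f(x_1, \dots, x_n) = \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \varphi_{q,p}(x_p) \right), $$
where the $\Phi_q$ and $\varphi_{q,p}$ are continuous univariate functions.

Universal Approximation Theorem (one classical form): for every continuous $f$ on a compact set $K \subset \mathbb{R}^n$ and every $\varepsilon > 0$ there exist $N$, weights $w_i \in \mathbb{R}^n$, and scalars $\alpha_i, b_i \in \mathbb{R}$ such that
$$ \sup_{x \in K} \left| f(x) - \sum_{i=1}^{N} \alpha_i \, \sigma(w_i^\top x + b_i) \right| < \varepsilon, $$
where $\sigma$ is a suitable (e.g., continuous sigmoidal) activation function.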
