Addressing common misinterpretations of KART and UAT in neural network literature
Neural Networks (NN), 2024
Main: 11 pages, 1 table; Appendix: 5 pages
Abstract
This note addresses the Kolmogorov-Arnold Representation Theorem (KART) and the Universal Approximation Theorem (UAT), focusing on frequent misinterpretations of both in the literature on neural network approximation. Our remarks aim to support a more accurate understanding of KART and UAT among neural network specialists. In addition, we examine the minimal number of neurons required for universal approximation, showing that KART's lower bounds extend to standard multilayer perceptrons, even with smooth activation functions.
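To make the UAT statement concrete, here is a minimal numerical sketch (not taken from the paper) showing that a single-hidden-layer network of the form sum_i c_i * tanh(w_i x + b_i) can approximate a continuous function on [0, 1]. The hidden weights are fixed at random and only the output layer is fit by least squares, a "random features" shortcut chosen purely for illustration; the target function sin(2*pi*x), the seed, and the layer width are all arbitrary assumptions.

```python
import numpy as np

# Illustrative sketch of the Universal Approximation Theorem (UAT):
# approximate a continuous target on [0, 1] with one hidden tanh layer.
rng = np.random.default_rng(0)
n_hidden = 100
w = rng.normal(scale=10.0, size=n_hidden)    # random hidden weights (fixed)
b = rng.uniform(-10.0, 10.0, size=n_hidden)  # random hidden biases (fixed)

x = np.linspace(0.0, 1.0, 400)
target = np.sin(2 * np.pi * x)               # arbitrary continuous target

# Hidden activations: shape (400 samples, 100 units).
H = np.tanh(np.outer(x, w) + b)

# Fit only the output weights c by least squares.
c, *_ = np.linalg.lstsq(H, target, rcond=None)

approx = H @ c
max_err = np.max(np.abs(approx - target))
print(f"max error over the grid: {max_err:.2e}")
```

The point is qualitative: the uniform error shrinks as the width grows, which is what UAT guarantees; the theorem says nothing about this particular fitting procedure.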
