Moderately Supervised Learning: Definition, Framework and Generality
Learning with supervision has achieved remarkable success in numerous artificial intelligence (AI) applications. In the current literature, by referring to the properties of the labels prepared for the training data set, learning with supervision is categorized as supervised learning (SL) and weakly supervised learning (WSL). SL concerns the situation where the training data set is assigned ideal labels, while WSL concerns the situation where the training data set is assigned non-ideal labels. However, without considering the properties of the transformation from the given labels to learnable targets, the definition of SL remains abstract, concealing details that can be critical to building appropriate solutions for specific SL tasks. It is therefore desirable to reveal these details more concretely. This article attempts to achieve this goal by expanding the categorization of SL and investigating the sub-type that plays the central role in SL. More specifically, taking into account the properties of the transformation from the given labels to learnable targets, we first categorize SL into three narrower sub-types. We then focus on the moderately supervised learning (MSL) sub-type, which concerns the situation where the given labels are ideal but, due to the simplicity of the annotation, careful designs are required to transform them into learnable targets. From the perspectives of definition, framework and generality, we comprehensively illustrate MSL and reveal which details are concealed by the abstractness of the definition of SL. At the same time, the presentation of this paper serves as a tutorial for AI application engineers on viewing a problem to be solved from a mathematician's perspective.