Making Learners (More) Monotone

International Symposium on Intelligent Data Analysis (IDA), 2019
Abstract

Learning performance can show non-monotonic behavior. That is, more data does not necessarily lead to better models, even on average. We propose three algorithms that take a supervised learning model and make it perform more monotone. We prove consistency and monotonicity with high probability, and evaluate the algorithms on scenarios where non-monotonic behavior occurs. Our proposed algorithm $\text{MT}_{\text{HT}}$ makes fewer than $1\%$ non-monotone decisions on MNIST while staying competitive in terms of error rate compared to several baselines.
