Mixup Training as the Complexity Reduction

International Conference on Artificial Neural Networks (ICANN), 2020
11 June 2020
Masanari Kimura
arXiv:2006.06231
Abstract

Machine learning has achieved remarkable results in recent years, driven by growing amounts of data and the development of computational resources. Despite this excellent performance, however, machine learning models often suffer from over-fitting. Many data augmentation methods have been proposed to tackle this problem; one of them is Mixup, a recently proposed regularization procedure that linearly interpolates a random pair of training examples. This regularization method works very well experimentally, but its theoretical guarantees have not been fully discussed. In this study, we aim to explain why Mixup works well from the perspective of computational learning theory. In addition, we reveal how the effect of Mixup changes with the situation, and we investigate the effects of changes in Mixup's parameter. This contributes both to the search for optimal parameters and to estimating the effects of the parameters currently in use. The results of this study provide a theoretical clarification of when and how regularization by Mixup is effective.
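
The abstract analyzes Mixup but does not restate the procedure itself. For reference, the sketch below follows the standard formulation of Zhang et al. (2018): draw an interpolation coefficient λ from a Beta(α, α) distribution and replace each training pair (x_i, y_i) with a convex combination of itself and a randomly chosen partner; α is the Mixup parameter whose effect the abstract says the paper investigates. The helper name `mixup_batch` and the toy shapes are illustrative only, not taken from the paper.

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Mix a batch by convexly combining each example with a random partner.

    x: (batch, ...) inputs; y: (batch, num_classes) one-hot labels.
    alpha: concentration of the Beta(alpha, alpha) distribution -- the
    Mixup parameter whose effect on regularization the paper analyzes.
    """
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)        # interpolation coefficient in [0, 1]
    perm = rng.permutation(len(x))      # random pairing within the batch
    x_mixed = lam * x + (1.0 - lam) * x[perm]
    y_mixed = lam * y + (1.0 - lam) * y[perm]
    return x_mixed, y_mixed

# Toy usage: 4 examples, 8 features, 3 classes.
x = np.random.randn(4, 8)
y = np.eye(3)[[0, 2, 1, 0]]
x_mix, y_mix = mixup_batch(x, y, alpha=0.4)
```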
