Manifold Regularized Discriminative Neural Networks

19 November 2015
Shuangfei Zhai, Zhongfei Zhang
Abstract

Unregularized deep neural networks (DNNs) easily overfit when the sample size is limited. We argue that this is mostly due to the discriminative nature of DNNs, which directly model the conditional probability (or score) of labels given the input. Because they ignore the input distribution, DNNs have difficulty generalizing to unseen data. Recent advances in regularization techniques, such as pretraining and dropout, indicate that modeling the input data distribution (either explicitly or implicitly) greatly improves the generalization ability of a DNN. In this work, we explore the manifold hypothesis, which assumes that instances within the same class lie on a smooth manifold. We accordingly propose two simple regularizers for a standard discriminative DNN. The first, named Label-Aware Manifold Regularization, assumes the availability of labels and penalizes a large norm of the gradient of the loss function w.r.t. data points. The second, named Label-Independent Manifold Regularization, does not use label information and instead penalizes the Frobenius norm of the Jacobian matrix of prediction scores w.r.t. data points, which makes semi-supervised learning possible. We perform extensive control experiments on fully supervised and semi-supervised tasks using the MNIST, CIFAR10 and SVHN datasets and achieve excellent results.
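
Both regularizers can be written compactly with automatic differentiation. Below is a minimal PyTorch sketch, assuming a classifier model that maps a batch of inputs to class scores and a cross-entropy training loss; the function names and the random-projection estimator for the Jacobian's Frobenius norm are illustrative choices based on the abstract, not the paper's exact implementation.

import torch
import torch.nn.functional as F

def label_aware_penalty(model, x, y):
    # Label-Aware Manifold Regularization (sketch): penalize the squared
    # norm of the gradient of the supervised loss w.r.t. the inputs.
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    (grad_x,) = torch.autograd.grad(loss, x, create_graph=True)
    return grad_x.pow(2).sum()

def label_independent_penalty(model, x, num_projections=1):
    # Label-Independent Manifold Regularization (sketch): estimate the
    # squared Frobenius norm of the Jacobian of the prediction scores
    # w.r.t. the inputs. For v ~ N(0, I), E[||J^T v||^2] = ||J||_F^2,
    # so random projections avoid materializing the full Jacobian.
    x = x.clone().requires_grad_(True)
    scores = model(x)
    penalty = x.new_zeros(())
    for _ in range(num_projections):
        v = torch.randn_like(scores)
        (jvp,) = torch.autograd.grad(
            scores, x, grad_outputs=v,
            create_graph=True, retain_graph=True,
        )
        penalty = penalty + jvp.pow(2).sum()
    return penalty / num_projections

In training, either penalty would be added to the task loss with a weighting hyperparameter, e.g. loss = F.cross_entropy(model(x), y) + lam * label_independent_penalty(model, x). Note that the label-independent penalty needs no targets, so it can also be evaluated on unlabeled batches, which is what enables the semi-supervised setting described in the abstract.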
