An Algorithm for Learning Smaller Representations of Models With Scarce Data

15 October 2020
Adrian de Wynter
Abstract

We present a greedy algorithm for solving binary classification problems in situations where the dataset is either too small or not fully representative of the problem being solved, and obtaining more data is not possible. This algorithm is of particular interest when training small models that have trouble generalizing. It relies on a trained model with loose accuracy constraints, an iterative hyperparameter pruning procedure, and a function used to generate new data. An analysis of correctness and runtime complexity under ideal conditions, along with an extension to deep neural networks, is provided. In the former case we obtain an asymptotic bound of $O\left(|\Theta^2|\left(\log{|\Theta|} + |\theta^2| + T_f\left(|D|\right)\right) + \bar{S}|\Theta||E|\right)$, where $|\Theta|$ is the cardinality of the set of hyperparameters $\theta$ to be searched; $|E|$ and $|D|$ are the sizes of the evaluation and training datasets, respectively; $\bar{S}$ and $\bar{f}$ are the inference times for the trained model and the candidate model; and $T_f(|D|)$ is a polynomial in $|D|$ and $\bar{f}$. Under these conditions, this algorithm returns a solution that is $1 \leq r \leq 2(1 - 2^{-|\Theta|})$ times better than simply enumerating and training with any $\theta \in \Theta$. As part of our analysis of the generating function we also prove that, under certain assumptions, if an open cover of $D$ has the same homology as the manifold where the support of the underlying probability distribution lies, then $D$ is learnable, and vice versa.
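The abstract describes the procedure only at a high level: a reference model trained under loose accuracy constraints, an iterative hyperparameter pruning loop, and a generating function that produces new data. The snippet below is a minimal sketch of such a loop under those assumptions; every name (train, evaluate, generate_data, the halve-the-candidates pruning rule) is a hypothetical placeholder and is not taken from the paper.

```python
# Minimal sketch of a greedy search with iterative hyperparameter pruning and
# synthetic data generation, loosely following the abstract's high-level description.
# All callables and the pruning rule are hypothetical placeholders, not the paper's interface.

from typing import Any, Callable, List, Tuple

def greedy_prune_search(
    thetas: List[Any],                           # candidate hyperparameter settings (Theta)
    train: Callable[[Any, list], Any],           # trains a candidate model with theta on data
    evaluate: Callable[[Any, list], float],      # accuracy of a model on the evaluation set E
    generate_data: Callable[[Any, list], list],  # generating function: produces new labeled samples
    reference_model: Any,                        # trained model with loose accuracy constraints
    data: list,                                  # scarce training set D
    eval_set: list,                              # evaluation set E
    rounds: int = 10,
) -> Tuple[Any, Any]:
    """Greedily keep the best-scoring hyperparameters, prune the rest,
    and augment the training data between rounds."""
    best_theta, best_model, best_score = None, None, float("-inf")
    candidates = list(thetas)
    for _ in range(rounds):
        if not candidates:
            break
        # Score every surviving candidate on the current data.
        scored = []
        for theta in candidates:
            model = train(theta, data)
            scored.append((evaluate(model, eval_set), theta, model))
        scored.sort(key=lambda s: s[0], reverse=True)
        if scored[0][0] > best_score:
            best_score, best_theta, best_model = scored[0]
        # Greedy pruning step (placeholder rule): discard the worse half of the candidates.
        candidates = [theta for _, theta, _ in scored[: max(1, len(scored) // 2)]]
        # Use the reference model and the generating function to grow D.
        data = data + generate_data(reference_model, data)
    return best_theta, best_model
```

A usage example would supply task-specific `train`, `evaluate`, and `generate_data` callables; note that the approximation guarantee $1 \leq r \leq 2(1 - 2^{-|\Theta|})$ quoted above applies to the paper's own algorithm, not to this illustrative sketch.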
