Black-Box Generalization: Stability of Zeroth-Order Learning

14 February 2022
Konstantinos E. Nikolakakis
Farzin Haddadpour
Dionysios S. Kalogerias
Amin Karbasi
Abstract

We provide the first generalization error analysis for black-box learning through derivative-free optimization. Under the assumption of a Lipschitz and smooth unknown loss, we consider the Zeroth-order Stochastic Search (ZoSS) algorithm, which updates a d-dimensional model by replacing stochastic gradient directions with stochastic differences of K+1 perturbed loss evaluations per dataset (example) query. For both unbounded and bounded, possibly nonconvex losses, we present the first generalization bounds for the ZoSS algorithm. These bounds coincide with those for SGD and, rather surprisingly, are independent of d, K, and the batch size m, under appropriate choices of a slightly decreased learning rate. For bounded nonconvex losses and a batch size m = 1, we additionally show that both the generalization error and the learning rate are independent of d and K, and remain essentially the same as for SGD, even with only two function evaluations. Our results extensively extend and consistently recover established results for SGD from prior work, on both generalization bounds and corresponding learning rates. If additionally m = n, where n is the dataset size, we derive generalization guarantees for full-batch GD as well.
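To make the update rule concrete, the following is a minimal Python/NumPy sketch of one ZoSS-style step: the gradient of the per-example loss is approximated from K+1 function evaluations (one at the current model plus K at randomly perturbed models) and the model is moved against that estimate. The function and parameter names (zoss_update, loss_fn, the smoothing radius mu) and the forward-difference estimator with sphere-sampled directions are illustrative assumptions, not the paper's exact estimator or constants.

    import numpy as np

    def zoss_update(w, loss_fn, example, lr=1e-2, K=1, mu=1e-3, rng=None):
        """One illustrative ZoSS-style step: estimate the gradient of the
        per-example loss from K+1 function evaluations only, then take an
        SGD-like step against the estimate."""
        rng = np.random.default_rng() if rng is None else rng
        d = w.shape[0]
        base = loss_fn(w, example)                 # 1 evaluation at the current model
        grad_est = np.zeros(d)
        for _ in range(K):                         # K extra evaluations at perturbed models
            u = rng.standard_normal(d)
            u /= np.linalg.norm(u)                 # random unit direction
            grad_est += (loss_fn(w + mu * u, example) - base) / mu * u
        grad_est *= d / K                          # usual scaling for sphere-sampled directions
        return w - lr * grad_est                   # no derivatives of loss_fn are ever computed

    # Example usage with a toy least-squares per-example loss:
    # loss = lambda w, xy: 0.5 * (xy[0] @ w - xy[1]) ** 2
    # w = zoss_update(np.zeros(5), loss, (np.ones(5), 1.0), K=2)

Per the abstract, the learning rate lr would be chosen slightly smaller than the corresponding SGD rate for the generalization bounds to match.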
