First-order Newton-type Estimator for Distributed Estimation and Inference

28 November 2018
Xi Chen
Weidong Liu
Yichen Zhang
Abstract

This paper studies distributed estimation and inference for a general statistical problem with a convex loss that may be non-differentiable. For computational efficiency, we restrict ourselves to stochastic first-order optimization, which enjoys low per-iteration complexity. To motivate the proposed method, we first investigate the theoretical properties of a straightforward Divide-and-Conquer Stochastic Gradient Descent (DC-SGD) approach. Our theory shows that DC-SGD imposes a restriction on the number of machines, and this restriction becomes more stringent when the dimension $p$ is large. To overcome this limitation, we propose a new multi-round distributed estimation procedure that approximates the Newton step using only stochastic subgradients. The key component of our method is a computationally efficient estimator of $\Sigma^{-1} w$, where $\Sigma$ is the population Hessian matrix and $w$ is any given vector. Instead of estimating $\Sigma$ (or $\Sigma^{-1}$), which usually requires second-order differentiability of the loss, the proposed First-Order Newton-type Estimator (FONE) directly estimates the vector of interest $\Sigma^{-1} w$ as a whole and is applicable to non-differentiable losses. Our estimator also facilitates inference for the empirical risk minimizer: the key term in the limiting covariance has the form $\Sigma^{-1} w$, which can be estimated by FONE.
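To make the construction concrete, below is a minimal sketch of estimating $\Sigma^{-1} w$ using only stochastic gradient differences, in the spirit of FONE. It assumes a squared loss, whose population Hessian is $\Sigma = \mathbb{E}[x x^\top]$, so that the gradient difference $(\nabla \ell(\theta + \delta z) - \nabla \ell(\theta))/\delta$ is an unbiased first-order estimate of the Hessian-vector product $\Sigma z$. The function names, step size, and iterate averaging are illustrative choices, not the paper's exact recursion.

```python
import numpy as np

# Hedged sketch: approximate z = Sigma^{-1} w using only stochastic gradient
# differences, in the spirit of FONE. We assume the squared loss
#   l(theta; x, y) = 0.5 * (y - x @ theta)**2,
# whose population Hessian is Sigma = E[x x^T], so that
#   (grad(theta + delta*z) - grad(theta)) / delta = (x @ z) * x
# is an unbiased first-order estimate of Sigma @ z.
# Names (fone_solve, eta, delta) are illustrative, not the paper's notation.

def grad(theta, x, y):
    """Stochastic gradient of the squared loss at theta for one sample."""
    return x * (x @ theta - y)

def fone_solve(w, theta, data, eta=0.05, delta=1e-4, seed=0):
    """Iterate toward the solution of Sigma z = w with first-order info only."""
    rng = np.random.default_rng(seed)
    X, y = data
    z = np.zeros_like(w)
    z_bar = np.zeros_like(w)  # running average of iterates, for stability
    for t in range(len(X)):
        i = rng.integers(len(X))
        # First-order estimate of the Hessian-vector product Sigma @ z.
        hz = (grad(theta + delta * z, X[i], y[i]) - grad(theta, X[i], y[i])) / delta
        # Stochastic fixed-point step: the update vanishes when Sigma z = w.
        z -= eta * (hz - w)
        z_bar += (z - z_bar) / (t + 1)
    return z_bar

# Toy sanity check: with x ~ N(0, I_p), Sigma = I_p, so Sigma^{-1} w = w.
rng = np.random.default_rng(1)
p, n = 5, 20000
X = rng.standard_normal((n, p))
theta_hat = rng.standard_normal(p)
y = X @ theta_hat  # noiseless responses, for illustration only
w = rng.standard_normal(p)
z_hat = fone_solve(w, theta_hat, (X, y))
print(np.linalg.norm(z_hat - w))  # should be small
```

In the toy check $\Sigma = I_p$, so the recursion should return approximately $w$ itself; this only verifies convergence to the fixed point of $\Sigma z = w$. For a non-differentiable loss, the same scheme would replace $\nabla \ell$ with a stochastic subgradient, which is the setting FONE is designed for.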
