Federated Learning Framework with Straggling Mitigation and Privacy-Awareness for AI-based Mobile Application Services

17 June 2021
Y. Saputra
Diep N. Nguyen
D. Hoang
Viet Quoc Pham
E. Dutkiewicz
W. Hwang
Abstract

In this work, we propose a novel framework to address straggling and privacy issues in federated learning (FL)-based mobile application services, taking into account the limited computing/communication resources at mobile users (MUs) and the mobile application provider (MAP), the privacy cost, and the rationality and incentive competition among MUs in contributing data to the MAP. Specifically, the MAP first determines a set of the best MUs for the FL process based on the MUs' provided information/features. To mitigate straggling while remaining privacy-aware, each selected MU can then encrypt part of its local data and upload the encrypted data to the MAP for an encrypted training process that runs in addition to the local training process. To this end, each selected MU proposes a contract to the MAP according to its expected trainable local data and privacy-protected encrypted data. To find the optimal contracts that maximize the utilities of the MAP and all participating MUs while maintaining high learning quality for the whole system, we first formulate a multi-principal one-agent contract-based problem leveraging FL-based utility functions. These utility functions account for the MUs' privacy cost, the MAP's limited computing resources, and the asymmetric information between the MAP and the MUs. We then transform the problem into an equivalent low-complexity problem and develop a lightweight iterative algorithm to find the optimal solutions efficiently. Experiments on a real-world dataset show that our framework can speed up the training process by up to 49% and improve prediction accuracy by up to 4.6 times, while enhancing the network's social welfare, i.e., the total utility of all participating entities, by up to 114% under privacy cost considerations, compared with baseline methods.
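To make the straggler-mitigation idea above concrete, below is a minimal, self-contained Python sketch, not the authors' implementation: each selected MU keeps a fraction of its data for local training and offloads the rest to the MAP, which trains on the offloaded portion in parallel before a FedAvg-style aggregation. All names (Mu, fl_round, alpha) are hypothetical, and a plain data copy stands in for the encrypted upload; a real deployment would use actual encrypted training and the paper's contract-based mechanism to decide how much data each MU offloads.

import numpy as np

rng = np.random.default_rng(0)

def local_sgd(w, X, y, lr=0.1, epochs=5):
    # Plain full-batch gradient descent on a linear model; stands in for
    # whatever local training an MU would actually run.
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

class Mu:
    # Mobile user holding a private dataset; alpha is the fraction kept
    # for local training (hypothetical knob, set here by hand instead of
    # by the paper's optimal contracts).
    def __init__(self, X, y, alpha):
        self.X, self.y, self.alpha = X, y, alpha

    def split_data(self):
        n_local = int(self.alpha * len(self.y))
        local = (self.X[:n_local], self.y[:n_local])
        # Placeholder for encryption: a real system would encrypt this
        # portion before uploading it to the MAP.
        offloaded = (self.X[n_local:], self.y[n_local:])
        return local, offloaded

def fl_round(w_global, mus):
    updates, chunks = [], []
    for mu in mus:
        (Xl, yl), off = mu.split_data()
        updates.append(local_sgd(w_global.copy(), Xl, yl))
        chunks.append(off)
    # The MAP trains on all offloaded data while the MUs train locally,
    # so slow MUs no longer hold back the round (straggler mitigation).
    Xo = np.vstack([c[0] for c in chunks])
    yo = np.concatenate([c[1] for c in chunks])
    updates.append(local_sgd(w_global.copy(), Xo, yo))
    return np.mean(updates, axis=0)   # FedAvg-style aggregation

# Toy run: three MUs with data drawn from y = X @ [2, -1] plus small noise.
w_true = np.array([2.0, -1.0])
mus = []
for alpha in (0.8, 0.5, 0.3):         # resource-poorer MUs offload more
    X = rng.normal(size=(200, 2))
    mus.append(Mu(X, X @ w_true + 0.01 * rng.normal(size=200), alpha))

w = np.zeros(2)
for _ in range(10):
    w = fl_round(w, mus)
print("learned weights:", np.round(w, 3))

In this sketch the hand-picked alpha values play the role that the paper's optimal contracts would: they decide how much data each MU trains locally versus offloads in encrypted form, given its resources and privacy cost.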

arXiv: 2106.09261