ResearchTrend.AI

arXiv:2412.10537

VerifiableFL: Verifiable Claims for Federated Learning using Exclaves

13 December 2024
Jinnan Guo
Kapil Vaswani
Andrew Paverd
Peter R. Pietzuch
Main: 12 pages, 12 figures; bibliography: 3 pages, 3 tables
Abstract

In federated learning (FL), data providers jointly train a machine learning model without sharing their training data. This makes it challenging to provide verifiable claims about properties of the final trained FL model, e.g., related to the training data employed, the data sanitization applied, or the correctness of the training algorithm -- a malicious data provider can simply deviate from the correct training protocol without being detected. While prior FL training systems have explored the use of trusted execution environments (TEEs) to combat such attacks, existing approaches struggle to link attestation proofs from TEEs robustly and effectively to claims about the trained FL model. TEEs have also been shown to suffer from a wide range of attacks, including side-channel attacks.
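To illustrate the FL setting the abstract describes, here is a minimal federated-averaging (FedAvg) sketch in which each data provider trains locally on private data and shares only model parameters with the aggregator. This is a generic illustration of FL, not the paper's VerifiableFL protocol; the one-dimensional linear model and example data are invented for the sketch.

```python
def local_update(w, samples, lr=0.1):
    """One local gradient step on a 1-D linear model y = w * x (squared loss).
    Only the updated parameter leaves the provider, never the samples."""
    grad = sum(2 * (w * x - y) * x for x, y in samples) / len(samples)
    return w - lr * grad

def fed_avg(local_weights):
    """Server-side aggregation: plain average over client updates."""
    return sum(local_weights) / len(local_weights)

# Three providers with private data drawn from roughly y = 2x.
providers = [
    [(1.0, 2.0), (2.0, 4.1)],
    [(1.5, 3.0), (3.0, 6.2)],
    [(0.5, 1.0), (2.5, 4.9)],
]

w = 0.0
for _ in range(50):  # training rounds
    w = fed_avg([local_update(w, data) for data in providers])
```

Note that nothing in this loop lets the aggregator verify that a provider actually ran `local_update` on its claimed data with the claimed sanitization; a malicious provider can return any value it likes, which is exactly the gap the paper targets.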
