
VerifiableFL: Verifiable Claims for Federated Learning using Exclaves

Main: 12 pages, 12 figures, 3 tables
Bibliography: 3 pages
Abstract

In federated learning (FL), data providers jointly train a machine learning model without sharing their training data. This makes it challenging to provide verifiable claims about the trained FL model, e.g., related to the employed training data, any data sanitization, or the correct training algorithm: a malicious data provider can simply deviate from the correct training protocol without detection. While prior FL training systems have explored the use of trusted execution environments (TEEs) to protect the training computation, such approaches rely on both the confidentiality and integrity of TEEs. The confidentiality guarantees of TEEs, however, have been shown to be vulnerable to a wide range of attacks, such as side-channel attacks. We describe VerifiableFL, a system for training FL models that establishes verifiable claims about trained FL models with the help of fine-grained runtime attestation proofs. Since these runtime attestation proofs require only integrity protection, VerifiableFL generates them using a new abstraction, exclaves. Exclaves are integrity-only execution environments that contain no software-managed secrets and are therefore immune to data-leakage attacks. VerifiableFL uses exclaves to attest individual data transformations during FL training without relying on confidentiality guarantees. The runtime attestation proofs form an attested dataflow graph of the entire FL training computation, which an auditor checks to ensure that the trained FL model satisfies its claims, such as the use of data sanitization by data providers or correct aggregation by the model provider. VerifiableFL extends the NVFlare FL framework to use exclaves, and we show that it introduces less than 12% overhead compared to unprotected FL training.
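To make the auditing step concrete, the following is a minimal, hypothetical Python sketch of an attested dataflow graph and an auditor check. The node names, the proof format (a SHA-256 hash over the node name, its inputs, and its output hash), and the sanitize-before-aggregate policy are illustrative assumptions for this sketch, not VerifiableFL's actual proof format or claim language.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class AttestedNode:
    """One data transformation with its (hypothetical) runtime attestation proof."""
    name: str
    inputs: list        # names of upstream nodes in the dataflow graph
    output_hash: str    # hash of the transformation's output
    proof: str          # attestation proof binding name, inputs, and output

def make_proof(name, inputs, output_hash):
    # Illustrative proof: a hash over the node's identity and dataflow edges.
    msg = f"{name}|{','.join(inputs)}|{output_hash}"
    return hashlib.sha256(msg.encode()).hexdigest()

def attest(name, inputs, output):
    # Stand-in for an exclave attesting one transformation at runtime.
    h = hashlib.sha256(output.encode()).hexdigest()
    return AttestedNode(name, inputs, h, make_proof(name, inputs, h))

def audit(graph, sanitize_prefix="sanitize"):
    # Step 1: verify every node's attestation proof.
    for n in graph.values():
        if n.proof != make_proof(n.name, n.inputs, n.output_hash):
            return False
    # Step 2: check a claim: every path into "aggregate" passes sanitization.
    def sanitized(node):
        if node.name.startswith(sanitize_prefix):
            return True
        return bool(node.inputs) and all(sanitized(graph[i]) for i in node.inputs)
    return sanitized(graph["aggregate"])

# Toy run: two data providers sanitize then train; the model provider aggregates.
g = {}
g["sanitize_p1"] = attest("sanitize_p1", [], "clean-data-1")
g["sanitize_p2"] = attest("sanitize_p2", [], "clean-data-2")
g["train_p1"] = attest("train_p1", ["sanitize_p1"], "update-1")
g["train_p2"] = attest("train_p2", ["sanitize_p2"], "update-2")
g["aggregate"] = attest("aggregate", ["train_p1", "train_p2"], "global-model")
print(audit(g))  # True
```

Tampering with any node's recorded output after the fact (e.g., replacing `g["train_p1"].output_hash`) invalidates its proof, so `audit` returns `False`; this is the integrity-only property the abstract attributes to exclaves.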
