Fully Decentralized Certified Unlearning

Hithem Lamri
Michail Maniatakos
Main: 8 Pages
1 Figure
Bibliography: 3 Pages
18 Tables
Appendix: 15 Pages
Abstract

Machine unlearning (MU) seeks to remove the influence of specified data from a trained model in response to privacy requests or data poisoning. While certified unlearning has been analyzed in centralized and server-orchestrated federated settings (via guarantees analogous to differential privacy, DP), the decentralized setting -- where peers communicate without a coordinator -- remains underexplored. We study certified unlearning in decentralized networks with fixed topologies and propose RR-DU, a random-walk procedure that performs one projected gradient ascent step on the forget set at the unlearning client and a geometrically distributed number of projected descent steps on the retained data elsewhere, combined with subsampled Gaussian noise and projection onto a trust region around the original model. We provide (i) convergence guarantees in the convex case and stationarity guarantees in the nonconvex case, (ii) $(\varepsilon,\delta)$ network-unlearning certificates on client views via subsampled Gaussian Rényi DP (RDP) with segment-level subsampling, and (iii) deletion-capacity bounds that scale with the forget-to-local data ratio and quantify the effect of decentralization (network mixing and randomized subsampling) on the privacy--utility trade-off. Empirically, on image benchmarks (MNIST, CIFAR-10), RR-DU matches a given $(\varepsilon,\delta)$ budget while achieving higher test accuracy than decentralized DP baselines and reducing forget accuracy to random guessing ($\approx 10\%$).
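To make the procedure described in the abstract concrete, below is a minimal, hypothetical Python sketch of an RR-DU-style update: one projected gradient ascent step on the forget set at the unlearning client, followed by a geometrically distributed number of projected descent steps on subsampled retained data at clients visited by a random walk, with Gaussian noise and projection onto a trust region (an l2 ball around the original model). All function names, parameters, and defaults (e.g., `lr`, `q`, `sigma`, `radius`, `subsample`) are illustrative assumptions, not the paper's actual implementation or hyperparameters.

```python
import numpy as np


def project_ball(w, w_orig, radius):
    """Project w onto the l2 ball of the given radius centered at the original model."""
    diff = w - w_orig
    norm = np.linalg.norm(diff)
    if norm <= radius or norm == 0.0:
        return w
    return w_orig + diff * (radius / norm)


def rr_du_sketch(w_orig, forget_grad_fn, retain_grad_fn, clients, rng,
                 lr=0.1, q=0.3, sigma=0.5, radius=1.0, subsample=0.1):
    """Hypothetical sketch of an RR-DU-style random-walk unlearning pass.

    1. One projected gradient *ascent* step on the forget set at the unlearning client.
    2. A Geometric(q) number of projected *descent* steps on subsampled retained data
       at clients visited by a random walk over the network.
    3. Subsampled Gaussian noise and projection onto the trust region at every step.
    """
    w = w_orig.copy()

    # Step 1: ascent on the forget set at the unlearning client.
    w = w + lr * forget_grad_fn(w)
    w = project_ball(w, w_orig, radius)

    # Step 2: Geometric(q) descent steps on retained data along a random walk.
    n_steps = rng.geometric(q)
    client = rng.choice(clients)
    for _ in range(n_steps):
        g = retain_grad_fn(client, w, subsample, rng)  # gradient on a subsample of retained data
        noise = sigma * rng.standard_normal(w.shape)   # Gaussian noise underlying the RDP-style certificate
        w = w - lr * (g + noise)
        w = project_ball(w, w_orig, radius)
        client = rng.choice(clients)                   # walk moves to another client (topology abstracted away)
    return w
```

In this sketch the network topology is abstracted into a uniform choice over `clients`; in the decentralized setting the walk would instead move only to neighbors in the fixed communication graph, which is what ties the certificate to network mixing.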
