
Regret-Optimal Federated Transfer Learning for Kernel Regression with Applications in American Option Pricing

Abstract

We propose an optimal iterative scheme for federated transfer learning, where a central planner has access to datasets $\mathcal{D}_1,\dots,\mathcal{D}_N$ for the same learning model $f_{\theta}$. Our objective is to minimize the cumulative deviation of the generated parameters $\{\theta_i(t)\}_{t=0}^T$ across all $T$ iterations from the specialized parameters $\theta^\star_1,\ldots,\theta^\star_N$ obtained for each dataset, while respecting the loss function for the model $f_{\theta(T)}$ produced by the algorithm upon halting. We allow only continual communication between each of the specialized models (nodes/agents) and the central planner (server) at each iteration (round). For the case where the model $f_{\theta}$ is a finite-rank kernel regression, we derive explicit updates for the regret-optimal algorithm. By leveraging symmetries within the regret-optimal algorithm, we further develop a nearly regret-optimal heuristic that runs with $\mathcal{O}(Np^2)$ fewer elementary operations, where $p$ is the dimension of the parameter space. Additionally, we investigate the adversarial robustness of the regret-optimal algorithm, showing that an adversary who perturbs $q$ training pairs by at most $\varepsilon>0$, across all training sets, cannot reduce the regret-optimal algorithm's regret by more than $\mathcal{O}(\varepsilon q \bar{N}^{1/2})$, where $\bar{N}$ is the aggregate number of training pairs. To validate our theoretical findings, we conduct numerical experiments in the context of American option pricing, utilizing a randomly generated finite-rank kernel.
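To make the setting concrete, the following Python sketch illustrates (but does not reproduce) the setup described above: a randomly generated finite-rank kernel via random features, the specialized ridge-regression parameters $\theta^\star_i$ for each dataset, and a simple deviation-minimizing server iteration standing in for the paper's regret-optimal recursion, whose explicit updates are derived in the paper itself. The rank $p$, ridge penalty, step size, and aggregation rule here are all illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

# Illustrative sketch only: the paper derives explicit regret-optimal updates,
# which are NOT reproduced here. This shows the finite-rank kernel regression
# setup and a toy averaging-style server iteration.

rng = np.random.default_rng(0)
p, N, T, lam, eta = 10, 3, 20, 1e-2, 0.5  # rank, datasets, rounds, penalty, step (assumed)

# Random finite-rank feature map phi: R -> R^p, so k(x, y) = phi(x)^T phi(y).
W = rng.normal(size=(p, 1))
b = rng.uniform(0, 2 * np.pi, size=p)

def phi(X):
    """Random-feature map; returns an (n, p) design matrix."""
    return np.cos(W @ X.T + b[:, None]).T

# Synthetic datasets D_1, ..., D_N and their specialized ridge solutions
# theta*_i = (Phi_i^T Phi_i + lam I)^{-1} Phi_i^T y_i.
theta_star = []
for _ in range(N):
    X = rng.uniform(-1, 1, size=(50, 1))
    y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=50)
    Phi = phi(X)
    theta_star.append(np.linalg.solve(Phi.T @ Phi + lam * np.eye(p), Phi.T @ y))
theta_star = np.stack(theta_star)  # shape (N, p)

# Toy server iteration: pull theta(t) toward the specialized parameters,
# a stand-in for the regret-optimal recursion of the paper.
theta = np.zeros(p)
for t in range(T):
    theta += eta * (theta_star.mean(axis=0) - theta)

print("deviation from each theta*_i:", np.linalg.norm(theta - theta_star, axis=1))
```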
