Regret-Optimal Federated Transfer Learning for Kernel Regression with Applications in American Option Pricing

We propose an optimal iterative scheme for federated transfer learning, where a central planner has access to datasets $\mathcal{D}_1,\dots,\mathcal{D}_N$ for the same learning model $f_\theta$. Our objective is to minimize the cumulative deviation of the generated parameters $\{\theta_i(t)\}_{t=0}^T$ across all $T$ iterations from the specialized parameters $\theta^\star_1,\dots,\theta^\star_N$ obtained for each dataset, while respecting the loss function for the model $f_{\theta(T)}$ produced by the algorithm upon halting. We only allow for continual communication between each of the specialized models (nodes/agents) and the central planner (server) at each iteration (round). For the case where the model $f_\theta$ is a finite-rank kernel regression, we derive explicit updates for the regret-optimal algorithm. By leveraging symmetries within the regret-optimal algorithm, we further develop a nearly regret-optimal heuristic that runs with $\mathcal{O}(Np^2)$ fewer elementary operations, where $p$ is the dimension of the parameter space. Additionally, we investigate the adversarial robustness of the regret-optimal algorithm, showing that an adversary which perturbs $q$ training pairs by at most $\varepsilon > 0$, across all training sets, cannot reduce the regret-optimal algorithm's regret by more than $\mathcal{O}(\varepsilon q \sqrt{\bar{N}})$, where $\bar{N}$ is the aggregate number of training pairs. To validate our theoretical findings, we conduct numerical experiments in the context of American option pricing, utilizing a randomly generated finite-rank kernel.
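
To make the objective concrete, the following is a schematic form of the regret functional described above; the notation and the trade-off weight $\lambda > 0$ are our own shorthand for illustration, and the paper's exact weighting may differ:

$$
\operatorname{Regret}\big(\{\theta_i(t)\}\big) \;=\; \sum_{t=0}^{T} \sum_{i=1}^{N} \big\lVert \theta_i(t) - \theta_i^\star \big\rVert^2 \;+\; \lambda \, \mathcal{L}\!\big(f_{\theta(T)}\big),
$$

where the first term accumulates, over all $T$ rounds, each node's deviation from its specialized parameters, and the second term enforces the loss of the model returned upon halting.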
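As a minimal sketch of the model class, not the paper's implementation, a rank-$R$ kernel $k(x,y) = \varphi(x)^\top \varphi(y)$ reduces kernel ridge regression to ridge regression on the $R$-dimensional feature map $\varphi$; the random feature weights `W`, the cosine features, and all dimensions below are assumptions for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
R, d, n = 5, 3, 200                  # kernel rank, input dim, sample size (assumed)
W = rng.standard_normal((R, d))      # random weights defining a rank-R kernel

def phi(X):
    """Feature map of a randomly generated finite-rank kernel (illustrative choice)."""
    return np.cos(X @ W.T)           # shape (n, R)

# Synthetic local dataset standing in for one node's D_i.
X = rng.standard_normal((n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

# Specialized parameters theta_i^* via ridge regression on the rank-R features.
lam = 1e-2                           # regularization strength (assumed)
Phi = phi(X)
theta_star = np.linalg.solve(Phi.T @ Phi + lam * np.eye(R), Phi.T @ y)

y_hat = phi(X) @ theta_star          # predictions of f_theta at the training inputs
```

In the federated setting described above, each node $i$ would fit its own $\theta_i^\star$ from its dataset $\mathcal{D}_i$ in this way, and the server's iterates $\theta_i(t)$ would be scored against these specialized parameters.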