Adversarial training can be used to learn models that are robust against perturbations. For linear models, it can be formulated as a convex optimization problem. Compared with methods proposed in the context of deep learning, exploiting this convex structure allows significantly faster convergence rates. Still, generic convex solvers can be inefficient for large-scale problems. Here, we propose tailored optimization algorithms for the adversarial training of linear models, which render large-scale regression and classification problems more tractable. For regression, we propose a family of solvers based on iterative ridge regression; for classification, a family of solvers based on projected gradient descent. Both families rely on extended-variable reformulations of the original problem. We illustrate their efficiency in numerical examples.
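To make the two families concrete, here is a minimal sketch of the regression case, assuming ℓ∞-bounded feature perturbations of radius delta (so the inner maximization yields the dual ℓ1 penalty on the weights, a standard closed form for linear models). The function name `adv_train_regression`, the particular quadratic majorizer, and the warm start are illustrative assumptions: this is one possible majorization-minimization instance in which each step is a closed-form weighted ridge solve, not necessarily the paper's exact algorithm.

```python
import numpy as np

def adv_train_regression(X, y, delta, n_iter=100, eps=1e-8):
    """Minimize  sum_i (|y_i - x_i' beta| + delta * ||beta||_1)^2
    by majorization-minimization: each iteration solves a weighted
    ridge regression in closed form (an "iterative ridge" scheme)."""
    n, d = X.shape
    # Warm start from a plain ridge solution.
    beta = np.linalg.solve(X.T @ X + np.eye(d), X.T @ y)
    for _ in range(n_iter):
        abs_r = np.abs(y - X @ beta) + eps   # residual magnitudes
        abs_b = np.abs(beta) + eps           # coefficient magnitudes
        u, v = abs_b.sum(), abs_r.sum()      # ||beta||_1 and sum_i |r_i|
        # Quadratic majorizer of the objective at the current iterate:
        #   sum_i w_i * r_i^2  +  sum_j lam_j * beta_j^2
        w = 1.0 + delta * u / abs_r                    # sample weights
        lam = (delta * v + n * delta**2 * u) / abs_b   # ridge weights
        XtW = X.T * w                                  # X' W
        beta = np.linalg.solve(XtW @ X + np.diag(lam), XtW @ y)
    return beta
```

For classification, a corresponding sketch under the same ℓ∞ threat model, assuming logistic loss with labels in {-1, +1}. Splitting beta into nonnegative parts is one possible extended-variable reformulation: the ℓ1 term becomes linear in the new variables, the objective becomes smooth, and projected gradient descent only needs a projection onto the nonnegative orthant. Again, `adv_train_logistic` and the fixed step size are assumptions made for illustration.

```python
import numpy as np
from scipy.special import expit  # numerically stable sigmoid

def adv_train_logistic(X, y, delta, step=0.1, n_iter=2000):
    """Minimize  sum_i log(1 + exp(-(y_i x_i' beta - delta*||beta||_1)))
    with y_i in {-1, +1}, using the split beta = bp - bm, bp, bm >= 0,
    so the l1 term is linear and PGD projects onto the orthant."""
    n, d = X.shape
    bp, bm = np.zeros(d), np.zeros(d)  # positive / negative parts
    for _ in range(n_iter):
        margin = y * (X @ (bp - bm)) - delta * (bp + bm).sum()
        s = expit(-margin)                 # -d loss_i / d margin_i
        g_beta = -X.T @ (s * y)            # gradient through x_i' beta
        g_pen = delta * s.sum()            # gradient of the l1 term
        # Gradient step followed by projection onto {z >= 0}.
        bp = np.maximum(bp - (step / n) * (g_beta + g_pen), 0.0)
        bm = np.maximum(bm - (step / n) * (-g_beta + g_pen), 0.0)
    return bp - bm
```

In both sketches the adversarial objective comes from the well-known closed form of the worst-case ℓ∞ perturbation for linear predictors; only the outer solvers are stand-ins for the tailored algorithms described in the paper.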
@article{ribeiro2025_2410.12677,
  title={Efficient Optimization Algorithms for Linear Adversarial Training},
  author={Antônio H. Ribeiro and Thomas B. Schön and Dave Zachariah and Francis Bach},
  journal={arXiv preprint arXiv:2410.12677},
  year={2025}
}