Towards a Deeper Understanding of Adversarial Losses under a
Discriminative Adversarial Network Setting
Recent work has proposed various adversarial loss functions for training either generative or discriminative models. Yet, it remains unclear which types of functions constitute valid adversarial losses, and how these loss functions perform against one another. In this paper, we aim to gain a deeper understanding of adversarial losses by decoupling the effects of their component functions and regularization terms. We first derive in theory some necessary and sufficient conditions on the component functions under which the adversarial loss is a divergence-like measure between the data and the model distributions. To systematically compare different adversarial losses, we then propose a new, simple comparative framework, dubbed DANTest, based on discriminative adversarial networks (DANs). With this framework, we evaluate an extensive set of adversarial losses formed by combining different component functions and regularization approaches. Our theoretical and empirical results can together serve as a reference for choosing or designing adversarial training objectives in future research.
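To make the "component function" decomposition concrete, here is a minimal sketch (our own illustration, not code from the paper): many adversarial discriminator losses can be written as L_D = E_real[f(D(x))] + E_fake[g(D(x))], where only the component functions f and g change between losses. The example below instantiates this template with two standard choices, the classic non-saturating GAN loss and the hinge loss; all names and toy scores are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Classic GAN components: f(d) = -log(sigmoid(d)), g(d) = -log(1 - sigmoid(d)),
# applied to raw (pre-sigmoid) discriminator scores d.
gan_components = (
    lambda d: -np.log(sigmoid(d)),
    lambda d: -np.log(1.0 - sigmoid(d)),
)

# Hinge components: f(d) = max(0, 1 - d), g(d) = max(0, 1 + d).
hinge_components = (
    lambda d: np.maximum(0.0, 1.0 - d),
    lambda d: np.maximum(0.0, 1.0 + d),
)

def discriminator_loss(real_scores, fake_scores, components):
    """Generic adversarial loss: E_real[f(D(x))] + E_fake[g(D(x))]."""
    f, g = components
    return f(real_scores).mean() + g(fake_scores).mean()

# Toy discriminator scores standing in for D's outputs on real/fake batches.
rng = np.random.default_rng(0)
real = rng.normal(1.0, 1.0, size=1000)
fake = rng.normal(-1.0, 1.0, size=1000)

print(discriminator_loss(real, fake, gan_components))
print(discriminator_loss(real, fake, hinge_components))
```

Swapping the `components` tuple is all it takes to switch losses, which is exactly the kind of decoupling that lets component functions and regularization terms be compared independently.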