- Faster Single-loop Algorithms for Minimax Optimization without Strong Concavity
Authors: Junchi Yang, Antonio Orvieto, Aurelien Lucchi, Niao He
Abstract: Gradient descent ascent (GDA), the simplest single-loop algorithm for nonconvex minimax optimization, is widely used in practical applications such as generative adversarial networks (GANs) and adversarial training. Despite its desirable simplicity, recent work shows inferior convergence rates of GDA in theory, even assuming strong concavity of the objective on one side. This paper establishes new convergence results for two alternative single-loop algorithms, alternating GDA and smoothed GDA, under the mild assumption that the objective satisfies the Polyak-Łojasiewicz (PL) condition in one variable. We prove that, to find an ε-stationary point, (i) alternating GDA and its stochastic variant (without mini batch) respectively require O(κ²ε⁻²) and O(κ⁴ε⁻⁴) iterations, while (ii) smoothed GDA and its stochastic variant (without mini batch) respectively require O(κε⁻²) and O(κ²ε⁻⁴) iterations. The latter greatly improves over vanilla GDA and gives the hitherto best known complexity results among single-loop algorithms under similar settings. We further showcase the empirical efficiency of these algorithms in training GANs and robust nonlinear regression.
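
A minimal sketch of the three update rules discussed above, on a toy quadratic minimax problem of my own choosing (the objective, step sizes, and the smoothing hyperparameters `p` and `beta` are illustrative assumptions, not the paper's experimental setup). The smoothed variant follows the primal-smoothing idea of descending on f(x, y) + (p/2)‖x − z‖² with an auxiliary anchor z, which is my reading of the technique rather than the paper's exact algorithm.

```python
import numpy as np

# Toy problem: min_x max_y f(x, y) = 0.5*x^2 + 2*x*y - 0.5*y^2.
# The objective is strongly concave (hence PL) in y, matching the paper's setting.
def grad_x(x, y):
    return x + 2.0 * y

def grad_y(x, y):
    return 2.0 * x - y

def simultaneous_gda(x, y, lr_x=0.1, lr_y=0.1, iters=200):
    """Vanilla (simultaneous) GDA: both steps use gradients at the old iterate (x_t, y_t)."""
    for _ in range(iters):
        x, y = x - lr_x * grad_x(x, y), y + lr_y * grad_y(x, y)
    return x, y

def alternating_gda(x, y, lr_x=0.1, lr_y=0.1, iters=200):
    """Alternating GDA: the ascent step uses the freshly updated x_{t+1}."""
    for _ in range(iters):
        x = x - lr_x * grad_x(x, y)
        y = y + lr_y * grad_y(x, y)  # note: evaluated at the new x
    return x, y

def smoothed_gda(x, y, lr_x=0.1, lr_y=0.1, p=1.0, beta=0.5, iters=200):
    """Smoothed-GDA-style update (assumed form): descend on
    f(x, y) + (p/2)*||x - z||^2, then ascend in y, then let the anchor z track x."""
    z = x
    for _ in range(iters):
        x = x - lr_x * (grad_x(x, y) + p * (x - z))
        y = y + lr_y * grad_y(x, y)
        z = z + beta * (x - z)
    return x, y

def stationarity(x, y):
    """Gradient norm of f at (x, y); small values indicate an approximate stationary point."""
    return np.hypot(grad_x(x, y), grad_y(x, y))

if __name__ == "__main__":
    x0, y0 = 3.0, -2.0
    for name, algo in [("simultaneous GDA", simultaneous_gda),
                       ("alternating GDA", alternating_gda),
                       ("smoothed GDA", smoothed_gda)]:
        x, y = algo(x0, y0)
        print(f"{name:>16}: ||grad f|| = {stationarity(x, y):.2e}")
```

On this toy problem all three runs converge to the unique saddle point at the origin; the point of the sketch is only the structural difference between the updates, not the complexity comparison established in the paper.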