Graduate Seminar: Quoc Tran-Dinh
Statistics & Operations Research Dept., UNC-CH
Efficient Stochastic Gradient-Based Algorithms with Biased Variance-Reduced Estimators
In this talk, we discuss recent progress in stochastic gradient-based methods that use biased variance-reduced estimators to approximate a stationary point or a KKT point of non-convex problems, including stochastic non-convex optimization, stochastic compositional optimization, and stochastic minimax problems. More specifically, we introduce a new class of hybrid biased variance-reduced estimators that combines the well-known SARAH estimator (Nguyen et al. (2017)) with the classical SGD estimator. We investigate the properties of this estimator class and draw connections with existing ones. Next, we develop a new stochastic gradient-based algorithm to solve a class of composite non-convex optimization problems covering both finite-sum and expectation settings. Unlike several existing methods, our algorithm has only a single loop and does not take snapshots. It achieves the best-known oracle complexity bounds under both constant and adaptive learning rates. We also discuss several variants of our algorithms. Then, we demonstrate the flexibility of our estimators by applying them to approximate a stationary point of non-convex compositional optimization problems as well as a KKT point of minimax problems. In all cases, our methods use a single loop and achieve state-of-the-art oracle complexity. Finally, we illustrate the proposed algorithms on several numerical examples using common, publicly available datasets.
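To fix ideas, a minimal sketch of a hybrid biased variance-reduced estimator in the spirit described above, written here under the assumption of a single-sample update with a mixing weight beta_t in [0,1] (notation chosen for illustration, not taken from the talk):

```latex
% Hybrid estimator: a convex combination of a SARAH-type recursive
% term and a plain SGD term, for a stochastic gradient \nabla f(x;\xi).
% v_{t-1} is the previous estimator; \xi_t, \zeta_t are fresh samples.
v_t \;=\; \beta_t \Big( v_{t-1} + \nabla f(x_t;\xi_t) - \nabla f(x_{t-1};\xi_t) \Big)
      \;+\; (1-\beta_t)\, \nabla f(x_t;\zeta_t).
```

Setting beta_t = 1 recovers a SARAH-type recursion, while beta_t = 0 recovers plain SGD; intermediate values trade bias for variance, which is what allows a single-loop method without snapshot (full-gradient) evaluations.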
This talk is based on joint work with D. Liu (UNC), L. Nguyen (IBM), N. Pham (UNC), and D. Phan (IBM).