Ph.D. Defense: Nhan Huu Pham
15 Nov @ 1:30 pm - 3:30 pm
NEW STOCHASTIC AND RANDOMIZED ALGORITHMS FOR NONCONVEX OPTIMIZATION IN MACHINE LEARNING
This dissertation develops new stochastic first-order methods for solving nonconvex optimization models that cover many applications in machine learning. First, we propose ProxSARAH, a new framework that uses the SARAH gradient estimator to build algorithms for stochastic composite nonconvex problems. Our analysis shows that these methods achieve the best-known convergence rates and match the lower-bound complexity; a sketch of the underlying estimator is given below. We also provide extensive numerical experiments to illustrate the advantages of our methods over existing ones.

Next, we study the policy gradient problem in reinforcement learning and propose a new proximal hybrid stochastic policy gradient algorithm, called ProxHSPGA, which uses a new policy gradient estimator built from two different estimators (also sketched below). The new algorithm solves the general composite policy optimization problem, which can include regularization or constraints on the policy parameters, and achieves the best-known sample complexity among existing methods. Our experiments on both discrete and continuous control tasks confirm that the proposed methods are indeed advantageous over existing ones.
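To make the ProxSARAH paragraph concrete, here is a minimal sketch of a SARAH-type recursive gradient estimator combined with a proximal step, written for a toy composite problem. This is not the dissertation's implementation: the choice of g as an l1 regularizer, the step size eta, the epoch/inner-loop sizes, and the helper names (prox_sarah_sketch, soft_threshold) are all assumptions for illustration.

```python
import numpy as np

def soft_threshold(z, lam):
    # Proximal operator of lam * ||z||_1 (illustrative choice of g).
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def prox_sarah_sketch(grad_i, prox_g, x0, n, eta=0.1, epochs=5, inner=50, rng=None):
    """Hedged sketch of a SARAH-type proximal method.

    grad_i(x, i): stochastic gradient of the i-th component f_i at x.
    prox_g(z, eta): proximal operator of eta * g.
    """
    rng = np.random.default_rng() if rng is None else rng
    x_prev = x0.copy()
    for _ in range(epochs):
        # Snapshot: full gradient at the start of each outer loop.
        v = np.mean([grad_i(x_prev, i) for i in range(n)], axis=0)
        x = prox_g(x_prev - eta * v, eta)
        for _ in range(inner):
            i = rng.integers(n)
            # SARAH recursion: correct the estimator with a gradient difference.
            v = grad_i(x, i) - grad_i(x_prev, i) + v
            x_prev, x = x, prox_g(x - eta * v, eta)
    return x

if __name__ == "__main__":
    # Toy usage: least squares with an l1 regularizer,
    # f_i(x) = 0.5 * (a_i @ x - b_i)**2, g(x) = lam * ||x||_1.
    rng = np.random.default_rng(0)
    n, d, lam = 100, 10, 0.01
    A, b = rng.standard_normal((n, d)), rng.standard_normal(n)
    grad_i = lambda x, i: (A[i] @ x - b[i]) * A[i]
    prox_g = lambda z, eta: soft_threshold(z, eta * lam)
    x = prox_sarah_sketch(grad_i, prox_g, np.zeros(d), n, rng=rng)
```

The actual ProxSARAH framework uses mini-batches and particular step-size choices to attain the stated complexity bounds; the loop above only conveys the estimator's structure.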
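The "two different estimators" behind ProxHSPGA can likewise be summarized in one line: a SARAH-style recursive term mixed with an unbiased REINFORCE-style estimate. The sketch below is a simplification; the mixing weight beta, the function name, and the omission of the importance weights that correct for the changing policy are all illustrative assumptions.

```python
def hybrid_pg_estimate(v_prev, g_unbiased, g_new, g_old, beta=0.5):
    # Convex combination of two policy gradient estimators:
    #   (i)  a SARAH-style recursion, v_prev + (g_new - g_old), where
    #        g_new and g_old are gradient estimates at the current and
    #        previous policy parameters on the same trajectories
    #        (importance weighting omitted in this sketch);
    #   (ii) an unbiased REINFORCE-style estimate, g_unbiased.
    return beta * (v_prev + (g_new - g_old)) + (1.0 - beta) * g_unbiased
```

In the composite setting described above, an estimate of this kind would then drive a proximal step that handles the regularizer or constraint on the policy parameters.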
Lastly, we focus on a new machine learning paradigm called federated learning (FL), in which multiple agents collaboratively train a machine learning model in a distributed fashion. We propose two new algorithms, FedDR and asyncFedDR, for solving the nonconvex composite optimization problem in FL, including problems with convex regularizers. Our algorithms rely on a novel combination of a nonconvex Douglas-Rachford splitting method, randomized block-coordinate strategies, and asynchronous implementation. Our convergence analysis shows that the new algorithms match the communication complexity lower bound up to a constant factor under standard assumptions. Numerical experiments on various synthetic and real datasets illustrate the advantages of our methods over existing ones; a rough sketch of the splitting update follows.
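As a rough illustration of how Douglas-Rachford splitting appears in a federated setting, here is a minimal synchronous sketch of one communication round. The update order, the relaxation parameter alpha, and all function names are assumptions for illustration; this does not reproduce FedDR's exact schedule, and it omits the randomized block-coordinate selection and the asynchrony that distinguish FedDR and asyncFedDR.

```python
import numpy as np

def dr_style_round(y, x, xbar, prox_fi, prox_g, alpha=1.0, eta=0.5):
    """One synchronous Douglas-Rachford-style round over n users.

    y, x: per-user iterates, shape (n, d); xbar: server model, shape (d,).
    prox_fi(i, z, eta): prox of eta * f_i (a local training step).
    prox_g(z, eta): prox of eta * g (handles the convex regularizer).
    """
    n = y.shape[0]
    x_hat = np.empty_like(x)
    for i in range(n):
        y[i] = y[i] + alpha * (xbar - x[i])   # pull toward the server model
        x[i] = prox_fi(i, y[i], eta)          # local proximal update
        x_hat[i] = 2.0 * x[i] - y[i]          # reflected point sent to the server
    xbar = prox_g(x_hat.mean(axis=0), eta)    # server aggregates, then applies prox of g
    return y, x, xbar
```

In the algorithms as described, only a randomly chosen block of users updates in each round (FedDR), and asyncFedDR further drops the synchronization barrier; the loop above is only the synchronous skeleton.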