STOR Colloquium: Michael O’Neill, Lehigh University
18 Jan @ 3:30 pm - 4:30 pm
Algorithms for Large-Scale Nonconvex Optimization with Complexity Guarantees
Designing algorithms with optimal complexity guarantees has a long history of producing major advances in optimization. These breakthroughs include Nesterov’s accelerated gradient algorithm for first-order convex optimization and interior-point methods for linear programming. Recent interest in nonconvex optimization, driven in part by the rise of nonconvex machine learning models, has led to the proposal of numerous nonconvex optimization methods with optimal worst-case complexity guarantees. This talk will present a few of these methods.
In the first part of this talk, algorithms based on the Newton-Conjugate Gradient (CG) method are described. Slight modifications to the classic Newton-CG algorithm enable optimal worst-case complexity guarantees to be proven. In addition, an interior-point Newton-CG method for bound-constrained optimization with a worst-case complexity guarantee is presented.
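For a rough sense of the Newton-CG building block behind such methods, the sketch below (a generic textbook-style illustration, not the speaker’s algorithm) computes a search direction by running CG on the Newton system H d = -g and falls back on a negative-curvature direction when one is detected; the tolerances, function names, and the small demo problem are all hypothetical.

```python
import numpy as np

def newton_cg_direction(g, hess_vec, cg_tol=0.5, max_iters=100, eps=1e-8):
    # Approximately solve H d = -g with conjugate gradients. If CG encounters
    # (near-)non-positive curvature, return the curvature direction on the
    # first iteration (where p = -g is a descent direction) or the iterate
    # built so far -- a standard Newton-CG safeguard for nonconvex problems.
    d = np.zeros_like(g)
    r = g.copy()                      # residual of H d + g = 0
    p = -r                            # initial CG direction
    for k in range(max_iters):
        Hp = hess_vec(p)
        curv = p @ Hp
        if curv <= eps * (p @ p):     # non-positive curvature detected
            return p if k == 0 else d
        alpha = (r @ r) / curv
        d = d + alpha * p
        r_new = r + alpha * Hp
        if np.linalg.norm(r_new) <= cg_tol * np.linalg.norm(g):
            break                     # inexact Newton accuracy reached
        beta = (r_new @ r_new) / (r @ r)
        p = -r_new + beta * p
        r = r_new
    return d

# Demo on a nonconvex function f(x) = x0^2 - x1^2 + x1^4 / 4.
def grad(x):
    return np.array([2.0 * x[0], -2.0 * x[1] + x[1] ** 3])

def hess_vec_at(x):
    H = np.array([[2.0, 0.0], [0.0, -2.0 + 3.0 * x[1] ** 2]])
    return lambda p: H @ p

x = np.array([1.0, 0.5])
print("search direction:", newton_cg_direction(grad(x), hess_vec_at(x)))
```

The complexity-optimal variants discussed in the talk add further ingredients, such as capped CG iteration limits and explicit negative-curvature exploitation, on top of this basic scheme.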
In the second part of this talk, Sequential Quadratic Programming (SQP) algorithms for stochastic optimization with deterministic nonlinear equality constraints are proposed. This framework enables machine learning practitioners to encode complicated constraints into the training process to produce superior results in application areas such as physics-informed neural networks and algorithmic fairness. The first worst-case complexity result for a fully stochastic SQP algorithm is presented.
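For intuition only, here is a minimal sketch of one stochastic SQP iteration for min E[f(x)] subject to c(x) = 0, assuming an identity Hessian model and a fixed step size; the algorithms in the talk use more sophisticated, provably convergent step-size and merit-parameter choices, and all names and the demo problem below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_sqp_step(x, stoch_grad, c, jac, alpha=0.1):
    # One simplified stochastic SQP step. With Hessian model H = I, the
    # step d and multiplier estimate y solve the KKT system
    #     [ H  J^T ] [d]   [-g]
    #     [ J   0  ] [y] = [-c],
    # where g is a stochastic gradient of f and J the constraint Jacobian.
    g = stoch_grad(x)
    J = jac(x)
    m, n = J.shape
    K = np.block([[np.eye(n), J.T],
                  [J, np.zeros((m, m))]])
    rhs = np.concatenate([-g, -c(x)])
    d = np.linalg.solve(K, rhs)[:n]   # primal search direction
    return x + alpha * d              # fixed step size; adaptive in practice

# Demo: minimize E[f] for f(x) = ||x - b||^2 with noisy gradients,
# subject to the deterministic constraint x[0] + x[1] = 1.
b = np.array([2.0, 0.0])
stoch_grad = lambda x: 2.0 * (x - b) + 0.01 * rng.standard_normal(2)
c = lambda x: np.array([x[0] + x[1] - 1.0])
jac = lambda x: np.array([[1.0, 1.0]])

x = np.zeros(2)
for _ in range(200):
    x = stochastic_sqp_step(x, stoch_grad, c, jac)
print("solution:", x, "constraint violation:", c(x))
```

Note that only the objective gradient is stochastic here; the constraints and their Jacobian are evaluated exactly, matching the deterministic-constraint setting of the talk.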