Data-Driven Perspectives on First-Order Methods for Convex Optimization

PACM Colloquium
Apr 20, 2026
4:30 - 5:30 pm
214 Fine Hall

Abstract

First-order methods are widely used in large-scale convex optimization, yet providing sharp guarantees on their convergence behavior remains a key challenge. In many practical settings, the same optimization problem is solved repeatedly with varying parameters, and these instances are naturally modeled as draws from an unknown distribution. In this talk, I present two complementary approaches that use convergence behavior observed on a limited number of problem instances to improve both the analysis and the design of first-order methods. The first combines the performance estimation problem (PEP) with Wasserstein distributionally robust optimization into a convex semidefinite program that produces probabilistic convergence bounds, bridging worst-case and average-case analysis. The second uses PAC-Bayes theory to learn algorithm parameters, such as step-sizes and warm-starts, with provable generalization guarantees rooted in the convergence properties of the underlying operators. Together, these lines of work connect classical tools from convex analysis and operator theory with ideas from statistical learning, offering both tighter performance certificates and a principled approach to algorithm tuning.
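As a schematic illustration of the PEP framework (a sketch under standard assumptions, not a formulation taken from the talk): for $N$ steps of gradient descent with fixed step size $\gamma$ on an $L$-smooth convex function $f$ with minimizer $x^\star$ and $\|x_0 - x^\star\| \le R$, the worst-case suboptimality is the value of

\[
\begin{aligned}
\max_{f,\, x_0} \quad & f(x_N) - f(x^\star) \\
\text{s.t.} \quad & f \text{ convex and } L\text{-smooth}, \quad \|x_0 - x^\star\| \le R, \\
& x_{k+1} = x_k - \gamma \nabla f(x_k), \qquad k = 0, \dots, N-1.
\end{aligned}
\]

Although this problem is posed over an infinite-dimensional class of functions, replacing the smoothness and convexity constraint with the finite set of interpolation inequalities among the iterates reduces it to a tractable semidefinite program; the first approach above combines such certificates with Wasserstein distributionally robust optimization over the distribution of problem instances.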