IDeAS Seminar: Nonconvex Recovery of Low-Complexity Models

IDeAS
Feb 8, 2017
2:30 pm
Fine Hall 214

Nonconvex optimization plays an important role in a wide range of areas of science and engineering — from learning feature representations for visual classification, to reconstructing images in biology, medicine and astronomy, to disentangling spikes from multiple neurons. The worst-case theory for nonconvex optimization is dismal: in general, even guaranteeing a local minimum is NP-hard. However, in these and other applications, very simple iterative methods such as gradient descent often perform surprisingly well.

In this talk, I will discuss examples of nonconvex optimization problems that can be solved to global optimality using simple iterative methods, which succeed independent of initialization. These include variants of the sparse dictionary learning problem, image recovery from certain types of phaseless measurements, and variants of the sparse blind deconvolution problem. These problems possess a characteristic structure, in which (i) all local minima are global, and (ii) the energy landscape does not have any “flat” saddle points. For each of the aforementioned problems, this geometric structure allows us to obtain new types of performance guarantees. I will motivate these problems from applications in imaging and computer vision, and describe how this viewpoint leads to new approaches to analyzing electron microscopy data.
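To give a concrete (if simplified) feel for this phenomenon, the sketch below runs plain gradient descent from a random initialization on a toy real-valued phase retrieval problem — recovering a signal from squared (phaseless) linear measurements. This is only an illustrative example in the spirit of the talk, not the algorithms or guarantees it presents; the problem sizes, step size, and iteration count are assumed, illustrative choices.

```python
# Minimal sketch: gradient descent on a toy phaseless-measurement recovery
# problem, f(x) = 1/(4m) * sum_i ((a_i^T x)^2 - y_i)^2.
# All parameters below are illustrative assumptions, not values from the talk.
import numpy as np

rng = np.random.default_rng(0)
n, m = 10, 200                       # signal dimension, number of measurements
x_true = rng.standard_normal(n)
x_true /= np.linalg.norm(x_true)     # normalize the ground-truth signal
A = rng.standard_normal((m, n))      # Gaussian sensing vectors a_i
y = (A @ x_true) ** 2                # phaseless (squared) measurements

def grad(x):
    # Gradient of f: (1/m) * sum_i ((a_i^T x)^2 - y_i) (a_i^T x) a_i
    Ax = A @ x
    return A.T @ ((Ax ** 2 - y) * Ax) / m

x = rng.standard_normal(n)
x /= np.linalg.norm(x)               # random initialization on the sphere
for _ in range(3000):
    x -= 0.1 * grad(x)               # plain gradient descent, fixed step size

# The measurements are invariant to a global sign flip, so compare up to sign.
err = min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true))
print("relative error:", err / np.linalg.norm(x_true))
```

In runs like this, the random initialization typically converges to (a sign flip of) the true signal, consistent with the benign landscape picture: no spurious local minima and no flat saddle points for gradient descent to get stuck at.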

Joint work with Ju Sun (Stanford), Qing Qu (Columbia), Yuqian Zhang (Columbia), Yenson Lau (Columbia), Sky Cheung (Columbia), and Abhay Pasupathy (Columbia).

John Wright is an Associate Professor in the Electrical Engineering Department at Columbia University. He received his PhD in Electrical Engineering from the University of Illinois at Urbana-Champaign in 2009, and was with Microsoft Research from 2009 to 2011. His research is in the area of high-dimensional data analysis. In particular, his recent research has focused on developing algorithms for robustly recovering structured signal representations from incomplete and corrupted observations, and applying them to practical problems in imaging and vision. His work has received a number of awards and honors, including the 2009 Lemelson-Illinois Prize for Innovation for his work on face recognition, the 2009 UIUC Martin Award for Excellence in Graduate Research, a 2008-2010 Microsoft Research Fellowship, the Best Paper Award from the Conference on Learning Theory (COLT) in 2012, and the 2015 PAMI TC Young Researcher Award.