In this talk we consider the problem of super-resolving the line spectrum of a multisinusoidal signal from a finite number of samples, some of which may be completely corrupted. Measurements of this form can be modeled as an additive mixture of a sinusoidal component and a sparse component. We propose to demix the two components and super-resolve the spectrum of the multisinusoidal signal by solving a convex program. Our main theoretical result is that, up to logarithmic factors, this approach succeeds with high probability for a number of spectral lines that is linear in the number of measurements, even if a constant fraction of the data are outliers. We show that the method can be implemented via semidefinite programming, explain how to adapt it in the presence of dense perturbations, and explore its connection to atomic-norm denoising. In addition, we propose a fast greedy demixing method that performs well empirically when coupled with a local nonconvex-optimization step.
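To make the setup concrete, the snippet below simulates the measurement model (a superposition of complex sinusoids plus sparse corruptions) and sketches a simple alternating greedy scheme in that spirit: peak picking on a frequency grid followed by hard thresholding of the residual. This is an illustrative toy, not the convex program or the exact greedy method from the talk, and all function and variable names are hypothetical.

```python
import numpy as np

# --- Measurement model: y = multisinusoidal signal x + sparse outliers z ---
rng = np.random.default_rng(0)
n = 64                                     # number of samples
freqs = np.array([0.12, 0.31, 0.5])        # normalized frequencies in [0, 1)
amps = np.array([1.0, 0.8, 0.6])           # amplitudes of the spectral lines
t = np.arange(n)
x = (amps * np.exp(2j * np.pi * np.outer(t, freqs))).sum(axis=1)

k = n // 10                                # a constant fraction of outliers
idx = rng.choice(n, size=k, replace=False)
z = np.zeros(n, dtype=complex)
z[idx] = rng.standard_normal(k) + 1j * rng.standard_normal(k)
y = x + z                                  # corrupted measurements


def greedy_demix(y, n_lines, n_outliers, n_iters=10, grid_size=4096):
    """Toy alternating demixing: grid-based peak picking + hard thresholding."""
    n = len(y)
    f_grid = np.arange(grid_size) / grid_size
    F = np.exp(2j * np.pi * np.outer(np.arange(n), f_grid))  # dictionary of sinusoids
    z_hat = np.zeros(n, dtype=complex)
    for _ in range(n_iters):
        r = y - z_hat
        # Pick n_lines well-separated periodogram peaks of the cleaned signal.
        corr = np.abs(F.conj().T @ r)
        sel = []
        for g in np.argsort(corr)[::-1]:
            fg = f_grid[g]
            if all(min(abs(fg - f_grid[s]), 1 - abs(fg - f_grid[s])) > 1.0 / n
                   for s in sel):
                sel.append(g)
            if len(sel) == n_lines:
                break
        # Least-squares fit of the selected sinusoids.
        A = F[:, sel]
        coef, *_ = np.linalg.lstsq(A, r, rcond=None)
        x_hat = A @ coef
        # Update the sparse component: keep the n_outliers largest residuals.
        res = y - x_hat
        z_hat = np.zeros(n, dtype=complex)
        support = np.argsort(np.abs(res))[::-1][:n_outliers]
        z_hat[support] = res[support]
    return x_hat, z_hat, f_grid[sel]


x_hat, z_hat, f_hat = greedy_demix(y, n_lines=3, n_outliers=k)
```

In this toy example the spectral lines are well separated and the outlier fraction is modest, so the greedy scheme recovers the frequencies to within the grid resolution; the talk's local nonconvex-optimization step would then refine the estimates off the grid.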
Carlos Fernandez-Granda is an Assistant Professor of Mathematics and Data Science at the Center for Data Science and the Courant Institute of Mathematical Sciences at NYU. His research focuses on developing and analyzing optimization-based methods to tackle inverse problems that arise in applications such as neuroscience, computer vision and medical imaging. Before joining NYU, he completed his PhD under the supervision of Emmanuel Candes at Stanford University and then spent a year at Google, where he worked on techniques to process neural data.