IDeAS

Spring 2017

IDeAS Seminar: Nonconvex Recovery of Low-Complexity Models

Speaker: 
John Wright, Columbia University
Date: 
Feb 8 2017 - 2:30pm
Event type: 
IDeAS
Room: 
224 Fine Hall
Abstract: 

Nonconvex optimization plays an important role in a wide range of areas of science and engineering — from learning feature representations for visual classification, to reconstructing images in biology, medicine and astronomy, to disentangling spikes from multiple neurons. The worst-case theory for nonconvex optimization is dismal: in general, even guaranteeing a local minimum is NP-hard. However, in these and other applications, very simple iterative methods such as gradient descent often perform surprisingly well.

In this talk, I will discuss examples of nonconvex optimization problems that can be solved to global optimality using simple iterative methods, which succeed independent of initialization. These include variants of the sparse dictionary learning problem, image recovery from certain types of phaseless measurements, and variants of the sparse blind deconvolution problem. These problems possess a characteristic structure, in which (i) all local minima are global, and (ii) the energy landscape does not have any “flat” saddle points. For each of the aforementioned problems, this geometric structure allows us to obtain new types of performance guarantees. I will motivate these problems from applications in imaging and computer vision, and describe how this viewpoint leads to new approaches to analyzing electron microscopy data.
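
The landscape property described above can be phrased schematically as follows (a sketch in generic notation, not the precise statements of the talk): for a suitable objective f over the relevant parameter space,

every local minimizer of f is a global minimizer, and
\lambda_{\min}\big(\nabla^2 f(z)\big) < 0 \quad \text{at every saddle point } z,

so every saddle point admits a direction of strictly negative curvature. Under these two conditions, second-order methods (and, under additional conditions, randomly initialized gradient descent) escape all saddle points and reach a global minimizer regardless of initialization.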

Joint work with Ju Sun (Stanford), Qing Qu (Columbia), Yuqian Zhang (Columbia), Yenson Lau (Columbia), Sky Cheung (Columbia), and Abhay Pasupathy (Columbia).

John Wright is an Associate Professor in the Electrical Engineering Department at Columbia University. He received his PhD in Electrical Engineering from the University of Illinois at Urbana-Champaign in 2009, and was with Microsoft Research from 2009-2011. His research is in the area of high-dimensional data analysis. In particular, his recent research has focused on developing algorithms for robustly recovering structured signal representations from incomplete and corrupted observations, and applying them to practical problems in imaging and vision. His work has received a number of awards and honors, including the 2009 Lemelson-Illinois Prize for Innovation for his work on face recognition, the 2009 UIUC Martin Award for Excellence in Graduate Research, a 2008-2010 Microsoft Research Fellowship, the Best Paper Award from the Conference on Learning Theory (COLT) in 2012, and the 2015 PAMI TC Young Researcher Award.

IDeAS Seminar: Synchronization over Cartan motion groups via contraction

Speaker: 
Nir Sharon, Princeton University
Date: 
Feb 15 2017 - 2:30pm
Event type: 
IDeAS
Room: 
224 Fine Hall
Abstract: 

The mathematical problem of group synchronization deals with the question of how to estimate an unknown set of group elements from a set of their mutual relations. This problem appears as an important step in solving many real-world problems, such as Structure from Motion (SfM) in vision or pose graph optimization (estimating the positions and orientations of mobile robots) in robotics. In this talk, we present a novel solution for synchronization over the class of non-compact Cartan motion groups, which includes the important special case of rigid motions. Our method is based on the idea of group contraction, a compactification process originating in relativistic mechanics. We show how to construct such a solution for synchronization, analyze some of its theoretical aspects, and numerically illustrate its advantages over some current state-of-the-art synchronization methods on both synthetic and real data.
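
For orientation, the synchronization problem can be written schematically as follows (generic notation, not necessarily that of the talk): given a graph with edge set E and noisy measurements g_{ij} of the ratios g_i g_j^{-1}, estimate the group elements themselves, for instance via

\hat{g}_1, \dots, \hat{g}_n \;=\; \arg\min_{g_1, \dots, g_n \in G} \; \sum_{(i,j) \in E} \rho\big(g_{ij},\, g_i g_j^{-1}\big),

where \rho is a discrepancy measure on the group G. The solution is determined only up to a global group element, and for rigid motions G = SE(d), a Cartan motion group.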

IDeAS Seminar: Rapid solution of the cryo-EM reconstruction problem by frequency marching

Speaker: 
Marina Spivak, Simons Center for Data Analysis
Date: 
Feb 22 2017 - 2:30pm
Event type: 
IDeAS
Room: 
224 Fine Hall
Abstract: 

Determining the three-dimensional structure of proteins and protein complexes at atomic resolution is a fundamental task in structural biology. Over the last decade, remarkable progress has been made using "single particle" cryo-electron microscopy (cryo-EM) for this purpose. The reconstruction of a high-resolution image from cryo-EM data is typically formulated as a nonlinear, non-convex optimization problem for hundreds of thousands of unknown parameters.

This leads to a very CPU-intensive task, limiting both the number of particle images that can be processed and the number of independent reconstructions that can be carried out for statistical validation. Here, we propose a deterministic method that, given a very low-resolution initial guess, produces a high-resolution reconstruction with a predictable and relatively modest amount of computational effort.

Marina Spivak received her B.A. from UC Berkeley and her Ph.D. in Computer Science from NYU. Her research interests include machine learning and questions in bioinformatics. Her postdoctoral research at the University of Washington focused on the application of machine learning techniques to shotgun mass spectrometry protein sequencing. She joined the Simons Foundation Center for Computational Biology in 2013, where her current research topic is the development of algorithms for cryo-electron microscopy data analysis.

IDeAS Seminar: A Taste of Julia for Scientific Computing

Speaker: 
Stefan Karpinski, Julia Computing
Date: 
Mar 1 2017 - 2:30pm
Event type: 
IDeAS
Room: 
224 Fine Hall
Abstract: 

Bridging cultures that have often been distant, Julia combines expertise from the diverse fields of computer science and computational science to create a new approach to numerical computing. Julia is designed to be easy and fast and questions notions generally held to be “laws of nature” by practitioners of numerical computing:

1. High-level dynamic programs have to be slow.

2. One must prototype in one language and then rewrite in another language for speed or deployment.

3. There are parts of a system appropriate for the programmer, and other parts that are best left untouched as they have been built by the experts.

This talk will be an introduction to the Julia programming language and its design—a dance between specialization and abstraction. Specialization allows for custom treatment. Multiple dispatch, a technique from computer science, picks the right algorithm for the right circumstance. Abstraction, which is what good computation is really about, recognizes what remains the same after differences are stripped away. Abstractions in mathematics are captured as code through another technique from computer science, generic programming. Julia shows that one can achieve machine performance without sacrificing human convenience.
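
The dispatch idea can be illustrated outside Julia as well. The toy sketch below implements a tiny multiple-dispatch registry in Python purely for illustration; all names here (register, collide, Asteroid, Spaceship) are made up, and Julia's built-in dispatch is far more general and is resolved efficiently by the compiler.

# A toy multiple-dispatch registry: implementations are keyed on the tuple of argument types.
dispatch_table = {}

def register(*types):
    """Register an implementation for a specific tuple of argument types."""
    def wrap(fn):
        dispatch_table[types] = fn
        return fn
    return wrap

def collide(a, b):
    """Select the implementation matching the runtime types of both arguments."""
    fn = dispatch_table.get((type(a), type(b)))
    if fn is None:
        raise TypeError(f"no method for ({type(a).__name__}, {type(b).__name__})")
    return fn(a, b)

class Asteroid: pass
class Spaceship: pass

@register(Asteroid, Asteroid)
def _(a, b): return "asteroids bounce"

@register(Asteroid, Spaceship)
def _(a, b): return "spaceship takes damage"

print(collide(Asteroid(), Spaceship()))  # -> "spaceship takes damage"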

Stefan Karpinski is one of the co-creators of the Julia programming language and a co-founder of Julia Computing, Inc. He holds a part-time position as a Research Engineer at NYU as part of the Moore-Sloan Data Science Initiative and has previously worked as a software engineer and data scientist at Akamai, Citrix, and Etsy.

IDeAS Seminar: Structure-Aware Dictionary Learning for Graph Signals

Speaker: 
Yael Yankelevsky, Technion
Date: 
Mar 3 2017 - 2:30pm
Event type: 
IDeAS
Room: 
224 Fine Hall
Abstract: 

Dictionary Learning techniques aim to find sparse signal representations that capture prominent characteristics in the given data. For signals residing on non-Euclidean topologies, represented by weighted graphs, an additional challenge is incorporating the underlying geometric structure of the data domain into the learning process. We introduce an approach that aims to infer and preserve the local intrinsic geometry in both dimensions of the data. Combining ideas from spectral graph theory, manifold learning and sparse representations, our proposed algorithm simultaneously takes into account the underlying graph topology as well as the data manifold structure. The efficiency of this approach is demonstrated on a variety of applications, including sensor network data completion and enhancement, image structure inference, and challenging multi-label classification problems.
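
As a rough schematic (the objective used in the talk may differ in its details), dictionary learning with such structure awareness augments the standard sparse-coding objective with Laplacian-based smoothness terms on both dimensions of the data matrix Y:

\min_{D, X} \; \|Y - DX\|_F^2 \;+\; \alpha\, \mathrm{tr}\big(D^{\top} L\, D\big) \;+\; \beta\, \mathrm{tr}\big(X L_c X^{\top}\big)
\quad \text{s.t.} \quad \|x_i\|_0 \le T \ \ \forall i,

where L is a graph Laplacian on the signal domain (rows of Y), L_c a Laplacian encoding similarity between data points (columns of Y), and T a sparsity level.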

Yael is a PhD student in the Computer Science Department at the Technion, working under the supervision of Prof. Michael Elad. Her research interests include signal and image processing, machine learning, sparse representations, inverse problems and graphical models.

IDeAS Seminar: The effectiveness of nonconvex optimization in two problems

Speaker: 
Yuxin Chen, Princeton University
Date: 
Mar 8 2017 - 2:30pm
Event type: 
IDeAS
Room: 
224 Fine Hall
Abstract: 

Many information and data science applications involve computation of the maximum likelihood estimates (MLE) over a high-dimensional space. Unfortunately, most MLEs are solutions to non-convex problems, which are in general intractable. 

Fortunately, some of these problems are not as hard as they may seem, and can be solved by optimizing the nonconvex objectives directly. This talk illustrates this observation through two problems: (1) solving a random quadratic system of equations, which spans many applications ranging from the century-old phase retrieval problem to various latent-variable models in machine learning; (2) recovering a collection of discrete variables from noisy pairwise differences, which arises when one wishes to jointly align multiple images or to retrieve the genome phases from paired sequencing reads. In both cases, the proposed methods run in linear time while achieving a statistical accuracy that is nearly un-improvable.
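
Schematically, the first problem asks to recover x from measurements of the form

y_k = \big|\langle a_k, x \rangle\big|^2, \qquad k = 1, \dots, m,

by running a spectral initialization followed by gradient-type iterations directly on a nonconvex least-squares (or maximum-likelihood) loss. This display is only a sketch of the measurement model; the precise estimators and their refinements are the subject of the talk.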

This is based on joint work with Emmanuel Candes.

Yuxin Chen is currently an assistant professor in the Department of Electrical Engineering at Princeton University. Prior to joining Princeton, he was a postdoctoral scholar in the Department of Statistics at Stanford University, and he completed his Ph.D. in Electrical Engineering at Stanford University. His research interests include high-dimensional structured estimation, convex and nonconvex optimization, statistical learning, and information theory.

IDeAS Seminar: Applications of Noncommutative Harmonic Analysis in Engineering and the Applied Sciences

Speaker: 
Gregory Chirikjian, Johns Hopkins University
Date: 
Mar 29 2017 - 2:30pm
Event type: 
IDeAS
Room: 
224 Fine Hall
Abstract: 

This talk will review the invariants, representations, discrete subgroups, and Fourier analysis for the group of rigid-body motions of three-dimensional Euclidean space. It will be shown how this theory has been applied to various problems in engineering and science including robotics, hand-eye calibration in computer vision, X-ray crystallography, and cryo-electron microscopy.
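
For orientation (in one common convention and up to normalization, rather than anything specific to this talk), the Fourier transform on the rigid-motion group SE(3) is operator-valued: for an integrable function f,

\hat{f}(p) \;=\; \int_{SE(3)} f(g)\, U(g^{-1}, p)\, dg,
\qquad
f(g) \;\propto\; \int_0^{\infty} \mathrm{tr}\big[\hat{f}(p)\, U(g, p)\big]\, p^2\, dp,

where U(\cdot, p) are unitary irreducible representations of SE(3) and dg is the Haar measure. Convolution on the group becomes operator multiplication of the transforms, which is what makes this machinery useful in the applications above.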

Gregory S. Chirikjian received undergraduate degrees from Johns Hopkins University in 1988, and the Ph.D. degree from the California Institute of Technology, Pasadena, in 1992. Since 1992, he has been on the faculty of the Department of Mechanical Engineering, Johns Hopkins University, where he has been a full professor since 2001. From 2004-2007 he served as department chair. His research interests include robotics, applications of group theory in a variety of engineering disciplines, and the mechanics of biological macromolecules. He is a 1993 National Science Foundation Young Investigator, a 1994 Presidential Faculty Fellow, and a 1996 recipient of the ASME Pi Tau Sigma Gold Medal. In 2008 he became a Fellow of the ASME, and in 2010 he became a Fellow of the IEEE. He is the author of more than 200 journal and conference papers and the primary author of three books: Engineering Applications of Noncommutative Harmonic Analysis (2001) and Stochastic Models, Information Theory, and Lie Groups, Vols. 1 and 2 (2009, 2011). In 2016 an expanded edition of his 2001 book came out as a Dover book under the new title Harmonic Analysis for Engineers and Applied Scientists.

IDeAS Seminar: Matrix Optimal Mass Transport: a Quantum Mechanical Approach

Speaker: 
Yongxin Chen, Memorial Sloan Kettering Cancer Center
Date: 
Mar 30 2017 - 2:30pm
Event type: 
IDeAS
Room: 
McDonnell Hall, Room 102A
Abstract: 

Optimal mass transport (OMT) is a rich area of research with applications to numerous disciplines including econometrics, fluid dynamics, statistical physics, shape optimization, expert systems, and meteorology. The problem was originally formulated on the space of scalar probability densities. In the present talk, we describe a non-commutative generalization of OMT to the space of Hermitian matrices with trace one, and to the space of matrix-valued probability densities. Our approach follows a fluid dynamics formulation of OMT and utilizes certain results from the quantum mechanics of open systems, in particular the Lindblad equation. The non-commutative OMT introduces a natural distance metric for matrix-valued distributions. Our original motivation was to develop tools for multivariate time series modeling and matrix-valued power spectral analysis. However, the emergent theory turned out to have immediate applications in diffusion tensor imaging (DTI), where images are now tensor fields representing the orientation and shape of brain white matter. Thus, the framework we have developed allows us to compare, interpolate and fuse DTI images in a disciplined manner and, thereby, may lead to high-resolution advances that in turn promise improved in vivo imaging of important brain structures. In addition, our formulation allows determining the gradient flow for the quantum entropy relative to the matricial Wasserstein metric induced by the non-commutative OMT. This may have implications for some key issues in quantum control and information theory.
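
The fluid-dynamics formulation being generalized is the Benamou-Brenier characterization of the scalar Wasserstein-2 distance (recalled here for orientation; in the matrix-valued setting of the talk, the continuity equation is replaced by a Lindblad-type evolution):

W_2^2(\rho_0, \rho_1) \;=\; \min_{\rho, v} \int_0^1 \!\! \int \rho(t, x)\, \|v(t, x)\|^2 \, dx \, dt
\quad \text{s.t.} \quad \partial_t \rho + \nabla \cdot (\rho v) = 0, \quad \rho(0, \cdot) = \rho_0, \ \ \rho(1, \cdot) = \rho_1.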

Yongxin Chen received his BSc in Mechanical Engineering from Shanghai Jiao Tong University, China, in 2011. He obtained his Ph.D. in Mechanical Engineering from the University of Minnesota in 2016 under the supervision of Tryphon Georgiou, with a Ph.D. minor in Mathematics. He is now a postdoctoral researcher in the Department of Medical Physics at Memorial Sloan Kettering Cancer Center. He is interested in the application of mathematics in engineering, physics, economics and biology. His current research focuses on control theory, stochastic processes and optimal mass transport theory.

IDeAS Seminar: Demixing Sines and Spikes: Spectral Super-resolution in the Presence of Outliers

Speaker: 
Carlos Fernandez-Granda, NYU
Date: 
Apr 5 2017 - 2:30pm
Event type: 
IDeAS
Room: 
224 Fine Hall
Abstract: 

In this talk we consider the problem of super-resolving the line spectrum of a multisinusoidal signal from a finite number of samples, some of which may be completely corrupted. Measurements of this form can be modeled as an additive mixture of a sinusoidal and a sparse component. We propose to demix the two components and super-resolve the spectrum of the multisinusoidal signal by solving a convex program. Our main theoretical result is that, up to logarithmic factors, this approach is guaranteed to be successful with high probability for a number of spectral lines that is linear in the number of measurements, even if a constant fraction of the data are outliers. We show that the method can be implemented via semidefinite programming, explain how to adapt it in the presence of dense perturbations, and explore its connection to atomic-norm denoising. In addition, we propose a fast greedy demixing method which provides good empirical results when coupled with a local nonconvex-optimization step.
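
In rough terms (a schematic of the convex program; normalizations in the talk may differ), the samples are modeled as y_k = x_k + z_k, where x is the multisinusoidal component with atomic spectrum \mu and z is the sparse corruption, and the two are demixed via

\min_{\mu, z} \; \|\mu\|_{\mathrm{TV}} + \lambda \|z\|_1
\quad \text{s.t.} \quad y_k = \int e^{i 2\pi f k} \, d\mu(f) + z_k, \quad k = 1, \dots, n,

where \|\mu\|_{\mathrm{TV}} is the total-variation norm of the measure, the continuous analogue of the \ell_1 norm that promotes a sparse (spiky) spectrum.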

Carlos Fernandez-Granda is an Assistant Professor of Mathematics and Data Science at the Center for Data Science and the Courant Institute of Mathematical Sciences at NYU. His research focuses on developing and analyzing optimization-based methods to tackle inverse problems that arise in applications such as neuroscience, computer vision and medical imaging. Before joining NYU, he completed his PhD under the supervision of Emmanuel Candes at Stanford University and then spent a year at Google, where he worked on techniques to process neural data. 

IDeAS Seminar: A random walk on the upper triangular matrices

Speaker: 
Evita Nestoridi, Princeton University
Date: 
Apr 12 2017 - 2:30pm
Event type: 
IDeAS
Room: 
224 Fine Hall
Abstract: 

We study the following random walk on the group of n x n upper triangular matrices with coefficients in Z/pZ and ones along the diagonal. Starting at the identity, at each step we choose a row at random and either add it to or subtract it from the row immediately above. The mixing time of this walk is conjectured to be asymptotically n^2 p^2. While much has been proven in this direction by a number of authors, the full conjecture remains open. We sharpen the techniques introduced by Arias-Castro, Diaconis, and Stanley to show that the dependence of the mixing time on p is p^2. To prove this result, we use supercharacter theory and comparison theory to bound the eigenvalues of this random walk.
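
The walk itself is easy to simulate. The snippet below is a minimal illustration in Python (not code associated with the talk); matrices are n x n, upper triangular with ones on the diagonal, and entries are reduced mod p.

import numpy as np

def step(M, p, rng):
    """One move: pick a row that has a row above it, then add it to or subtract it from that row (mod p)."""
    n = M.shape[0]
    i = rng.integers(1, n)                 # 0-indexed rows 1..n-1 all have a row above them
    sign = rng.choice([-1, 1])
    M[i - 1] = (M[i - 1] + sign * M[i]) % p
    return M

def walk(n, p, steps, seed=0):
    rng = np.random.default_rng(seed)
    M = np.eye(n, dtype=np.int64)          # start at the identity
    for _ in range(steps):
        M = step(M, p, rng)
    return M

print(walk(n=4, p=5, steps=1000))          # a (hopefully well-mixed) unipotent upper triangular matrix mod 5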

IDeAS Seminar: Estimating the Covariance Spectrum

Speaker: 
Weihao Kong, Stanford University
Date: 
Apr 18 2017 - 2:30pm
Event type: 
IDeAS
Room: 
McDonnell Hall, Room 102A
Abstract: 

Suppose one wishes to accurately recover the set of eigenvalues of the covariance matrix of a distribution, given access to samples from the distribution. To what extent can this set be accurately approximated given an amount of data that is sublinear in the dimension of the space? The naive empirical estimate performs poorly when the number of samples is linear or sub-linear in the dimension of the data. In the "large sample, large dimension" setting, we propose an efficient and information-theoretically near-optimal algorithm to learn the moments of the covariance spectral distribution. Further, we show that given the first k moments of a distribution, we can pin down the distribution in Earthmover distance up to an error of O(1/k). These two results combined allow us to efficiently and accurately learn the spectrum of the underlying covariance matrix, yielding a consistent spectrum estimator even with a sub-linear number of samples.
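
The moment-based idea can be sketched as follows, assuming zero-mean data (a simplified account, not the exact estimators of the talk). For independent samples x_1, \dots, x_n with covariance \Sigma, averaging cyclic products over k distinct samples gives unbiased estimates of the spectral moments:

\mathbb{E}\Big[ (x_{i_1}^{\top} x_{i_2})(x_{i_2}^{\top} x_{i_3}) \cdots (x_{i_k}^{\top} x_{i_1}) \Big] \;=\; \mathrm{tr}(\Sigma^k), \qquad i_1, \dots, i_k \ \text{distinct},

and \mathrm{tr}(\Sigma^k)/d is the k-th moment of the spectral distribution. Combined with the Earthmover bound mentioned above, estimating the first k moments accurately pins down the spectrum up to error O(1/k).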

Weihao is a PhD student in the Computer Science Department at Stanford. His current research focuses on applying ideas and insights from probability theory and random matrix theory to yield algorithms for accurately inferring information about large, complex distributions given surprisingly little data.

IDeAS Seminar: Solution of the biharmonic equation on regions with corners

Speaker: 
Manas Rachh, Yale University
Date: 
Apr 19 2017 - 2:30pm
Event type: 
IDeAS
Room: 
224 Fine Hall
Abstract: 

The design of microfluidic devices involves the study of fluid dynamics at small length scales. The behavior of such systems is governed by the Stokes equations (biharmonic equation with gradient boundary conditions), due to the dominant influence of lubrication effects. Typically, micro-channels in microfluidic devices manufactured using planar lithographic techniques have nearly rectangular cross sections and thus tend to have many sharp corners. In such cases, reformulating the governing equation as a boundary integral equation is a natural approach, since this reduces the dimensionality of the problem (discretizing the boundaries alone) and permits high order accuracy in complicated geometries. However, whenever the computational domain has corners, the use of integral equation methods requires particular care as the resulting solutions develop singularities. It is conjectured by Osher (and proven in certain special cases) that the Green’s function for the biharmonic equation on regions with corners has infinitely many oscillations in the vicinity of each corner. In this talk, we show that the solutions of one of the integral equations equivalent to the biharmonic equation can be represented by a series of elementary functions which oscillate with a frequency proportional to the logarithm of the distance from the corner. This representation is used to construct accurate and efficient Nyström discretizations for solving the resulting integral equation with a moderate number of unknowns. We illustrate the performance of our method with several numerical examples.
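
For reference, the boundary-value problem in question is the biharmonic equation with gradient data (standard background rather than anything specific to this talk): in a domain \Omega with boundary \partial\Omega,

\Delta^2 \psi = 0 \ \text{in } \Omega, \qquad \nabla \psi = f \ \text{on } \partial\Omega,

where, for two-dimensional Stokes flow, \psi is the stream function and the prescribed gradient encodes the boundary velocity.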

Manas Rachh is an Assistant Professor in the Applied Mathematics Program at Yale University. His current research interests include developing robust, high-precision, and fast algorithms for problems arising in electrostatics, acoustics, elastostatics, and viscous flow. Before joining Yale, he completed his Ph.D. at the Courant Institute under the supervision of Leslie Greengard.

IDeAS Seminar: Minimax-optimal semi-supervised regression on unknown manifolds

Speaker: 
Amit Moscovich, Weizmann Institute of Science
Date: 
Apr 25 2017 - 2:30pm
Event type: 
IDeAS
Room: 
McDonnell Hall, Room 102A
Abstract: 

In recent years, many semi-supervised regression and classification methods have been proposed. These methods demonstrated empirical success on some data sets, whereas on others the unlabeled data did not appear to help.

To analyze semi-supervised learning theoretically, it is often assumed that the data points lie on a low-dimensional manifold. Under this assumption, [1] and [2] have shown that classical nonparametric regression methods, using only the labeled data, can achieve optimal rates of convergence. This implies that asymptotically, as the number of labeled points tends to infinity, unlabeled data does not help. However, typical semi-supervised scenarios involve few labeled points and plenty of unlabeled ones.

In this work ([3]) we clarify the potential benefits of unlabeled data under the manifold assumption, given a fixed amount of labeled points. Specifically, we prove that for a Lipschitz function on a manifold, a simple semi-supervised regression method based on geodesic k-nearest-neighbors achieves the finite-sample minimax bound on the mean squared error, provided that sufficiently many unlabeled points are available. Furthermore, we show that this approach is computationally efficient, requiring only O(k N log N) operations to estimate the regression function for all N labeled and unlabeled points. We illustrate this approach on two datasets with a manifold structure: indoor localization using WiFi fingerprints and facial pose estimation. In both cases, the proposed method is more accurate and much faster than the popular Laplacian eigenvector regressor [4].
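
A minimal sketch of such a geodesic k-NN regressor is given below, in Python rather than any code released by the authors; the function name, parameters and synthetic example are illustrative only. The proximity graph over labeled and unlabeled points lets graph shortest paths stand in for geodesic distances on the unknown manifold.

import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import dijkstra

def geodesic_knn_regress(X, y_labeled, labeled_idx, k_graph=10, k_reg=5):
    """Predict at every row of X by averaging the labels of the k_reg geodesically nearest labeled points."""
    # Sparse k-NN graph over ALL points; edge weights are Euclidean distances,
    # so shortest-path distances approximate geodesics on the manifold.
    G = kneighbors_graph(X, n_neighbors=k_graph, mode="distance")
    # Approximate geodesic distances from each labeled point to every point.
    D = dijkstra(G, directed=False, indices=labeled_idx)        # shape (n_labeled, N)
    # For each point, indices (into the labeled set) of its k_reg nearest labeled points.
    nearest = np.argsort(D, axis=0)[:k_reg, :]
    return y_labeled[nearest].mean(axis=0)

# Toy example: points near a one-dimensional spiral embedded in the plane.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(1, 4 * np.pi, 2000))
X = np.column_stack([t * np.cos(t), t * np.sin(t)]) + 0.05 * rng.normal(size=(2000, 2))
labeled_idx = rng.choice(2000, size=50, replace=False)
y_hat = geodesic_knn_regress(X, np.sin(t[labeled_idx]), labeled_idx)   # estimate sin(t) everywhere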

The talk should be accessible to anyone with a general background in statistics and machine learning. Specifically, no knowledge of manifold geometry or minimax theory is assumed.

[1] Bickel, P. J. and Li, B. "Local polynomial regression on unknown manifolds". Tomography, Networks and Beyond (2007).

[2] Lafferty, J. and Wasserman, L. "Statistical analysis of semi-supervised regression". NIPS (2007).

[3] Moscovich, A. Jaffe, A. and Nadler, B. "Minimax-optimal semi-supervised regression on unknown manifolds". Accepted to AISTATS (2017). https://arxiv.org/abs/1611.02221

[4] Belkin, M. and Niyogi, P. "Semi-supervised learning on Riemannian manifolds". Machine Learning (2004).


Amit Moscovich recently finished his PhD in the Department of Computer Science and Applied Mathematics at the Weizmann Institute of Science under the supervision of Boaz Nadler. His research is focused on finding computationally and statistically efficient methods of inference which adapt to certain phenomena in the input space.

IDeAS Seminar: Sketchy decisions: Low-rank matrix optimization with optimal storage

Speaker: 
Joel Tropp, Caltech
Date: 
Apr 26 2017 - 2:30pm
Event type: 
IDeAS
Room: 
224 Fine Hall
Abstract: 

Convex matrix optimization problems with low-rank solutions play a fundamental role in signal processing, statistics, and related disciplines. These problems are difficult to solve because of the cost of maintaining the matrix decision variable, even though the low-rank solution has few degrees of freedom. This talk presents the first algorithm that provably solves these problems using optimal storage. The algorithm produces high-quality solutions to large problem instances that, previously, were intractable.
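
One ingredient behind the storage savings can be described schematically (in the notation of the related sketching literature; a sketch of the idea, not the talk's full algorithm): a positive-semidefinite, nearly low-rank decision variable A is never stored explicitly. Instead, one maintains only the sketch Y = A\Omega for a fixed random test matrix \Omega with r columns, updates Y as rank-one changes are made to A, and at the end reconstructs a rank-r approximation via a Nyström-type formula such as

\hat{A} \;=\; Y\, (\Omega^{*} Y)^{\dagger}\, Y^{*},

so the working storage scales with the sketch size rather than with the full matrix dimension.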

Joint with Volkan Cevher, Roarke Horstmeyer, Quoc Tran-Dinh, Madeleine Udell, and Alp Yurtsever.

Joel A. Tropp is Professor of Applied & Computational Mathematics at the California Institute of Technology. He earned the Ph.D. degree in Computational Applied Mathematics from the University of Texas at Austin in 2004. His research centers on signal processing, numerical analysis, and random matrix theory. Prof. Tropp won the 2008 Presidential Early Career Award for Scientists and Engineers. He received society best paper awards from SIAM in 2010, EUSIPCO in 2011, and IMA in 2015. He was also recognized as a Thomson Reuters Highly Cited Researcher in Computer Science in 2014, 2015, and 2016.

IDeAS Seminar: Algorithms for Reducing the Computational Burden of cryo-EM

Speaker: 
Marcus Brubaker, University of Toronto
Date: 
Apr 27 2017 - 2:30pm
Event type: 
IDeAS
Room: 
McDonnell Hall, Room 102A
Abstract: 

Computational expense has long been a challenge for 3D structure determination in electron cryomicroscopy (cryo-EM). This problem has become more pressing as resolutions have increased and the desire to computationally separate conformational variations has required ever-larger datasets. This talk presents two new algorithmic developments which, when coupled with modern GPU hardware, dramatically reduce the computational requirements for cryo-EM. Given low-resolution starting structures, high-resolution structures can now be obtained in as little as tens of minutes on modest desktop workstations. Further, despite the severe non-convexity of the objective function, these new refinement algorithms have shown themselves to be robust to local minima, enabling ab initio structure determination and 3D classification. Together, these algorithms can produce ab initio high-resolution structures in an hour or two. This talk will describe the underlying algorithmic advances (stochastic optimization and branch-and-bound search) along with results from their implementation in the recently released cryoSPARC software package.

Dr. Marcus Brubaker is an Assistant Professor of Computer Science at York University in Toronto, Canada.  He received his Ph.D. in Computer Science from the University of Toronto in 2011 and, before joining York University, worked as a postdoctoral fellow at the Toyota Technological Institute at Chicago.  His research interests span computer vision, machine learning and statistics and he has worked on a range of problems including video-based human motion estimation, physical models of human motion, Bayesian inference, ballistic forensics, electron cryomicroscopy and autonomous driving.  He also consults with industry on machine learning and computer vision related projects and has been involved in a number of startups, including as the co-founder of Structura Biotechnology Inc.

IDeAS Seminar

Speaker: 
Afonso Bandeira, NYU
Date: 
May 1 2017 - 2:30pm
Event type: 
IDeAS
Room: 
110 Fine Hall
Abstract: 

TBA

IDeAS Seminar

Speaker: 
Joe Zhong, Princeton University
Date: 
May 10 2017 - 2:30pm
Event type: 
IDeAS
Room: 
224 Fine Hall
Abstract: 

TBA