Themes from numerical tensor calculus
Low-rank tensor methods compress high-dimensional arrays into more manageable sizes. This circumvents the curse of dimensionality, in which storage and computational costs scale exponentially with the number of dimensions. Over the past decade, these methods have enabled advances in signal processing, numerical linear algebra, machine learning, and many other fields.
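As a minimal illustration of the storage savings (not an example from the talk itself): a rank-1 tensor with d modes and n points per mode costs n^d numbers to store densely, but only d·n numbers in factored form. The sketch below, using NumPy, makes this concrete for d = 3.

```python
import numpy as np

# A rank-1 3-way tensor: the outer product of three vectors.
n, d = 50, 3
rng = np.random.default_rng(0)
factors = [rng.random(n) for _ in range(d)]      # d*n = 150 stored numbers

# Forming the tensor densely requires n**d = 125,000 numbers.
dense = np.einsum('i,j,k->ijk', *factors)

dense_size = dense.size                           # n**d
factored_size = sum(f.size for f in factors)      # d*n
print(f"dense: {dense_size}, factored: {factored_size}")
```

Storing only the factors reproduces the full tensor exactly, so for data that is (approximately) low rank, the exponential cost in d is replaced by a linear one.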
This talk will discuss two different aspects of low-rank tensor methods. The first part presents a simple, black-box technique that introduces multiresolution structure into low-rank tensor formats, in order to treat data with multiple characteristic length scales more efficiently. This achieves higher compression of data tensors than standard tensor compression algorithms. Moreover, decomposing a data tensor into its different length scales makes it possible to identify and extract the salient features of each length scale separately.
The second part of the talk switches focus and shows how low-rank tensor methods can be beneficial in large-scale signal processing tasks. We study phase retrieval problems with low-rank tensor structure, where this structure enables recovery of the unknown signal from a sub-linear number of measurements.
Oscar Mickelin is currently a 5th-year PhD student in the mathematics department at MIT, affiliated with the MIT Laboratory for Information and Decision Systems. His research develops low-rank tensor and matrix methods for computing with large-scale data. He previously obtained an MSc in mathematics from Caltech and an MSc in Engineering Physics from KTH.