Sparser, better, faster, stronger: Emerging approaches in the theory and practice of randomized dimensionality reduction

IDeAS
Feb 26, 2026
2-3 pm
111 JADWIN HALL

Abstract:

For nearly thirty years, randomized dimensionality reduction techniques have been a core tool in the theory and practice of computation. Despite this long history, basic questions remain hotly debated: Which dimensionality reduction map should be used? How can the map be adapted to structure in the problem? What is the right theoretical framework for analyzing randomized dimensionality reduction? This talk presents a new approach to the theory of randomized dimensionality reduction in which the core feature of a good dimensionality reduction map is injectivity: it is fine if the map stretches things out a bit, as long as it does not annihilate any element of the data space. This approach yields new analyses of sparse and tensor-structured dimensionality reduction maps that come closer to describing how these maps behave in practice. The talk is designed for a general audience and assumes no prior familiarity with randomized dimensionality reduction.
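To make the injectivity idea concrete, here is a minimal illustrative sketch (not the speaker's construction or analysis) using a standard sparse Johnson-Lindenstrauss-style map: each column of the matrix has only a few nonzero random signs. The function name `sparse_jl_map` and all parameter choices (`d`, `m`, `s`) are hypothetical, chosen just to show that random unit vectors are mildly stretched or shrunk but none are sent near zero.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_jl_map(d, m, s, rng):
    """Build an m x d sparse random matrix: each column has s nonzero
    entries equal to +-1/sqrt(s) in random rows. This is one standard
    sparse dimensionality reduction construction, used here purely
    for illustration."""
    S = np.zeros((m, d))
    for j in range(d):
        rows = rng.choice(m, size=s, replace=False)
        S[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
    return S

d, m, s = 1000, 100, 8      # reduce 1000 dimensions to 100
S = sparse_jl_map(d, m, s, rng)

# Apply the map to random unit vectors and record how much each norm
# is distorted. "Injectivity" in the talk's sense asks that no
# vector's image collapses to (near) zero, even if some are
# stretched or shrunk a bit.
X = rng.standard_normal((d, 50))
X /= np.linalg.norm(X, axis=0)
norms = np.linalg.norm(S @ X, axis=0)
print(f"min image norm: {norms.min():.3f}, max image norm: {norms.max():.3f}")
```

Running this, the image norms cluster around 1: the map distorts lengths modestly but annihilates nothing, which is the property the abstract highlights.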