From Local Views to Global Embedding: Methods in Bottom-Up Manifold Learning
Abstract:
A bottom-up approach to manifold learning involves two key steps: constructing local views of the data and aligning them to produce a global embedding. In this talk:
- We introduce a method to obtain low-dimensional local views of high-dimensional data while accounting for the global geometry of the underlying data manifold. Our approach utilizes low-frequency global eigenvectors of the graph Laplacian to construct low-distortion local views.
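The Laplacian-eigenvector construction above can be sketched in a few lines. The toy dataset (a noisy circle in R^10), the k-NN graph, and the choice of which eigenvectors to keep are illustrative assumptions here, not the talk's actual selection procedure:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import laplacian
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)

# Toy data: a noisy circle (a 1-d manifold) embedded in R^10.
t = rng.uniform(0.0, 2.0 * np.pi, 500)
X = np.zeros((500, 10))
X[:, 0], X[:, 1] = np.cos(t), np.sin(t)
X += 0.01 * rng.standard_normal(X.shape)

# Symmetrized k-NN graph and unnormalized graph Laplacian L = D - W.
W = kneighbors_graph(X, n_neighbors=10, mode="connectivity")
W = csr_matrix(0.5 * (W + W.T))
L = laplacian(W)

# Low-frequency eigenvectors of L (a dense eigensolver is fine at this size;
# the first eigenvector is constant and carries no coordinate information).
vals, vecs = np.linalg.eigh(L.toarray())

# A local view of point i: a few low-frequency global eigenvectors restricted
# to the neighborhood of i, used as low-dimensional local coordinates.
i = 0
nbrs = W[i].nonzero()[1]
local_view = vecs[nbrs, 1:3]
```

Because the eigenvectors are global, the local coordinates at each point remain consistent with the geometry of the whole manifold rather than of one neighborhood in isolation.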
- We present a framework for aligning the local views of a possibly closed or non-orientable data manifold to produce an embedding in its intrinsic dimension through tearing. Using a spectral coloring scheme along the tear, we visually recover the gluing instructions, revealing the manifold's topology. We showcase applications of this framework to various synthetic and real-world datasets.
- Motivated by the practical effectiveness of Riemannian gradient descent (RGD) for aligning local views in the above framework, in particular its fast convergence, we establish noise-stability and convergence guarantees for RGD applied to the alignment problem.
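A minimal sketch of RGD on the orthogonal group, for a single pair of views in a Procrustes-type alignment. The two-view setup, step size, and polar retraction are my illustrative choices, not the talk's algorithm; the result is checked against the closed-form Procrustes solution:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3
A = rng.standard_normal((50, d))                # one local view
O_true = np.linalg.qr(rng.standard_normal((d, d)))[0]
if np.linalg.det(O_true) < 0:
    O_true[:, 0] *= -1                          # keep O_true in SO(3), reachable from the identity
B = A @ O_true                                  # the other view: A under an unknown rotation

def polar_retract(M):
    """Map M back onto the orthogonal group via its polar factor."""
    U, _, Vt = np.linalg.svd(M)
    return U @ Vt

O = np.eye(d)
step = 0.5 / np.linalg.norm(A, 2) ** 2          # conservative step for f(O) = ||A O - B||_F^2
for _ in range(1000):
    G = 2.0 * A.T @ (A @ O - B)                 # Euclidean gradient of f
    rgrad = G - 0.5 * O @ (O.T @ G + G.T @ O)   # project G onto the tangent space at O
    O = polar_retract(O - step * rgrad)         # retract the step back onto the manifold

# Closed-form Procrustes solution for comparison.
U, _, Vt = np.linalg.svd(A.T @ B)
O_star = U @ Vt
```

The full alignment problem optimizes over one orthogonal transform per local view simultaneously, but each iteration follows this same project-then-retract pattern.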
- To assess the quality of an embedding, we propose Lipschitz-type pointwise measures of global distortion. By bounding global distortion in terms of the distortion of the local views and the alignment error between them, we confirm the necessity of incorporating a repulsion term in manifold learning objectives to achieve low-distortion embeddings.
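One way to make such a pointwise measure concrete is the product of the local expansion and contraction constants of the embedding over each neighborhood. This particular definition is an assumption for illustration, not necessarily the talk's exact measure:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def pointwise_distortion(X, Y, n_neighbors=10):
    """Lipschitz-type pointwise distortion of an embedding Y of data X.

    At each point i, over its k nearest neighbors j in X, take
      expansion   = max_j ||y_i - y_j|| / ||x_i - x_j||
      contraction = max_j ||x_i - x_j|| / ||y_i - y_j||
    and report their product, which equals 1 iff the embedding is a
    similarity (an isometry up to uniform scale) on that neighborhood.
    """
    idx = NearestNeighbors(n_neighbors=n_neighbors + 1).fit(X).kneighbors(
        X, return_distance=False
    )
    out = np.empty(len(X))
    for i, nb in enumerate(idx):
        nb = nb[1:]  # drop the point itself
        dX = np.linalg.norm(X[nb] - X[i], axis=1)
        dY = np.linalg.norm(Y[nb] - Y[i], axis=1)
        ratio = dY / dX
        out[i] = ratio.max() / ratio.min()  # expansion * contraction
    return out

# Sanity check: an exact isometry has pointwise distortion 1 everywhere.
t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
X = np.zeros((400, 10))
X[:, 0], X[:, 1] = np.cos(t), np.sin(t)
Y = X[:, :2]  # dropping identically-zero coordinates preserves all distances
dist = pointwise_distortion(X, Y)
```

Taking a maximum over these pointwise values gives one global distortion summary; an embedding that collapses distant points onto each other blows up the contraction factor, which is the failure mode a repulsion term in the objective guards against.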