# The Optimal Approximation Factor in Density Estimation

Consider the following problem: given two arbitrary densities q1, q2 and sample access to an unknown target density p, determine which of the qi is closer to p in total variation distance.

A beautiful (and simple) result due to Yatracos (1985) shows that this problem is tractable in the following sense: there exists an algorithm that uses O(epsilon^{-2}) samples from p and outputs qi such that, with high probability, TV(qi, p) <= 3*OPT + epsilon, where OPT = min{TV(q1, p), TV(q2, p)}. Moreover, this result extends to any finite class of densities Q: there exists an algorithm that outputs the best density in Q up to a multiplicative approximation factor of 3.
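The test underlying this result is the classical Scheffé test: compare the candidates' mass on the set A = {x : q1(x) > q2(x)} with the empirical mass that the samples from p place on A, and output the candidate whose mass is closer. Below is a minimal sketch for densities on a finite domain; the representation (probability vectors indexed by outcome) and the function name are illustrative, not from the talk.

```python
def scheffe_test(q1, q2, samples):
    """Scheffé test sketch on a finite domain.

    q1, q2 -- candidate densities, given as lists of probabilities
              indexed by outcome (illustrative representation).
    samples -- i.i.d. draws from the unknown target density p.

    Returns 1 or 2, the index of the candidate judged closer to p.
    """
    # Scheffé set: outcomes where q1 assigns strictly more mass than q2.
    A = {x for x in range(len(q1)) if q1[x] > q2[x]}

    # Each candidate's mass on A, computed exactly from the densities.
    q1_mass = sum(q1[x] for x in A)
    q2_mass = sum(q2[x] for x in A)

    # Empirical estimate of p(A) from the samples; O(epsilon^{-2})
    # samples suffice to estimate it to within epsilon.
    p_hat = sum(1 for s in samples if s in A) / len(samples)

    # Output the candidate whose mass on A is closer to the estimate.
    return 1 if abs(q1_mass - p_hat) <= abs(q2_mass - p_hat) else 2
```

A short analysis shows this rule returns a candidate qi with TV(qi, p) <= 3*OPT + epsilon once p(A) is estimated to within epsilon, matching the guarantee above.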

We complement and extend this result by showing that: (i) the factor 3 cannot be improved if one restricts the algorithm to output a density from Q, and (ii) if one allows the algorithm to output arbitrary densities (e.g. a mixture of densities from Q), then the approximation factor can be reduced to 2, which is optimal. In particular, this demonstrates an advantage of improper learning over proper learning in this setup.

Our algorithms rely on estimating carefully chosen surrogate metrics for the total variation distance, and our sample complexity bounds exploit techniques from Adaptive Data Analysis.

Joint work with Olivier Bousquet (Google Brain) and Daniel Kane (UCSD).

*Shay Moran is a postdoctoral fellow at the Computer Science Department of Princeton University. He graduated from the Technion in September ’16. During 2017 he was a postdoctoral fellow at UCSD and at the Simons Institute in Berkeley. During 2018 he was a member at the Institute for Advanced Study. In October ’19 he will join the Math Department at the Technion as an Assistant Professor. Shay’s research interests revolve around mathematical problems that arise in computer science, with a focus on combinatorial problems related to machine learning.*