A fundamental question in the study of large networks is how we compare networks. Given two large networks, say FB and LinkedIn today, or alternatively, FB five years ago and FB today, how similar do we consider them to be? Another question is how to describe large networks in terms of a suitable class of (possibly non-parametric) models. Finally, how do we consistently estimate such a model from an observed graph?
In this talk, I describe how the theory of graph limits and graphons developed over the last ten years can help to address these questions, reviewing both the classical theory for dense graphs and newer results for sparse graphs, including power-law graphs. Time permitting, I will also discuss how consistent estimation can be done in a way that respects privacy, in the sense of differential privacy, i.e., so that the removal or addition of a single vertex has only a small influence on the output.
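As a concrete illustration of the graphon framework (not part of the abstract itself), a graphon is a symmetric measurable function W: [0,1]² → [0,1], and the associated W-random graph assigns each vertex a latent position x_i drawn uniformly from [0,1], then connects i and j independently with probability W(x_i, x_j). A minimal sketch, using the illustrative example graphon W(x, y) = xy:

```python
import random

def sample_w_random_graph(n, W, seed=0):
    """Sample a W-random graph G(n, W): draw latent positions
    x_1, ..., x_n uniformly in [0, 1], then include each edge {i, j}
    independently with probability W(x_i, x_j)."""
    rng = random.Random(seed)
    xs = [rng.random() for _ in range(n)]
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < W(xs[i], xs[j]):
                edges.add((i, j))
    return edges

# Example graphon (hypothetical choice for illustration): W(x, y) = x * y.
# The expected edge density is the integral of W, here 1/4.
edges = sample_w_random_graph(200, lambda x, y: x * y, seed=42)
```

Dense graph limits are exactly the graphs that arise this way: a sequence of graphs converges (in cut metric) if and only if it looks more and more like G(n, W) for a fixed graphon W, which is what makes graphons a natural non-parametric model class for the estimation questions above.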