
Statistical learning on measures: an application to persistence diagrams

Abstract

We consider a binary supervised classification problem where, instead of observing data in a finite-dimensional Euclidean space, we observe measures on a compact space $\mathcal{X}$. Formally, we observe data $D_N = (\mu_1, Y_1), \ldots, (\mu_N, Y_N)$, where $\mu_i$ is a measure on $\mathcal{X}$ and $Y_i$ is a label in $\{0, 1\}$. Given a set $\mathcal{F}$ of base classifiers on $\mathcal{X}$, we build corresponding classifiers in the space of measures. We provide upper and lower bounds on the Rademacher complexity of this new class of classifiers that can be expressed simply in terms of corresponding quantities for the class $\mathcal{F}$. If the measures $\mu_i$ are uniform over a finite set, this classification task boils down to a multi-instance learning problem. However, our approach allows more flexibility and diversity in the input data we can deal with. While such a framework has many possible applications, this work strongly emphasizes classifying data via topological descriptors called persistence diagrams. These objects are discrete measures on $\mathbb{R}^2$, where the coordinates of each point correspond to the range of scales at which a topological feature exists. We present several classifiers on measures and show how they can, both heuristically and theoretically, achieve good classification performance in various settings in the case of persistence diagrams.
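To make the lifting idea concrete, here is a minimal sketch of one natural way to turn a base classifier on $\mathcal{X} = \mathbb{R}^2$ into a classifier on discrete measures such as persistence diagrams: average the base classifier against the measure and take a majority vote. This is an illustration under our own assumptions, not the paper's actual construction; the threshold rule `base_classifier` and the parameter `tau` are hypothetical.

```python
import numpy as np

# A persistence diagram as a discrete (uniform) measure on R^2:
# each row is a (birth, death) point of one topological feature.
diagram_a = np.array([[0.0, 0.1], [0.2, 0.3]])   # short-lived features
diagram_b = np.array([[0.0, 1.5], [0.1, 2.0]])   # long-lived features

def base_classifier(points, tau=0.5):
    """Hypothetical base classifier on X = R^2: a point is 'significant'
    when its persistence (death - birth) exceeds a threshold tau."""
    return (points[:, 1] - points[:, 0] > tau).astype(float)

def measure_classifier(diagram, tau=0.5):
    """Lift the base classifier to the space of measures by integrating
    it against the uniform measure on the diagram's points (i.e. taking
    its mean) and thresholding at 1/2."""
    return int(base_classifier(diagram, tau).mean() > 0.5)

print(measure_classifier(diagram_a))  # 0: no significant features
print(measure_classifier(diagram_b))  # 1: long-lived features dominate
```

When each $\mu_i$ is the uniform measure on a finite point set, as here, this reduces to a simple multi-instance voting rule; the framework in the abstract covers more general measures.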
