Private List Learnability vs. Online List Learnability

This work explores the connection between differential privacy (DP) and online learning in the context of PAC list learning. In this setting, a k-list learner outputs a list of k potential predictions for an instance x and incurs a loss if the true label of x is not included in the list. A basic result in the multiclass PAC framework with a finite number of labels states that private learnability is equivalent to online learnability [Alon, Livni, Malliaris, and Moran (2019); Bun, Livni, and Moran (2020); Jung, Kim, and Tewari (2020)]. Perhaps surprisingly, we show that this equivalence does not hold in the context of list learning. Specifically, we prove that, unlike in the multiclass setting, a finite k-Littlestone dimension (a variant of the classical Littlestone dimension that characterizes online k-list learnability) is not a sufficient condition for DP k-list learnability. However, similar to the multiclass case, we prove that it remains a necessary condition.

To demonstrate where the equivalence breaks down, we provide an example showing that the class of monotone functions with k+1 labels over ℕ is online k-list learnable, but not DP k-list learnable. This leads us to introduce a new combinatorial dimension, the \emph{k-monotone dimension}, which serves as a generalization of the threshold dimension. Unlike in the multiclass setting, where the Littlestone and threshold dimensions are finite together, for k ≥ 2 the k-Littlestone and k-monotone dimensions do not exhibit this relationship. We prove that a finite k-monotone dimension is another necessary condition for DP k-list learnability, alongside a finite k-Littlestone dimension. Whether the finiteness of both dimensions implies private k-list learnability remains an open question.
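To make the setting concrete, here is a minimal sketch (in Python, with illustrative names not taken from the paper) of the 0/1 list loss and of the monotone-function class used in the separating example: a k-list learner outputs up to k candidate labels per instance and pays loss 1 only when the true label is missing from the list.

```python
def list_loss(predicted, true_label):
    """0/1 list loss: 0 if the true label appears in the list, else 1."""
    return 0 if true_label in predicted else 1

def is_monotone(f, domain):
    """Check that f is non-decreasing on an ordered finite slice of its domain."""
    values = [f(x) for x in domain]
    return all(a <= b for a, b in zip(values, values[1:]))

# A monotone function over the naturals with k+1 = 3 labels (k = 2),
# chosen here purely for illustration: it steps 0 -> 1 -> 2.
k = 2
f = lambda x: min(x // 4, k)
assert is_monotone(f, range(20))

# A k-list prediction can hedge between adjacent labels at each point:
x = 5
prediction = [f(x), min(f(x) + 1, k)]   # a list of at most k = 2 labels
print(list_loss(prediction, f(x)))      # -> 0, the true label is covered
```

The point of the example class is that each such function takes at most k+1 distinct values, so a k-list can always cover all but one candidate label, which is what makes the class online k-list learnable despite its infinite threshold-like structure over ℕ.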
@article{hanneke2025_2506.12856,
  title={Private List Learnability vs. Online List Learnability},
  author={Steve Hanneke and Shay Moran and Hilla Schefler and Iska Tsubari},
  journal={arXiv preprint arXiv:2506.12856},
  year={2025}
}