Learning piecewise Lipschitz functions in changing environments

Optimization in the presence of sharp (non-Lipschitz), unpredictable (w.r.t.\ time and amount) changes is a challenging and largely unexplored problem of great significance. We consider the class of piecewise Lipschitz functions, which is the most general setting considered in the literature for this problem, and which arises naturally in various combinatorial algorithm selection problems where utility functions can have sharp discontinuities. The usual performance metric of `static' regret minimizes the gap between the payoff accumulated and that of the best fixed point for the entire duration, and thus fails to capture changing environments. Shifting regret is a useful alternative, which allows for up to $s$ environment shifts. In this work we provide an $O(\sqrt{sdT\log T} + sT^{1-\beta})$ regret bound for $\beta$-dispersed functions, where $\beta$ roughly quantifies the rate at which discontinuities appear in the utility functions in expectation (typically $\beta \ge 1/2$ in problems of practical interest). We show this bound is optimal up to sub-logarithmic factors. We further show how to improve the bounds when selecting from a small pool of experts. We empirically demonstrate a key application of our algorithms to online clustering problems, with 15-40% relative gains over static-regret-based algorithms on popular benchmarks.
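To make the comparison concrete, below is a minimal sketch of the two regret notions discussed above, written in assumed but standard notation (horizon $T$, utility functions $u_1,\dots,u_T$ over a domain $\mathcal{C}$, the algorithm's choices $\rho_1,\dots,\rho_T$, and shift budget $s$); the precise formalization in the paper may differ.

% Assumed notation: the learner plays rho_t in round t and then observes u_t.
% Static regret: gap to the best single fixed point in hindsight.
\[
  R_T^{\mathrm{static}}
    = \max_{\rho \in \mathcal{C}} \sum_{t=1}^{T} u_t(\rho)
      \;-\; \sum_{t=1}^{T} u_t(\rho_t).
\]
% Shifting regret with at most s shifts: gap to the best comparator sequence
% that changes its point at most s times over the horizon.
\[
  R_T^{(s)}
    = \max_{\substack{\rho_1^\ast,\dots,\rho_T^\ast \in \mathcal{C} \\
        |\{t < T \,:\, \rho_t^\ast \neq \rho_{t+1}^\ast\}| \le s}}
      \sum_{t=1}^{T} u_t(\rho_t^\ast)
      \;-\; \sum_{t=1}^{T} u_t(\rho_t).
\]

Setting $s = 0$ recovers static regret, so any shifting regret guarantee also bounds static regret; the interest here is that the comparator may adapt to environment changes.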