
We study the problem of private online learning, specifically, online prediction from experts (OPE) and online convex optimization (OCO). We propose a new transformation that converts lazy online learning algorithms into private algorithms, and we apply it to existing lazy algorithms for OPE and OCO to obtain differentially private algorithms for these problems. Our final algorithms obtain regret that significantly improves on prior work in the high-privacy regime $\varepsilon \ll 1$, achieving $\sqrt{T \log d} + T^{1/3} \log(d)/\varepsilon^{2/3}$ for DP-OPE and $\sqrt{T} + T^{1/3} \sqrt{d}/\varepsilon^{2/3}$ for DP-OCO. We also complement our results with a lower bound for DP-OPE, showing that these rates are optimal for a natural family of low-switching private algorithms.
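The abstract does not spell out the transformation itself, but the lower bound is stated for low-switching private algorithms, so a minimal sketch of the generic low-switching baseline may help fix ideas: split the $T$ rounds into blocks, select one expert per block with the exponential mechanism applied to cumulative losses, and play that expert for the whole block. Everything below (the name `private_batched_experts`, the even split of $\varepsilon$ across block selections via basic composition, losses in $[0,1]$) is an illustrative assumption, not the paper's construction.

```python
import numpy as np

def private_batched_experts(losses, eps, block_len, rng=None):
    """Toy low-switching DP experts baseline (illustrative, not the paper's
    transformation): split T rounds into blocks, pick one expert per block
    with the exponential mechanism on cumulative losses, and play it for
    the whole block.

    losses: (T, d) array with entries in [0, 1].
    eps:    total privacy budget, split evenly across block selections
            (basic composition; an assumption for this sketch).
    Returns the regret against the best fixed expert in hindsight.
    """
    rng = np.random.default_rng() if rng is None else rng
    T, d = losses.shape
    n_blocks = (T + block_len - 1) // block_len
    eps_per_block = eps / n_blocks
    cum = np.zeros(d)      # cumulative losses observed so far
    total_loss = 0.0
    for b in range(n_blocks):
        # Exponential mechanism via the Gumbel-max trick: with score -cum
        # and sensitivity 1 (one changed round shifts each cumulative loss
        # by at most 1), adding Gumbel noise of scale 2/eps' and taking the
        # argmax samples an expert with the right distribution.
        noisy = -cum + rng.gumbel(scale=2.0 / eps_per_block, size=d)
        expert = int(np.argmax(noisy))
        block = losses[b * block_len:(b + 1) * block_len]
        total_loss += block[:, expert].sum()
        cum += block.sum(axis=0)
    return total_loss - losses.sum(axis=0).min()
```

Balancing the block length trades the batching error against the per-selection noise; the rates quoted above improve on what this naive baseline achieves, which is precisely the point of the proposed transformation.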

Related readings and updates.

Private Online Prediction from Experts: Separations and Faster Rates

*= Equal Contributors. Online prediction from experts is a fundamental problem in machine learning, and several works have studied this problem under privacy constraints. We propose and analyze new algorithms for this problem that improve over the regret bounds of the best existing algorithms for non-adaptive adversaries. For approximate differential privacy, our algorithms achieve regret bounds of $O(\sqrt{T \log d} + \log$…

Near-Optimal Algorithms for Private Online Optimization in the Realizable Regime

*= Equal Contributors. We consider online learning problems in the realizable setting, where there is a zero-loss solution, and propose new differentially private (DP) algorithms that obtain near-optimal regret bounds. For the problem of online prediction from experts, we design new algorithms that obtain near-optimal regret $O\big(\varepsilon^{-1} \log^{1.5} d\big)$, where $d$ is the number of experts. This…