
*= Equal Contributors

Online prediction from experts is a fundamental problem in machine learning, and several works have studied it under privacy constraints. We propose and analyze new algorithms for this problem that improve over the regret bounds of the best existing algorithms for non-adaptive adversaries. For approximate differential privacy, our algorithms achieve regret bounds of $O(\sqrt{T \log d} + \log d/\varepsilon)$ for the stochastic setting and $O(\sqrt{T \log d} + T^{1/3} \log d/\varepsilon)$ for oblivious adversaries (where $d$ is the number of experts). For pure DP, our algorithms are the first to obtain sub-linear regret for oblivious adversaries in the high-dimensional regime $d \ge T$. Moreover, we prove new lower bounds for adaptive adversaries. Our results imply that, unlike in the non-private setting, there is a strong separation between the optimal regret for adaptive and non-adaptive adversaries for this problem. Our lower bounds also show a separation between pure and approximate differential privacy for adaptive adversaries, where the latter is necessary to achieve the non-private $O(\sqrt{T})$ regret.
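A common recipe behind private experts algorithms is follow-the-perturbed-leader, where the random perturbation doubles as the privacy mechanism. Below is a minimal, illustrative Python sketch of that idea, assuming per-round losses in $[0, 1]$; the function name and the naive Laplace calibration are ours for illustration, and this is not the algorithm analyzed in the paper.

```python
import numpy as np

def private_ftpl_experts(losses, epsilon, seed=0):
    """Toy differentially private follow-the-perturbed-leader.

    losses: array of shape (T, d) with per-round expert losses in [0, 1].
    epsilon: total privacy budget for the whole loss sequence.

    Fresh Laplace noise is added to the cumulative losses each round and
    the noisy minimizer is followed; each round is then roughly
    (epsilon/T)-DP, so basic composition gives roughly epsilon-DP overall.
    This is a sketch, not a tight analysis.
    """
    rng = np.random.default_rng(seed)
    T, d = losses.shape
    cum = np.zeros(d)                 # cumulative loss of each expert
    learner_loss = 0.0
    for t in range(T):
        noise = rng.laplace(scale=T / epsilon, size=d)  # naive calibration
        leader = int(np.argmin(cum + noise))            # perturbed leader
        learner_loss += losses[t, leader]
        cum += losses[t]              # reveal round-t losses after predicting
    return learner_loss - cum.min()   # realized regret vs. best fixed expert
```

With the naive scale $T/\varepsilon$ the privacy noise dominates the regret; the paper's contribution is precisely to shrink the private overhead to the additive $\log d/\varepsilon$ and $T^{1/3} \log d/\varepsilon$ terms quoted above.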

Related readings and updates.

Near-Optimal Algorithms for Private Online Optimization in the Realizable Regime

We consider online learning problems in the realizable setting, where there is a zero-loss solution, and propose new Differentially Private (DP) algorithms that obtain near-optimal regret bounds. For the problem of online prediction from experts, we design new algorithms that obtain near-optimal regret $O(\varepsilon^{-1} \log^{1.5} d)$ where $d$ is the number of experts. This…
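Notably, this realizable-regime bound is independent of the horizon $T$: in contrast with the $O(\sqrt{T \log d})$ rates above, the cumulative regret stays $O(\varepsilon^{-1} \log^{1.5} d)$ however long the game runs, whenever a zero-loss expert exists.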

Private Stochastic Convex Optimization: Optimal Rates in ℓ1 Geometry

Stochastic convex optimization over an $\ell_1$-bounded domain is ubiquitous in machine learning applications such as LASSO, but remains poorly understood when learning with differential privacy. We show that, up to logarithmic factors, the optimal excess population loss of any $(\varepsilon, \delta)$-differentially private optimizer is $\sqrt{\log(d)/n} + \sqrt{d}/(\varepsilon n)$. The upper bound is based on…
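A quick consequence of the stated rate: the private term $\sqrt{d}/(\varepsilon n)$ is dominated by the statistical term $\sqrt{\log(d)/n}$ exactly when

$\sqrt{d}/(\varepsilon n) \le \sqrt{\log(d)/n} \iff n \ge d/(\varepsilon^2 \log d),$

so privacy comes essentially for free once the sample size reaches $d/(\varepsilon^2 \log d)$; below that threshold, the $\sqrt{d}$ dimension dependence is the price of $(\varepsilon, \delta)$-DP in $\ell_1$ geometry.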