
*=Equal Contributors

We consider online learning problems in the realizable setting, where there is a zero-loss solution, and propose new Differentially Private (DP) algorithms that obtain near-optimal regret bounds. For the problem of online prediction from experts, we design new algorithms that obtain near-optimal regret $O\big(\varepsilon^{-1} \log^{1.5} d\big)$, where $d$ is the number of experts. This significantly improves over the best existing regret bounds for the DP non-realizable setting, which are $O\big(\varepsilon^{-1} \min\big\{d, T^{1/3}\log d\big\}\big)$. We also develop an adaptive algorithm for the small-loss setting with regret $O(L \log d + \varepsilon^{-1} \log^{1.5} d)$, where $L$ is the total loss of the best expert. Additionally, we consider DP online convex optimization in the realizable setting and propose an algorithm with near-optimal regret $O\big(\varepsilon^{-1} d^{1.5}\big)$, as well as an algorithm for the smooth case with regret $O\big(\varepsilon^{-2/3} (dT)^{1/3}\big)$, both significantly improving over existing bounds in the non-realizable regime.
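For context, here is the standard regret definition the abstract implicitly uses (textbook notation, not taken from the paper): with $d$ experts, losses $\ell_t(i) \in [0,1]$, and the algorithm's pick $i_t$ at round $t$, the regret after $T$ rounds is

$$\mathrm{Regret}(T) \;=\; \sum_{t=1}^{T} \ell_t(i_t) \;-\; \min_{1 \le i \le d} \sum_{t=1}^{T} \ell_t(i),$$

and the realizable setting assumes $\min_i \sum_{t=1}^{T} \ell_t(i) = 0$, i.e., some expert never errs, so the regret equals the algorithm's cumulative loss.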

Related readings and updates.

Private Online Prediction from Experts: Separations and Faster Rates

*=Equal Contributors

Online prediction from experts is a fundamental problem in machine learning, and several works have studied this problem under privacy constraints. We propose and analyze new algorithms for this problem that improve over the regret bounds of the best existing algorithms for non-adaptive adversaries. For approximate differential privacy, our algorithms achieve regret bounds of $O(\sqrt{T \log d} + \log d/\varepsilon)$…

Private Stochastic Convex Optimization: Optimal Rates in ℓ1 Geometry

Stochastic convex optimization over an $\ell_1$-bounded domain is ubiquitous in machine learning applications such as LASSO, but remains poorly understood when learning with differential privacy. We show that, up to logarithmic factors, the optimal excess population loss of any $(\varepsilon, \delta)$-differentially private optimizer is $\sqrt{\log(d)/n} + \sqrt{d}/(\varepsilon n)$. The upper bound is based on…
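One way to read this rate (a standard interpretation, not a quote from the paper): the first term, $\sqrt{\log(d)/n}$, matches the non-private excess-loss rate for $\ell_1$-bounded stochastic convex optimization, while the second, $\sqrt{d}/(\varepsilon n)$, is the additional cost of $(\varepsilon,\delta)$-privacy. Privacy is therefore nearly free once

$$\frac{\sqrt{d}}{\varepsilon n} \;\lesssim\; \sqrt{\frac{\log d}{n}} \quad\Longleftrightarrow\quad n \;\gtrsim\; \frac{d}{\varepsilon^2 \log d}.$$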