Faster Algorithms for User-Level Private Stochastic Convex Optimization
Authors: Andrew Lowy, Daogao Liu, Hilal Asi
We study private stochastic convex optimization (SCO) under user-level differential privacy (DP) constraints. In this setting, there are $n$ users, each possessing $m$ data items, and we need to protect the privacy of each user's entire collection of data items. Existing algorithms for user-level DP SCO are impractical in many large-scale machine learning scenarios because: (i) they make restrictive assumptions on the smoothness parameter of the loss function and require the number of users to grow polynomially with the dimension of the parameter space; or (ii) they are prohibitively slow, requiring at least $(mn)^{3/2}$ gradient computations for smooth losses and $(mn)^3$ computations for non-smooth losses. To address these limitations, we provide novel user-level DP algorithms with state-of-the-art excess risk and runtime guarantees, without stringent assumptions. First, we develop a linear-time algorithm with state-of-the-art excess risk (for a non-trivial linear-time algorithm) under a mild smoothness assumption. Our second algorithm applies to arbitrary smooth losses and achieves optimal excess risk in $\approx (mn)^{9/8}$ gradient computations. Third, for non-smooth loss functions, we obtain optimal excess risk in $n^{11/8} m^{5/4}$ gradient computations. Moreover, our algorithms do not require the number of users to grow polynomially with the dimension.
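
For intuition about the user-level DP setting described above, the following is a minimal Python/NumPy sketch of a generic clipped-and-noised per-user gradient step for SCO. It is not one of the algorithms from this paper: the squared loss, clip_norm, and noise_multiplier are illustrative assumptions, included only to show how each user's entire collection of $m$ items is given bounded influence on every released gradient.

```python
import numpy as np

def user_level_dp_gradient(user_datasets, w, clip_norm, noise_multiplier, rng):
    """One noisy gradient of the average squared loss, private at the user level.

    user_datasets: list of (X_u, y_u) pairs, one per user; X_u has shape (m, d).
    Replacing one user's entire dataset changes the clipped sum by at most
    2 * clip_norm, which sets the Gaussian noise scale below.
    """
    d = w.shape[0]
    clipped_sum = np.zeros(d)
    for X_u, y_u in user_datasets:
        # Average gradient of 0.5 * (x @ w - y)^2 over this user's m items.
        residual = X_u @ w - y_u
        g_u = X_u.T @ residual / len(y_u)
        # Clip the per-user average gradient so one user has bounded influence.
        g_u *= min(1.0, clip_norm / (np.linalg.norm(g_u) + 1e-12))
        clipped_sum += g_u
    noise = rng.normal(0.0, noise_multiplier * 2 * clip_norm, size=d)
    return (clipped_sum + noise) / len(user_datasets)

# Toy usage: n = 50 users, m = 20 items each, dimension d = 5.
rng = np.random.default_rng(0)
n, m, d = 50, 20, 5
w_true = rng.normal(size=d)
data = []
for _ in range(n):
    X = rng.normal(size=(m, d))
    data.append((X, X @ w_true + 0.1 * rng.normal(size=m)))

w = np.zeros(d)
for _ in range(200):  # plain gradient descent on the noisy gradient estimates
    w -= 0.1 * user_level_dp_gradient(data, w, clip_norm=5.0,
                                      noise_multiplier=0.5, rng=rng)
print("estimation error:", np.linalg.norm(w - w_true))
```

Each iteration touches all $mn$ items, so this baseline is exactly the kind of routine whose gradient-computation count the algorithms in the paper aim to reduce.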