
Motivated by the phenomenon of strategic agents gaming a recommendation system to maximize the number of times they are recommended to users, we study a strategic variant of the linear contextual bandit problem in which the arms strategically misreport privately observed contexts to the learner. We treat the algorithm design problem as one of mechanism design under uncertainty and propose the Optimistic Grim Trigger Mechanism (OptGTM), which minimizes regret while simultaneously incentivizing the agents to be approximately truthful. We show that OptGTM achieves sublinear regret despite the agents being unrestricted in their ability to game the learning algorithm by misreporting contexts. We also show that failing to account for the strategic nature of the agents results in linear regret. However, a trade-off between incentive compatibility and regret minimization is shown to be unavoidable. More broadly, this work provides insight into the intersection of online learning and mechanism design.
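The abstract does not spell out how OptGTM operates, but the general flavor of combining optimism with a grim-trigger style exclusion rule can be illustrated with a toy sketch. The snippet below is an assumption-laden illustration rather than the paper's actual mechanism: it runs a LinUCB-style optimistic index on reported contexts and permanently drops an arm whose reports become grossly inconsistent with the rewards it produces. The strategic reporting model, parameter values, and the inconsistency test are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_arms, T = 5, 4, 2000
theta_star = rng.normal(size=d)
theta_star /= np.linalg.norm(theta_star)

# Per-arm ridge-regression state (LinUCB-style optimism).
A = [np.eye(d) for _ in range(n_arms)]      # Gram matrices
b = [np.zeros(d) for _ in range(n_arms)]    # response vectors
active = [True] * n_arms                    # grim-trigger eligibility flags
alpha, trigger_threshold = 1.0, 3.0         # assumed tuning constants

for t in range(T):
    if not any(active):
        break
    true_ctx = rng.normal(size=(n_arms, d))
    # Hypothetical strategic reports: arms inflate their contexts.
    reported_ctx = true_ctx * rng.uniform(1.0, 1.3, size=(n_arms, 1))

    # Optimistic index computed from reported contexts, active arms only.
    ucb = np.full(n_arms, -np.inf)
    for k in range(n_arms):
        if not active[k]:
            continue
        A_inv = np.linalg.inv(A[k])
        theta_hat = A_inv @ b[k]
        x = reported_ctx[k]
        ucb[k] = x @ theta_hat + alpha * np.sqrt(x @ A_inv @ x)
    k = int(np.argmax(ucb))

    # The reward is generated from the *true* context.
    r = true_ctx[k] @ theta_star + 0.1 * rng.normal()

    # Update with the reported context (all the learner observes).
    x = reported_ctx[k]
    A[k] += np.outer(x, x)
    b[k] += r * x

    # Grim trigger: permanently exclude an arm whose reports are grossly
    # inconsistent with the rewards it has produced.
    A_inv = np.linalg.inv(A[k])
    theta_hat = A_inv @ b[k]
    if abs(x @ theta_hat - r) > trigger_threshold:
        active[k] = False
```

The point of the sketch is only the interplay of the two ingredients: optimism drives exploration, while the threat of permanent exclusion is what discourages arms from misreporting; the paper's formal guarantees concern its own mechanism, not this simplification.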

Related readings and updates.

Only Pay for What Is Uncertain: Variance-Adaptive Thompson Sampling

Most bandit algorithms assume that the reward variances or their upper bounds are known, and that they are the same for all arms. This naturally leads to suboptimal performance and higher regret due to variance overestimation. On the other hand, underestimated reward variances may lead to linear regret due to committing early to a suboptimal arm. This has motivated prior work on variance-adaptive frequentist algorithms, which have strong…
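To make the variance-adaptation idea concrete, here is a minimal sketch of Thompson sampling for Gaussian rewards with unknown per-arm mean and variance, using a conjugate Normal-Inverse-Gamma posterior. This is a generic illustration of adapting exploration to estimated variance, not the algorithm from the paper; the priors and the simulated environment are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
true_means = np.array([0.2, 0.5, 0.45])
true_stds = np.array([0.05, 0.6, 0.1])   # heterogeneous, unknown to the learner
K, T = len(true_means), 5000

# Normal-Inverse-Gamma posterior parameters per arm (assumed priors).
mu0, kappa, alpha, beta = np.zeros(K), np.ones(K), np.ones(K), np.ones(K)

for t in range(T):
    # Sample a variance, then a mean, from each arm's posterior.
    var = 1.0 / rng.gamma(alpha, 1.0 / beta)       # Inverse-Gamma sample
    mu = rng.normal(mu0, np.sqrt(var / kappa))
    k = int(np.argmax(mu))

    r = rng.normal(true_means[k], true_stds[k])

    # Conjugate Normal-Inverse-Gamma update for the played arm.
    kappa_new = kappa[k] + 1.0
    mu_new = (kappa[k] * mu0[k] + r) / kappa_new
    beta[k] += 0.5 * kappa[k] * (r - mu0[k]) ** 2 / kappa_new
    alpha[k] += 0.5
    mu0[k], kappa[k] = mu_new, kappa_new
```

Arms whose posterior variance is small stop being explored quickly, while high-variance arms keep receiving exploratory pulls, which is exactly the behavior that fixed-variance algorithms cannot deliver.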

Dynamic Memory for Interpretable Sequential Optimization

Real-world applications of reinforcement learning for recommendation and experimentation face a practical challenge: the relative reward of different bandit arms can evolve over the lifetime of the learning agent. To deal with these non-stationary cases, the agent must forget some historical knowledge, as it may no longer be relevant to minimizing regret. We present a solution to handling non-stationarity that is suitable for deployment at scale…
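As a small illustration of the "forget stale knowledge" principle (not the paper's dynamic-memory method), the sketch below runs discounted Thompson sampling on Bernoulli rewards whose means drift halfway through the run; the discount factor and reward schedule are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(2)
K, T, gamma = 3, 6000, 0.99          # gamma < 1 geometrically discounts old data

# Discounted Beta-Bernoulli posterior per arm.
alpha = np.ones(K)
beta = np.ones(K)

def true_means(t):
    # Reward probabilities drift: the best arm changes halfway through.
    return np.array([0.6, 0.4, 0.3]) if t < T // 2 else np.array([0.3, 0.4, 0.7])

for t in range(T):
    theta = rng.beta(alpha, beta)    # Thompson sample per arm
    k = int(np.argmax(theta))
    r = rng.binomial(1, true_means(t)[k])

    # Discount all counts toward the prior, then add the new observation:
    # old evidence decays, so the agent can track the post-change optimum.
    alpha = 1.0 + gamma * (alpha - 1.0)
    beta = 1.0 + gamma * (beta - 1.0)
    alpha[k] += r
    beta[k] += 1 - r
```

Without the discounting step, the posterior concentrates on the pre-change best arm and the agent keeps playing it after the change, which is the non-stationarity failure mode the blurb describes.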