Online commercial app marketplaces serve millions of apps to billions of users in an efficient manner. Bandit optimization algorithms are used to ensure that the recommendations are relevant and converge to the best performing content over time. However, directly applying bandits to real-world systems, where the catalog of items is dynamic and continuously refreshed, is not straightforward. One of the challenges we face is the existence of several competing content surfacing components, a phenomenon not unusual in large-scale recommender systems. This often leads to challenging scenarios where improving the recommendations in one component degrades the performance of another, i.e., “cannibalization”. To address this problem, we introduce an efficient two-layer bandit approach that is contextualized to user cohorts of similar taste. We mitigate cannibalization at runtime within a single multi-intent content surfacing platform by formalizing relevant offline evaluation metrics and by incorporating cross-component interactions into the bandit rewards. User engagement in our proposed system has more than doubled, as measured by online A/B testing.
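The abstract gives no implementation, but the core idea can be sketched: a first-layer bandit picks a surfacing component for a user cohort, a second-layer bandit picks content within that component, and the reward is discounted by the engagement the impression displaces from other components. Everything below (the Thompson-sampling choice, the names, the penalty weight) is an illustrative assumption, not the paper's actual system.

```python
import random
from collections import defaultdict

class BetaBandit:
    """Thompson sampling over a fixed arm set with Bernoulli-style rewards."""
    def __init__(self, arms):
        self.successes = {a: 1.0 for a in arms}  # Beta(1, 1) priors
        self.failures = {a: 1.0 for a in arms}

    def select(self):
        # Sample a plausible win rate per arm and play the argmax.
        return max(self.successes,
                   key=lambda a: random.betavariate(self.successes[a],
                                                    self.failures[a]))

    def update(self, arm, reward):
        # reward is expected in [0, 1].
        self.successes[arm] += reward
        self.failures[arm] += 1.0 - reward

class TwoLayerBandit:
    """Layer 1 chooses a surfacing component; layer 2 chooses content
    within that component."""
    def __init__(self, components, cannibalization_penalty=0.5):
        self.component_bandit = BetaBandit(list(components))
        self.content_bandits = {c: BetaBandit(items)
                                for c, items in components.items()}
        self.penalty = cannibalization_penalty  # hypothetical weight

    def recommend(self):
        component = self.component_bandit.select()
        return component, self.content_bandits[component].select()

    def feedback(self, component, item, engagement, displaced_engagement):
        # Cross-component-aware reward: credit engagement here, minus a
        # share of the engagement this impression displaced elsewhere.
        reward = max(0.0, min(1.0,
                              engagement - self.penalty * displaced_engagement))
        self.component_bandit.update(component, reward)
        self.content_bandits[component].update(item, reward)

# One bandit instance per cohort of users with similar taste.
cohort_bandits = defaultdict(lambda: TwoLayerBandit({
    "editorial": ["app_a", "app_b"],
    "trending": ["app_c", "app_d"],
}))

bandit = cohort_bandits["casual_gamers"]
component, item = bandit.recommend()
# engagement / displaced_engagement would come from logged interactions.
bandit.feedback(component, item, engagement=1.0, displaced_engagement=0.3)
```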

Related readings and updates.

When Can Accessibility Help? An Exploration of Accessibility Feature Recommendation on Mobile Devices

Numerous accessibility features have been developed and included in consumer operating systems to provide people with a variety of disabilities additional ways to access computing devices. Unfortunately, many users, especially older adults who are more likely to experience ability changes, are not aware of these features or do not know which combination to use. In this paper, we first quantify this problem via a survey with 100 participants…

Unsupervised Style and Content Separation by Minimizing Mutual Information for Speech Synthesis

We present a method to generate speech from input text and a style vector that is extracted from a reference speech signal in an unsupervised manner, i.e., no style annotation, such as speaker information, is required. Existing unsupervised methods, during training, generate speech by computing style from the corresponding ground truth sample and use a decoder to combine the style vector with the input text. Training the model in such a way leaks…
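As a rough illustration of the mutual-information minimization named in the title, the sketch below penalizes how much a content embedding reveals about the style vector, using a CLUB-style variational upper bound (Cheng et al., 2020). The estimator choice, dimensions, and names are assumptions for exposition; the paper's actual formulation is not reproduced here.

```python
import torch
import torch.nn as nn

class MIUpperBound(nn.Module):
    """Predicts the style vector from the content embedding; the gap in
    log-likelihood between paired and shuffled samples estimates an upper
    bound on their mutual information (CLUB, Cheng et al., 2020)."""
    def __init__(self, content_dim, style_dim, hidden=128):
        super().__init__()
        self.mu = nn.Sequential(nn.Linear(content_dim, hidden), nn.ReLU(),
                                nn.Linear(hidden, style_dim))

    def forward(self, content, style):
        mu = self.mu(content)                       # q(style | content), unit variance
        positive = -((style - mu) ** 2).sum(-1)     # log-density, paired samples
        shuffled = style[torch.randperm(style.size(0))]
        negative = -((shuffled - mu) ** 2).sum(-1)  # log-density, mismatched pairs
        return (positive - negative).mean()         # estimated MI upper bound

content = torch.randn(32, 256)  # content encoder output (assumed shape)
style = torch.randn(32, 64)     # style vector from the reference encoder
mi_penalty = MIUpperBound(256, 64)(content, style)
# A total loss would look like: reconstruction_loss + lambda_mi * mi_penalty,
# with the estimator itself trained to maximize the paired log-likelihood.
```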