Challenges of Adversarial Image Augmentations
Authors: Arno Blaas, Xavier Suau Cuadros, Jason Ramapuram, Luca Zappella, Nicholas Apostoloff
Image augmentations applied during training are crucial for the generalization performance of image classifiers. Therefore, a large body of research has focused on finding the optimal augmentation policy for a given task. Yet, RandAugment [2], a simple random augmentation policy, has recently been shown to outperform existing sophisticated policies. Only Adversarial AutoAugment (AdvAA) [11], an approach based on the idea of adversarial training, has been shown to be better than RandAugment. In this paper, we show that random augmentations are still competitive compared to an optimal adversarial approach, as well as to simple curricula, and we conjecture that the success of AdvAA is due to the stochasticity of the policy controller network, which introduces a mild form of curriculum.
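To make the distinction concrete, here is a minimal sketch of what a RandAugment-style random policy looks like: sample a fixed number of operations uniformly at random and apply each at a shared magnitude, with no learned controller. The op list, magnitude mapping, and function names below are illustrative assumptions, not the paper's or RandAugment's exact implementation.

```python
# Hedged sketch of a RandAugment-style random augmentation policy.
# The op set and magnitude scaling are illustrative, not the official ones.
import random
from PIL import Image, ImageEnhance, ImageOps

def apply_op(img: Image.Image, op: str, magnitude: float) -> Image.Image:
    """Apply one named augmentation at strength magnitude in [0, 1]."""
    if op == "rotate":
        return img.rotate(magnitude * 30)  # rotate by up to 30 degrees
    if op == "contrast":
        return ImageEnhance.Contrast(img).enhance(1.0 + magnitude)
    if op == "color":
        return ImageEnhance.Color(img).enhance(1.0 + magnitude)
    if op == "solarize":
        return ImageOps.solarize(img, threshold=int(256 * (1.0 - magnitude)))
    return img

OPS = ["rotate", "contrast", "color", "solarize"]

def rand_augment(img: Image.Image, n: int = 2, m: float = 0.5) -> Image.Image:
    """Pick n ops uniformly at random and apply each at magnitude m."""
    for op in random.choices(OPS, k=n):
        img = apply_op(img, op, m)
    return img
```

An adversarial policy such as AdvAA would instead train a controller network to propose the augmentations that currently maximize the training loss; the abstract's conjecture is that the controller's stochasticity, rather than its adversarial objective, supplies much of the benefit.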
Policy Maps: Tools for Guiding the Unbounded Space of LLM Behaviors
November 3, 2025 · Research areas: Data Science and Annotation, Human-Computer Interaction · Conference: UIST
AI policy sets boundaries on acceptable behavior for AI models, but this is challenging in the context of large language models (LLMs): how do you ensure coverage over a vast behavior space? We introduce policy maps, an approach to AI policy design inspired by the practice of physical mapmaking. Instead of aiming for full coverage, policy maps aid effective navigation through intentional design choices about which aspects to capture and which to…
SapAugment: Learning A Sample Adaptive Policy for Data Augmentation
June 1, 2021 · Research area: Speech and Natural Language Processing · Conference: ICASSP
Data augmentation methods usually apply the same augmentation (or a mix of augmentations) to all training samples. For example, to perturb data with noise, the noise is sampled from a Normal distribution with a fixed standard deviation for all samples. We hypothesize that a hard sample with high training loss already provides a strong training signal to update the model parameters and should be perturbed with mild or no augmentation. Perturbing a hard…
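A hedged sketch of the sample-adaptive idea described above: scale each sample's noise magnitude inversely with its current loss, so hard samples receive milder perturbation. The rank normalization, the `base_sigma` parameter, and the linear mapping are illustrative assumptions, not the SapAugment policy itself.

```python
# Illustrative sketch: per-sample Gaussian noise whose strength shrinks
# as the sample's training loss grows (hard sample -> mild augmentation).
import torch

def adaptive_noise(x: torch.Tensor, losses: torch.Tensor,
                   base_sigma: float = 0.1) -> torch.Tensor:
    """Perturb batch x with noise std scaled by each sample's loss rank."""
    # Rank-normalize losses to [0, 1], where 1 marks the hardest sample.
    ranks = losses.argsort().argsort().float() / max(losses.numel() - 1, 1)
    sigma = base_sigma * (1.0 - ranks)  # hardest sample gets the least noise
    # Broadcast the per-sample std over the remaining (e.g. C, H, W) dims.
    noise = torch.randn_like(x) * sigma.view(-1, *([1] * (x.dim() - 1)))
    return x + noise
```

In contrast to the fixed-standard-deviation baseline in the abstract, the perturbation here is a function of where each sample sits in the batch's loss ordering.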