View publication

Large Language Models (LLMs) have made substantial progress in the past several months, shattering state-of-the-art benchmarks in many domains. This paper investigates LLMs' behavior with respect to gender stereotypes, a known stumbling block for prior models. We propose a simple paradigm to test the presence of gender bias, building on but differing from WinoBias, a commonly used gender bias dataset which is likely to be included in the training data of current LLMs. We test four recently published LLMs and demonstrate that they express biased assumptions about men and women, specifically those aligned with people's perceptions, rather than those grounded in fact. We additionally study the explanations provided by the models for their choices. In addition to explanations that are explicitly grounded in stereotypes, we find that a significant proportion of explanations are factually inaccurate and likely obscure the true reason behind the models' choices. This highlights a key property of these models: LLMs are trained on unbalanced datasets; as such, even with reinforcement learning with human feedback, they tend to reflect those imbalances back at us. As with other types of societal biases, we suggest that LLMs must be carefully tested to ensure that they treat minoritized individuals and communities equitably.

Related readings and updates.

Leveraging Large Language Models for Exploiting ASR Uncertainty

With the help of creative prompt engineering and in-context learning, large language models (LLMs) are known to generalize well on a variety of text-based natural language processing (NLP) tasks. However, for performing well on spoken language understanding (SLU) tasks, LLMs either need to be equipped with in-built speech modality or they need to rely on speech-to-text conversion from an off-the-shelf automation speech recognition (ASR) system…
See paper details

DELPHI: Data for Evaluating LLMs' Performance in Handling Controversial Issues

*Equal Contributors Controversy is a reflection of our zeitgeist and an important aspect of any discourse. The rise of large language models (LLMs) as conversational systems has increased public reliance on these systems for answers to their various questions. Consequently, it is crucial to systematically examine how these models respond to questions that pertain to ongoing debates. However, few such datasets exist in providing human-annotated…
See paper details