Gender Bias in LLMs
AuthorsHadas Kotek, Rikker Dockum, David Q. Sun
AuthorsHadas Kotek, Rikker Dockum, David Q. Sun
Large Language Models (LLMs) have made substantial progress in the past several months, shattering state-of-the-art benchmarks in many domains. This paper investigates LLMs' behavior with respect to gender stereotypes, a known stumbling block for prior models. We propose a simple paradigm to test the presence of gender bias, building on but differing from WinoBias, a commonly used gender bias dataset which is likely to be included in the training data of current LLMs. We test four recently published LLMs and demonstrate that they express biased assumptions about men and women, specifically those aligned with people's perceptions, rather than those grounded in fact. We additionally study the explanations provided by the models for their choices. In addition to explanations that are explicitly grounded in stereotypes, we find that a significant proportion of explanations are factually inaccurate and likely obscure the true reason behind the models' choices. This highlights a key property of these models: LLMs are trained on unbalanced datasets; as such, even with reinforcement learning with human feedback, they tend to reflect those imbalances back at us. As with other types of societal biases, we suggest that LLMs must be carefully tested to ensure that they treat minoritized individuals and communities equitably.