
Social Bias Benchmark for Generation: A Comparison of Generation and QA-Based Evaluations

Annual Meeting of the Association for Computational Linguistics (ACL), 2025
Main: 2 pages · Bibliography: 1 page · Appendix: 11 pages · 5 figures · 8 tables
Abstract

Measuring social bias in large language models (LLMs) is crucial, but existing bias evaluation methods struggle to assess bias in long-form generation. We propose a Bias Benchmark for Generation (BBG), an adaptation of the Bias Benchmark for QA (BBQ), designed to evaluate social bias in long-form generation by having LLMs generate continuations of story prompts. Building our benchmark in English and Korean, we measure the probability of neutral and biased generations across ten LLMs. We also compare our long-form story generation evaluation results with multiple-choice BBQ evaluation, showing that the two approaches produce inconsistent results.
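As a rough illustration of the kind of measurement the abstract describes, the sketch below computes the share of neutral versus biased story continuations from a list of per-generation labels. The function name, label values, and labeling step are all hypothetical assumptions for illustration, not the paper's actual scoring procedure.

```python
from collections import Counter

def generation_bias_rates(labels):
    """Compute the share of neutral and biased story continuations.

    `labels` is a list of per-generation labels, e.g. produced by some
    classifier that marks each continuation as "neutral" or "biased"
    (the label names here are illustrative, not taken from the paper).
    """
    counts = Counter(labels)
    total = sum(counts.values())
    return {
        "neutral_rate": counts["neutral"] / total,
        "biased_rate": counts["biased"] / total,
    }

# Example: 7 of 10 continuations judged neutral
rates = generation_bias_rates(["neutral"] * 7 + ["biased"] * 3)
print(rates)  # {'neutral_rate': 0.7, 'biased_rate': 0.3}
```

Aggregating such rates per model would allow the generation-based scores to be compared against multiple-choice BBQ accuracy, which is the comparison the abstract reports as inconsistent.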
