ROBBIE: Robust Bias Evaluation of Large Generative Language Models
arXiv:2311.18140

29 November 2023
David Esiobu, X. Tan, Saghar Hosseini, Megan Ung, Yuchen Zhang, Jude Fernandes, Jane Dwivedi-Yu, Eleonora Presani, Adina Williams, Eric Michael Smith

Papers citing "ROBBIE: Robust Bias Evaluation of Large Generative Language Models"

15 papers
Evaluating and Improving Robustness in Large Language Models: A Survey and Future Directions
Kun Zhang, Le Wu, Kui Yu, Guangyi Lv, Dacao Zhang
08 Jun 2025

Unintended Harms of Value-Aligned LLMs: Psychological and Empirical Insights
Sooyung Choi, Jaehyeok Lee, Xiaoyuan Yi, Jing Yao, Xing Xie, JinYeong Bak
06 Jun 2025

DIF: A Framework for Benchmarking and Verifying Implicit Bias in LLMs
Lake Yin, Fan Huang
15 May 2025

GraphSeg: Segmented 3D Representations via Graph Edge Addition and Contraction
Haozhan Tang, Tianyi Zhang, Oliver Kroemer, Matthew Johnson-Roberson, Weiming Zhi
04 Apr 2025

Social Bias Benchmark for Generation: A Comparison of Generation and QA-Based Evaluations
Jiho Jin, Woosung Kang, Junho Myung, Alice Oh
10 Mar 2025

Fairness through Difference Awareness: Measuring Desired Group Discrimination in LLMs
Angelina Wang, Michelle Phan, Daniel E. Ho, Sanmi Koyejo
04 Feb 2025

LabSafety Bench: Benchmarking LLMs on Safety Issues in Scientific Labs
Yujun Zhou, Jingdong Yang, Yue Huang, Kehan Guo, Zoe Emory, ..., Tian Gao, Werner Geyer, Nuno Moniz, Nitesh Chawla, Xiangliang Zhang
18 Oct 2024

Post-hoc Study of Climate Microtargeting on Social Media Ads with LLMs: Thematic Insights and Fairness Evaluation
Tunazzina Islam, Dan Goldwasser
07 Oct 2024

STOP! Benchmarking Large Language Models with Sensitivity Testing on Offensive Progressions
Robert D Morabito, Sangmitra Madhusudan, Tyler McDonald, Ali Emami
20 Sep 2024

Are Large Language Models Really Bias-Free? Jailbreak Prompts for Assessing Adversarial Robustness to Bias Elicitation
Riccardo Cantini, Giada Cosenza, A. Orsino, Domenico Talia
11 Jul 2024

Beyond Perplexity: Multi-dimensional Safety Evaluation of LLM Compression
Zhichao Xu, Ashim Gupta, Tao Li, Oliver Bentham, Vivek Srikumar
06 Jul 2024

Raising the Bar: Investigating the Values of Large Language Models via Generative Evolving Testing
Han Jiang, Xiaoyuan Yi, Zhihua Wei, Ziang Xiao, Shu Wang, Xing Xie
20 Jun 2024

Knowledge-to-Jailbreak: Investigating Knowledge-driven Jailbreaking Attacks for Large Language Models
Shangqing Tu, Zhuoran Pan, Wenxuan Wang, Zhexin Zhang, Yuliang Sun, Jifan Yu, Hongning Wang, Lei Hou, Juanzi Li
17 Jun 2024

Exploring Subjectivity for more Human-Centric Assessment of Social Biases in Large Language Models
Paula Akemi Aoyagui, Sharon Ferguson, Anastasia Kuzminykh
17 May 2024

Look, Listen, and Answer: Overcoming Biases for Audio-Visual Question Answering
Jie Ma, Min Hu, Pinghui Wang, Wangchun Sun, Lingyun Song, Hongbin Pei, Jun Liu, Youtian Du
18 Apr 2024