Disentangling Perceptions of Offensiveness: Cultural and Moral Correlates
11 December 2023
Aida Mostafazadeh Davani, Mark Díaz, Dylan K. Baker, Vinodkumar Prabhakaran
Topics: AAML
arXiv: 2312.06861

Papers citing "Disentangling Perceptions of Offensiveness: Cultural and Moral Correlates"

9 citing papers shown:

Foundation Models at Work: Fine-Tuning for Fairness in Algorithmic Hiring
Buse Sibel Korkmaz, Rahul Nair, Elizabeth M. Daly, Evangelos Anagnostopoulos, Christos Varytimidis, Antonio del Rio Chanona
13 Jan 2025

GRASP: A Disagreement Analysis Framework to Assess Group Associations in Perspectives
Vinodkumar Prabhakaran, Christopher Homan, Lora Aroyo, Aida Mostafazadeh Davani, Alicia Parrish, Alex S. Taylor, Mark Díaz, Ding Wang, Greg Serapio-García
09 Nov 2023

Moral Foundations of Large Language Models
Marwa Abdulhai, Gregory Serapio-Garcia, Clément Crepy, Daria Valter, John Canny, Natasha Jaques
Topics: LRM
23 Oct 2023

A Human Rights-Based Approach to Responsible AI
Vinodkumar Prabhakaran, Margaret Mitchell, Timnit Gebru, Iason Gabriel
06 Oct 2022

Improving alignment of dialogue agents via targeted human judgements
Amelia Glaese, Nat McAleese, Maja Trębacz, John Aslanides, Vlad Firoiu, ..., John F. J. Mellor, Demis Hassabis, Koray Kavukcuoglu, Lisa Anne Hendricks, G. Irving
Topics: ALM, AAML
28 Sep 2022

How Different Groups Prioritize Ethical Values for Responsible AI
Maurice Jakesch, Zana Buçinca, Saleema Amershi, Alexandra Olteanu
16 May 2022

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
Topics: OSLM, ALM
04 Mar 2022

Extracting Training Data from Large Language Models
Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, D. Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel
Topics: MLAU, SILM
14 Dec 2020

Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets
Mor Geva, Yoav Goldberg, Jonathan Berant
21 Aug 2019