Sociotechnical Harms of Algorithmic Systems: Scoping a Taxonomy for Harm Reduction

11 October 2022
Renee Shelby, Shalaleh Rismani, Kathryn Henne, AJung Moon, Negar Rostamzadeh, P. Nicholas, N'Mah Yilla-Akbari, Jess Gallegos, A. Smart, Emilio Garcia, Gurleen Virk

Papers citing "Sociotechnical Harms of Algorithmic Systems: Scoping a Taxonomy for Harm Reduction"

50 of 88 papers shown
Crowding Out The Noise: Algorithmic Collective Action Under Differential Privacy
Rushabh Solanki, Meghana Bhange, Ulrich Aïvodji, Elliot Creager
09 May 2025

Opening the Scope of Openness in AI
Tamara Paris, AJung Moon, Jin Guo
09 May 2025

What Is AI Safety? What Do We Want It to Be?
Jacqueline Harding, Cameron Domenico Kirk-Giannini
05 May 2025

MetaHarm: Harmful YouTube Video Dataset Annotated by Domain Experts, GPT-4-Turbo, and Crowdworkers
Wonjeong Jo, Magdalena Wojcieszak
22 Apr 2025

Taxonomizing Representational Harms using Speech Act Theory
Emily Corvi, Hannah Washington, Stefanie Reed, Chad Atalla, Alexandra Chouldechova, ..., Nicholas Pangakis, Emily Sheng, Dan Vann, Matthew Vogel, Hanna M. Wallach
01 Apr 2025

The Backfiring Effect of Weak AI Safety Regulation
Benjamin Laufer, Jon Kleinberg, Hoda Heidari
26 Mar 2025

The Case for "Thick Evaluations" of Cultural Representation in AI
Rida Qadri, Mark Díaz, Ding Wang, Michael Madaio
24 Mar 2025
In-House Evaluation Is Not Enough: Towards Robust Third-Party Flaw Disclosure for General-Purpose AI
Shayne Longpre, Kevin Klyman, Ruth E. Appel, Sayash Kapoor, Rishi Bommasani, ..., Victoria Westerhoff, Yacine Jernite, Rumman Chowdhury, Percy Liang, Arvind Narayanan
21 Mar 2025

A Survey on Personalized Alignment -- The Missing Piece for Large Language Models in Real-World Applications
Jian-Yu Guan, J. Wu, J. Li, Chuanqi Cheng, Wei Yu Wu
21 Mar 2025

More of the Same: Persistent Representational Harms Under Increased Representation
Jennifer Mickel, Maria De-Arteaga, Leqi Liu, Kevin Tian
01 Mar 2025

AI Mismatches: Identifying Potential Algorithmic Harms Before AI Development
Devansh Saxena, Ji-Youn Jung, J. Forlizzi, Kenneth Holstein, J. Zimmerman
25 Feb 2025

Addressing the regulatory gap: moving towards an EU AI audit ecosystem beyond the AI Act by including civil society
David Hartmann, José Renato Laranjeira de Pereira, Chiara Streitbörger, Bettina Berendt
20 Feb 2025

Why human-AI relationships need socioaffective alignment
Hannah Rose Kirk, Iason Gabriel, Chris Summerfield, Bertie Vidgen, Scott A. Hale
04 Feb 2025
Lessons From Red Teaming 100 Generative AI Products
Blake Bullwinkel, Amanda Minnich, Shiven Chawla, Gary Lopez, Martin Pouliot, ..., Pete Bryan, Ram Shankar Siva Kumar, Yonatan Zunger, Chang Kawaguchi, Mark Russinovich
13 Jan 2025

Mitigating Trauma in Qualitative Research Infrastructure: Roles for Machine Assistance and Trauma-Informed Design
Emily Tseng, Thomas Ristenpart, Nicola Dell
22 Dec 2024

Beyond the Safety Bundle: Auditing the Helpful and Harmless Dataset
Khaoula Chehbouni, Jonathan Colaço-Carr, Yash More, Jackie CK Cheung, G. Farnadi
12 Nov 2024

Harmful YouTube Video Detection: A Taxonomy of Online Harm and MLLMs as Alternative Annotators
Claire Wonjeong Jo, Miki Wesołowska, Magdalena Wojcieszak
06 Nov 2024

"It's a conversation, not a quiz": A Risk Taxonomy and Reflection Tool for LLM Adoption in Public Health
Jiawei Zhou, Amy Z. Chen, Darshi Shah, Laura Schwab Reese, Munmun De Choudhury
04 Nov 2024

Towards Leveraging News Media to Support Impact Assessment of AI Technologies
Mowafak Allaham, Kimon Kieslich, Nicholas Diakopoulos
04 Nov 2024

Troubling Taxonomies in GenAI Evaluation
Glen Berman, Ned Cooper, Wesley Hanwen Deng, Ben Hutchinson
30 Oct 2024
GPT-4o System Card
OpenAI: Aaron Hurst, Adam Lerer, Adam P. Goucher, ..., Yuchen He, Yuchen Zhang, Yujia Jin, Yunxing Dai, Yury Malkov
25 Oct 2024

Sound Check: Auditing Audio Datasets
William Agnew, Julia Barnett, Annie Chu, Rachel Hong, Michael Feffer, Robin Netzorg, Harry H. Jiang, Ezra Awumey, Sauvik Das
17 Oct 2024

Building Solidarity Amid Hostility: Experiences of Fat People in Online Communities
Blakeley H. Payne, Jordan Taylor, Katta Spiel, Casey Fiesler
06 Oct 2024

Generative AI and Perceptual Harms: Who's Suspected of using LLMs?
Kowe Kadoma, D. Metaxa, Mor Naaman
01 Oct 2024

'Simulacrum of Stories': Examining Large Language Models as Qualitative Research Participants
Shivani Kapania, William Agnew, Motahhare Eslami, Hoda Heidari, Sarah E Fox
28 Sep 2024

Lessons for Editors of AI Incidents from the AI Incident Database
Kevin Paeth, Daniel Atherton, Nikiforos Pittaras, Heather Frase, Sean McGregor
24 Sep 2024
AI Suggestions Homogenize Writing Toward Western Styles and Diminish Cultural Nuances
Dhruv Agarwal, Mor Naaman, Aditya Vashistha
17 Sep 2024

Acceptable Use Policies for Foundation Models
Kevin Klyman
29 Aug 2024

Bridging Research and Practice Through Conversation: Reflecting on Our Experience
Mayra Russo, Mackenzie Jorgensen, Kristen M. Scott, Wendy Xu, Di H. Nguyen, Jessie Finocchiaro, Matthew Olckers
25 Aug 2024

Misfitting With AI: How Blind People Verify and Contest AI Errors
Rahaf Alharbi, P. Lor, Jaylin Herskovitz, S. Schoenebeck, Robin Brewer
13 Aug 2024

Supporting Industry Computing Researchers in Assessing, Articulating, and Addressing the Potential Negative Societal Impact of Their Work
Wesley Hanwen Deng, Solon Barocas, Jennifer Wortman Vaughan
02 Aug 2024

Co-designing an AI Impact Assessment Report Template with AI Practitioners and AI Compliance Experts
Edyta Bogucka, Marios Constantinides, S. Šćepanović, Daniele Quercia
24 Jul 2024

AI Safety in Generative AI Large Language Models: A Survey
Jaymari Chua, Yun Yvonna Li, Shiyi Yang, Chen Wang, Lina Yao
06 Jul 2024

Exploring LGBTQ+ Bias in Generative AI Answers across Different Country and Religious Contexts
L. Vicsek, Anna Vancsó, Mike Zajko, Judit Takacs
03 Jul 2024
A Collaborative, Human-Centred Taxonomy of AI, Algorithmic, and Automation Harms
Gavin Abercrombie, Djalel Benbouzid, Paolo Giudici, Delaram Golpayegani, Julio Hernandez, ..., Ushnish Sengupta, Arthit Suriyawongful, Ruby Thelot, Sofia Vei, Laura Waltersdorfer
01 Jul 2024

Leveraging Ontologies to Document Bias in Data
Mayra Russo, Maria-Esther Vidal
29 Jun 2024

AI Alignment through Reinforcement Learning from Human Feedback? Contradictions and Limitations
Adam Dahlgren Lindstrom, Leila Methnani, Lea Krause, Petter Ericson, Ínigo Martínez de Rituerto de Troya, Dimitri Coelho Mollo, Roel Dobbe
26 Jun 2024

Data-Centric AI in the Age of Large Language Models
Xinyi Xu, Zhaoxuan Wu, Rui Qiao, Arun Verma, Yao Shu, ..., Xiaoqiang Lin, Wenyang Hu, Zhongxiang Dai, Pang Wei Koh, Bryan Kian Hsiang Low
20 Jun 2024

GPT is Not an Annotator: The Necessity of Human Annotation in Fairness Benchmark Construction
Virginia K. Felkner, Jennifer A. Thompson, Jonathan May
24 May 2024

Push and Pull: A Framework for Measuring Attentional Agency on Digital Platforms
Zachary Wojtowicz, Shrey Jain, Nicholas Vincent
23 May 2024
Simulating Policy Impacts: Developing a Generative Scenario Writing Method to Evaluate the Perceived Effects of Regulation
Julia Barnett, Kimon Kieslich, Nicholas Diakopoulos
15 May 2024

Who's in and who's out? A case study of multimodal CLIP-filtering in DataComp
Rachel Hong, William Agnew, Tadayoshi Kohno, Jamie Morgenstern
13 May 2024

The Psychosocial Impacts of Generative AI Harms
Faye-Marie Vassel, Evan Shieh, Cassidy R. Sugimoto, T. Monroe-White
02 May 2024

"I'm Not Sure, But...": Examining the Impact of Large Language Models' Uncertainty Expression on User Reliance and Trust
Sunnie S. Y. Kim, Q. V. Liao, Mihaela Vorvoreanu, Steph Ballard, Jennifer Wortman Vaughan
01 May 2024

Near to Mid-term Risks and Opportunities of Open-Source Generative AI
Francisco Eiras, Aleksandar Petrov, Bertie Vidgen, Christian Schroeder de Witt, Fabio Pizzati, ..., Paul Röttger, Philip H. S. Torr, Trevor Darrell, Y. Lee, Jakob N. Foerster
25 Apr 2024

Holistic Safety and Responsibility Evaluations of Advanced AI Models
Laura Weidinger, Joslyn Barnhart, Jenny Brennan, Christina Butterfield, Susie Young, ..., Sebastian Farquhar, Lewis Ho, Iason Gabriel, Allan Dafoe, William S. Isaac
22 Apr 2024
The Fall of an Algorithm: Characterizing the Dynamics Toward Abandonment
Nari Johnson, Sanika Moharana, Christina Harrington, Nazanin Andalibi, Hoda Heidari, Motahhare Eslami
21 Apr 2024

Laissez-Faire Harms: Algorithmic Biases in Generative Language Models
Evan Shieh, Faye-Marie Vassel, Cassidy R. Sugimoto, T. Monroe-White
11 Apr 2024

GUARD-D-LLM: An LLM-Based Risk Assessment Engine for the Downstream uses of LLMs
Sundaraparipurnan Narayanan, Sandeep Vishwakarma
02 Apr 2024

From Representational Harms to Quality-of-Service Harms: A Case Study on Llama 2 Safety Safeguards
Khaoula Chehbouni, Megha Roshan, Emmanuel Ma, Futian Andrew Wei, Afaf Taik, Jackie CK Cheung, G. Farnadi
20 Mar 2024