Farsight: Fostering Responsible AI Awareness During AI Application Prototyping

23 February 2024
Zijie J. Wang, Chinmay Kulkarni, Lauren Wilcox, Michael Terry, Michael A. Madaio

Papers citing "Farsight: Fostering Responsible AI Awareness During AI Application Prototyping"

All 14 citing papers are shown below.

1. AI Awareness. X. Li, Haoyuan Shi, Rongwu Xu, Wei Xu. 25 Apr 2025.
2. Understanding the LLM-ification of CHI: Unpacking the Impact of LLMs at CHI through a Systematic Literature Review. Rock Yuren Pang, Hope Schroeder, Kynnedy Simone Smith, Solon Barocas, Ziang Xiao, Emily Tseng, Danielle Bragg. 22 Jan 2025.
3. The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice. Fernando Delgado, Stephen Yang, Michael A. Madaio, Qian Yang. 02 Oct 2023.
4. On the Risk of Misinformation Pollution with Large Language Models. Yikang Pan, Liangming Pan, Wenhu Chen, Preslav Nakov, Min-Yen Kan, W. Wang. Communities: DeLMO. 23 May 2023.
5. Walking the Walk of AI Ethics: Organizational Challenges and the Individualization of Risk among Ethics Entrepreneurs. Sanna Ali, Angèle Christin, A. Smart, R. Katila. 16 May 2023.
6. From plane crashes to algorithmic harm: applicability of safety engineering frameworks for responsible ML. Shalaleh Rismani, Renee Shelby, A. Smart, Edgar W. Jatho, Joshua A. Kroll, AJung Moon, Negar Rostamzadeh. 06 Oct 2022.
7. Improving alignment of dialogue agents via targeted human judgements. Amelia Glaese, Nat McAleese, Maja Trębacz, John Aslanides, Vlad Firoiu, ..., John F. J. Mellor, Demis Hassabis, Koray Kavukcuoglu, Lisa Anne Hendricks, G. Irving. Communities: ALM, AAML. 28 Sep 2022.
8. Moral Mimicry: Large Language Models Produce Moral Rationalizations Tailored to Political Identity. Gabriel Simmons. 24 Sep 2022.
9. Interpretability, Then What? Editing Machine Learning Models to Reflect Human Knowledge and Values. Zijie J. Wang, Alex Kale, Harsha Nori, P. Stella, M. Nunnally, Duen Horng Chau, Mihaela Vorvoreanu, J. W. Vaughan, R. Caruana. Communities: KELM. 30 Jun 2022.
10. Challenges in Detoxifying Language Models. Johannes Welbl, Amelia Glaese, J. Uesato, Sumanth Dathathri, John F. J. Mellor, Lisa Anne Hendricks, Kirsty Anderson, Pushmeet Kohli, Ben Coppin, Po-Sen Huang. Communities: LM&MA. 15 Sep 2021.
11. Self-Diagnosis and Self-Debiasing: A Proposal for Reducing Corpus-Based Bias in NLP. Timo Schick, Sahana Udupa, Hinrich Schütze. 28 Feb 2021.
12. Extracting Training Data from Large Language Models. Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, D. Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel. Communities: MLAU, SILM. 14 Dec 2020.
13. Improving fairness in machine learning systems: What do industry practitioners need? Kenneth Holstein, Jennifer Wortman Vaughan, Hal Daumé, Miroslav Dudík, Hanna M. Wallach. Communities: FaML, HAI. 13 Dec 2018.
14. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Alexandra Chouldechova. Communities: FaML. 24 Oct 2016.