Actionable Guidance for High-Consequence AI Risk Management: Towards Standards Addressing AI Catastrophic Risks
arXiv:2206.08966, 17 June 2022
Anthony M. Barrett, Dan Hendrycks, Jessica Newman, Brandie Nonnecke
Papers citing "Actionable Guidance for High-Consequence AI Risk Management: Towards Standards Addressing AI Catastrophic Risks"
5 / 5 papers shown
GUARD-D-LLM: An LLM-Based Risk Assessment Engine for the Downstream uses of LLMs
Sundaraparipurnan Narayanan, Sandeep Vishwakarma
02 Apr 2024
Open-Sourcing Highly Capable Foundation Models: An evaluation of risks, benefits, and alternative methods for pursuing open-source objectives
Elizabeth Seger, Noemi Dreksler, Richard Moulange, Emily Dardaman, Jonas Schuett, ..., Emma Bluemke, Michael Aird, Patrick Levermore, Julian Hazell, Abhishek Gupta
29 Sep 2023
Can LLM-Generated Misinformation Be Detected?
Canyu Chen, Kai Shu
25 Sep 2023
Both eyes open: Vigilant Incentives help Regulatory Markets improve AI Safety
Paolo Bova, A. D. Stefano, H. Anh
06 Mar 2023
Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
04 Mar 2022