arXiv: 2408.02651
Can Reinforcement Learning Unlock the Hidden Dangers in Aligned Large Language Models?
Mohammad Bahrami Karkevandi, Nishant Vishwamitra, Peyman Najafirad
5 August 2024 · AAML
Papers citing "Can Reinforcement Learning Unlock the Hidden Dangers in Aligned Large Language Models?" (5 papers)
GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts
Jiahao Yu, Xingwei Lin, Zheng Yu, Xinyu Xing
SILM · 19 Sep 2023
Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM · ALM · 04 Mar 2022
Challenges in Detoxifying Language Models
Johannes Welbl, Amelia Glaese, J. Uesato, Sumanth Dathathri, John F. J. Mellor, Lisa Anne Hendricks, Kirsty Anderson, Pushmeet Kohli, Ben Coppin, Po-Sen Huang
LM&MA · 15 Sep 2021
The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
VPVLM · 18 Apr 2021
Generating Natural Language Adversarial Examples
M. Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani B. Srivastava, Kai-Wei Chang
AAML · 21 Apr 2018