Universal and Transferable Adversarial Attacks on Aligned Language Models

27 July 2023
Andy Zou, Zifan Wang, Nicholas Carlini, Milad Nasr, J. Zico Kolter, Matt Fredrikson

Papers citing "Universal and Transferable Adversarial Attacks on Aligned Language Models"

50 / 938 papers shown

Latent Guard: a Safety Framework for Text-to-image Generation
Runtao Liu, Ashkan Khakzar, Jindong Gu, Qifeng Chen, Philip H. S. Torr, Fabio Pizzati
21 · 23 · 0 · 11 Apr 2024

AmpleGCG: Learning a Universal and Transferable Generative Model of Adversarial Suffixes for Jailbreaking Both Open and Closed LLMs
Zeyi Liao, Huan Sun
AAML · 39 · 73 · 0 · 11 Apr 2024

Best Practices and Lessons Learned on Synthetic Data for Language Models
Ruibo Liu, Jerry W. Wei, Fangyu Liu, Chenglei Si, Yanzhe Zhang, ..., Steven Zheng, Daiyi Peng, Diyi Yang, Denny Zhou, Andrew M. Dai
SyDa, EgoV · 41 · 85 · 0 · 11 Apr 2024

Sandwich attack: Multi-language Mixture Adaptive Attack on LLMs
Bibek Upadhayay, Vahid Behzadan
AAML · 21 · 13 · 0 · 09 Apr 2024

Rethinking How to Evaluate Language Model Jailbreak
Hongyu Cai, Arjun Arunasalam, Leo Y. Lin, Antonio Bianchi, Z. Berkay Celik
ALM · 32 · 5 · 0 · 09 Apr 2024

On adversarial training and the 1 Nearest Neighbor classifier
Amir Hagai, Yair Weiss
AAML · 50 · 0 · 0 · 09 Apr 2024

CodecLM: Aligning Language Models with Tailored Synthetic Data
Zifeng Wang, Chun-Liang Li, Vincent Perot, Long T. Le, Jin Miao, Zizhao Zhang, Chen-Yu Lee, Tomas Pfister
SyDa, ALM · 23 · 17 · 0 · 08 Apr 2024

Have You Merged My Model? On The Robustness of Large Language Model IP Protection Methods Against Model Merging
Tianshuo Cong, Delong Ran, Zesen Liu, Xinlei He, Jinyuan Liu, Yichen Gong, Qi Li, Anyu Wang, Xiaoyun Wang
MoMe · 38 · 7 · 0 · 08 Apr 2024

Capabilities of Large Language Models in Control Engineering: A Benchmark Study on GPT-4, Claude 3 Opus, and Gemini 1.0 Ultra
Darioush Kevian, U. Syed, Xing-ming Guo, Aaron J. Havens, Geir Dullerud, Peter M. Seiler, Lianhui Qin, Bin Hu
ELM · 36 · 29 · 0 · 04 Apr 2024

Embodied AI with Two Arms: Zero-shot Learning, Safety and Modularity
Jacob Varley, Sumeet Singh, Deepali Jain, Krzysztof Choromanski, Andy Zeng, Somnath Basu Roy Chowdhury, Kumar Avinava Dubey, Vikas Sindhwani
LM&Ro · 20 · 14 · 0 · 04 Apr 2024

Vocabulary Attack to Hijack Large Language Model Applications
Patrick Levi, Christoph P. Neumann
AAML · 16 · 8 · 0 · 03 Apr 2024

Risks from Language Models for Automated Mental Healthcare: Ethics and Structure for Implementation
D. Grabb, Max Lamparth, N. Vasan
40 · 14 · 0 · 02 Apr 2024

Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks
Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion
AAML · 81 · 158 · 0 · 02 Apr 2024

Great, Now Write an Article About That: The Crescendo Multi-Turn LLM Jailbreak Attack
M. Russinovich, Ahmed Salem, Ronen Eldan
39 · 77 · 0 · 02 Apr 2024

Interpreting Key Mechanisms of Factual Recall in Transformer-Based Language Models
Ang Lv, Yuhan Chen, Kaiyi Zhang, Yulong Wang, Lifeng Liu, Ji-Rong Wen, Jian Xie, Rui Yan
KELM · 32 · 16 · 0 · 28 Mar 2024

JailbreakBench: An Open Robustness Benchmark for Jailbreaking Large Language Models
Patrick Chao, Edoardo Debenedetti, Alexander Robey, Maksym Andriushchenko, Francesco Croce, ..., Nicolas Flammarion, George J. Pappas, F. Tramèr, Hamed Hassani, Eric Wong
ALM, ELM, AAML · 52 · 94 · 0 · 28 Mar 2024

Automated Black-box Prompt Engineering for Personalized Text-to-Image Generation
Yutong He, Alexander Robey, Naoki Murata, Yiding Jiang, J. Williams, George Pappas, Hamed Hassani, Yuki Mitsufuji, Ruslan Salakhutdinov, J. Zico Kolter
DiffM · 94 · 4 · 0 · 28 Mar 2024

Can multiple-choice questions really be useful in detecting the abilities of LLMs?
Wangyue Li, Liangzhi Li, Tong Xiang, Xiao Liu, Wei Deng, Noa Garcia
ELM · 42 · 28 · 0 · 26 Mar 2024

Optimization-based Prompt Injection Attack to LLM-as-a-Judge
Jiawen Shi, Zenghui Yuan, Yinuo Liu, Yue Huang, Pan Zhou, Lichao Sun, Neil Zhenqiang Gong
AAML · 43 · 39 · 0 · 26 Mar 2024

The Strong Pull of Prior Knowledge in Large Language Models and Its Impact on Emotion Recognition
Georgios Chochlakis, Alexandros Potamianos, Kristina Lerman, Shrikanth Narayanan
27 · 5 · 0 · 25 Mar 2024

Language Models in Dialogue: Conversational Maxims for Human-AI Interactions
Erik Miehling, Manish Nagireddy, P. Sattigeri, Elizabeth M. Daly, David Piorkowski, John T. Richards
ALM · 29 · 11 · 0 · 22 Mar 2024

Testing the Limits of Jailbreaking Defenses with the Purple Problem
Taeyoun Kim, Suhas Kotha, Aditi Raghunathan
AAML · 36 · 6 · 0 · 20 Mar 2024

Defending Against Indirect Prompt Injection Attacks With Spotlighting
Keegan Hines, Gary Lopez, Matthew Hall, Federico Zarfati, Yonatan Zunger, Emre Kiciman
AAML, SILM · 24 · 37 · 0 · 20 Mar 2024

As Firm As Their Foundations: Can open-sourced foundation models be used to create adversarial examples for downstream tasks?
Anjun Hu, Jindong Gu, Francesco Pinto, Konstantinos Kamnitsas, Philip H. S. Torr
AAML, SILM · 32 · 5 · 0 · 19 Mar 2024

RigorLLM: Resilient Guardrails for Large Language Models against Undesired Content
Zhuowen Yuan, Zidi Xiong, Yi Zeng, Ning Yu, Ruoxi Jia, D. Song, Bo-wen Li
AAML, KELM · 40 · 38 · 0 · 19 Mar 2024

Embodied LLM Agents Learn to Cooperate in Organized Teams
Xudong Guo, Kaixuan Huang, Jiale Liu, Wenhui Fan, Natalia Vélez, Qingyun Wu, Huazheng Wang, Thomas L. Griffiths, Mengdi Wang
LM&Ro, LLMAG · 46 · 38 · 0 · 19 Mar 2024

SelfIE: Self-Interpretation of Large Language Model Embeddings
Haozhe Chen, Carl Vondrick, Chengzhi Mao
19 · 17 · 0 · 16 Mar 2024

Mitigating Dialogue Hallucination for Large Vision Language Models via Adversarial Instruction Tuning
Dongmin Park, Zhaofang Qian, Guangxing Han, Ser-Nam Lim
MLLM · 35 · 0 · 0 · 15 Mar 2024

Towards White Box Deep Learning
Maciej Satkiewicz
AAML · 32 · 1 · 0 · 14 Mar 2024

Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking Multimodal Large Language Models
Yifan Li, Hangyu Guo, Kun Zhou, Wayne Xin Zhao, Ji-Rong Wen
50 · 38 · 0 · 14 Mar 2024

An Image Is Worth 1000 Lies: Adversarial Transferability across Prompts on Vision-Language Models
Haochen Luo, Jindong Gu, Fengyuan Liu, Philip H. S. Torr
VLM, VPVLM, AAML · 47 · 20 · 0 · 14 Mar 2024

A Moral Imperative: The Need for Continual Superalignment of Large Language Models
Gokul Puthumanaillam, Manav Vora, Pranay Thangeda, Melkior Ornik
29 · 7 · 0 · 13 Mar 2024

CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion
Qibing Ren, Chang Gao, Jing Shao, Junchi Yan, Xin Tan, Wai Lam, Lizhuang Ma
ALM, ELM, AAML · 42 · 21 · 0 · 12 Mar 2024

Can LLMs Separate Instructions From Data? And What Do We Even Mean By That?
Egor Zverev, Sahar Abdelnabi, Soroush Tabesh, Mario Fritz, Christoph H. Lampert
43 · 19 · 0 · 11 Mar 2024

Detectors for Safe and Reliable LLMs: Implementations, Uses, and Limitations
Swapnaja Achintalwar, Adriana Alvarado Garcia, Ateret Anaby-Tavor, Ioana Baldini, Sara E. Berger, ..., Aashka Trivedi, Kush R. Varshney, Dennis L. Wei, Shalisha Witherspoon, Marcel Zalmanovici
25 · 10 · 0 · 09 Mar 2024

Defending Against Unforeseen Failure Modes with Latent Adversarial Training
Stephen Casper, Lennart Schulze, Oam Patel, Dylan Hadfield-Menell
AAML · 49 · 27 · 0 · 08 Mar 2024

Automatic and Universal Prompt Injection Attacks against Large Language Models
Xiaogeng Liu, Zhiyuan Yu, Yizhe Zhang, Ning Zhang, Chaowei Xiao
SILM, AAML · 38 · 33 · 0 · 07 Mar 2024

A Safe Harbor for AI Evaluation and Red Teaming
Shayne Longpre, Sayash Kapoor, Kevin Klyman, Ashwin Ramaswami, Rishi Bommasani, ..., Daniel Kang, Sandy Pentland, Arvind Narayanan, Percy Liang, Peter Henderson
49 · 38 · 0 · 07 Mar 2024

MedSafetyBench: Evaluating and Improving the Medical Safety of Large Language Models
Tessa Han, Aounon Kumar, Chirag Agarwal, Himabindu Lakkaraju
ELM, LM&MA, AI4MH · 31 · 4 · 0 · 06 Mar 2024

The WMDP Benchmark: Measuring and Reducing Malicious Use With Unlearning
Nathaniel Li, Alexander Pan, Anjali Gopal, Summer Yue, Daniel Berrios, ..., Yan Shoshitaishvili, Jimmy Ba, K. Esvelt, Alexandr Wang, Dan Hendrycks
ELM · 43 · 140 · 0 · 05 Mar 2024

Here Comes The AI Worm: Unleashing Zero-click Worms that Target GenAI-Powered Applications
Stav Cohen, Ron Bitton, Ben Nassi
30 · 17 · 0 · 05 Mar 2024

InjecAgent: Benchmarking Indirect Prompt Injections in Tool-Integrated Large Language Model Agents
Qiusi Zhan, Zhixiang Liang, Zifan Ying, Daniel Kang
LLMAG · 44 · 72 · 0 · 05 Mar 2024

Alpaca against Vicuna: Using LLMs to Uncover Memorization of LLMs
Aly M. Kassem, Omar Mahmoud, Niloofar Mireshghallah, Hyunwoo J. Kim, Yulia Tsvetkov, Yejin Choi, Sherif Saad, Santu Rana
47 · 18 · 0 · 05 Mar 2024

Enhancing LLM Safety via Constrained Direct Preference Optimization
Zixuan Liu, Xiaolin Sun, Zizhan Zheng
33 · 20 · 0 · 04 Mar 2024

Breaking Down the Defenses: A Comparative Survey of Attacks on Large Language Models
Arijit Ghosh Chowdhury, Md. Mofijul Islam, Vaibhav Kumar, F. H. Shezan, Vaibhav Kumar, Vinija Jain, Aman Chadha
AAML, PILM · 34 · 29 · 0 · 03 Mar 2024

AutoDefense: Multi-Agent LLM Defense against Jailbreak Attacks
Yifan Zeng, Yiran Wu, Xiao Zhang, Huazheng Wang, Qingyun Wu
LLMAG, AAML · 40 · 59 · 0 · 02 Mar 2024

Accelerating Greedy Coordinate Gradient via Probe Sampling
Yiran Zhao, Wenyue Zheng, Tianle Cai, Xuan Long Do, Kenji Kawaguchi, Anirudh Goyal, Michael Shieh
43 · 2 · 0 · 02 Mar 2024

Gradient Cuff: Detecting Jailbreak Attacks on Large Language Models by Exploring Refusal Loss Landscapes
Xiaomeng Hu, Pin-Yu Chen, Tsung-Yi Ho
AAML · 24 · 26 · 0 · 01 Mar 2024

Curiosity-driven Red-teaming for Large Language Models
Zhang-Wei Hong, Idan Shenfeld, T. Wang, Yung-Sung Chuang, Aldo Pareja, James R. Glass, Akash Srivastava, Pulkit Agrawal
LRM · 28 · 39 · 0 · 29 Feb 2024

Exploring the Potential of Large Language Models for Improving Digital Forensic Investigation Efficiency
Akila Wickramasekara, F. Breitinger, Mark Scanlon
42 · 8 · 0 · 29 Feb 2024

Page 13 of 19