Jailbreaking and Mitigation of Vulnerabilities in Large Language Models
arXiv:2410.15236 · 20 October 2024
Authors: Benji Peng, Ziqian Bi, Qian Niu, Ming Liu, Pohsun Feng, Tianyang Wang, Lawrence K. Q. Yan, Yizhu Wen, Y. Zhang, Caitlyn Heqi Yin
Category: AAML
Papers citing "Jailbreaking and Mitigation of Vulnerabilities in Large Language Models" (5 papers):
XBreaking: Explainable Artificial Intelligence for Jailbreaking LLMs
Marco Arazzi, Vignesh Kumar Kembu, Antonino Nocera, V. P. — 30 Apr 2025
Model Risk Management for Generative AI In Financial Institutions
Anwesha Bhattacharyya, Ye Yu, Hanyu Yang, Rahul Singh, Tarun Joshi, Jie Chen, Kiran Yalavarthy — 19 Mar 2025 (AIFin, MedIm)
Probing Latent Subspaces in LLM for AI Security: Identifying and Manipulating Adversarial States
Xin Wei Chia, Jonathan Pan — 12 Mar 2025 (AAML)
From Pixels to Prose: Advancing Multi-Modal Language Models for Remote Sensing
X. Sun, Benji Peng, Charles Zhang, Fei Jin, Qian Niu, ..., Ming Li, Pohsun Feng, Ziqian Bi, Ming Liu, Y. Zhang — 05 Nov 2024
The Cat and Mouse Game: The Ongoing Arms Race Between Diffusion Models and Detection Methods
Linda Laurier, Ave Giulietta, Arlo Octavia, Meade Cleti — 24 Oct 2024