ResearchTrend.AI

arXiv: 2411.09259
Jailbreak Attacks and Defenses against Multimodal Generative Models: A Survey

14 November 2024
Authors: Xuannan Liu, Xing Cui, Peipei Li, Zekun Li, Huaibo Huang, Shuhan Xia, Miaoxuan Zhang, Yueying Zou, Ran He
Community: AAML

Papers citing "Jailbreak Attacks and Defenses against Multimodal Generative Models: A Survey"

2 / 2 papers shown
UniGuard: Towards Universal Safety Guardrails for Jailbreak Attacks on Multimodal Large Language Models
Sejoon Oh, Yiqiao Jin, Megha Sharma, Donghyun Kim, Eric Ma, Gaurav Verma, Srijan Kumar
03 Nov 2024
Unlearning Concepts in Diffusion Model via Concept Domain Correction and Concept Preserving Gradient
Yongliang Wu, Shiji Zhou, Mingzhuo Yang, Lianzhe Wang, Wenbo Zhu, Heng Chang, Xiao Zhou, Xu Yang
24 May 2024