ResearchTrend.AI
Adversarial Training for Multimodal Large Language Models against Jailbreak Attacks

5 March 2025
Liming Lu
Shuchao Pang
Siyuan Liang
Haotian Zhu
Xiyu Zeng
Aishan Liu
Yunhuai Liu
Yongbin Zhou
AAML
arXiv:2503.04833

Papers citing "Adversarial Training for Multimodal Large Language Models against Jailbreak Attacks"

1 / 1 papers shown
Misaligned Roles, Misplaced Images: Structural Input Perturbations Expose Multimodal Alignment Blind Spots
Erfan Shayegani
G M Shahariar
Sara Abdali
Lei Yu
Nael B. Abu-Ghazaleh
Yue Dong
AAML
01 Apr 2025