UniGuard: Towards Universal Safety Guardrails for Jailbreak Attacks on Multimodal Large Language Models

3 November 2024
Sejoon Oh, Yiqiao Jin, Megha Sharma, Donghyun Kim, Eric Ma, Gaurav Verma, Srijan Kumar

Papers citing "UniGuard: Towards Universal Safety Guardrails for Jailbreak Attacks on Multimodal Large Language Models"

4 / 4 papers shown

No Free Lunch with Guardrails
Divyanshu Kumar, Nitin Aravind Birur, Tanay Baswa, Sahil Agarwal, P. Harshangi
01 Apr 2025

ScreenLLM: Stateful Screen Schema for Efficient Action Understanding and Prediction
Yiqiao Jin, Stefano Petrangeli, Yu Shen, Gang Wu
LLMAG, LM&Ro
26 Mar 2025

EigenShield: Causal Subspace Filtering via Random Matrix Theory for Adversarially Robust Vision-Language Models
Nastaran Darabi, Devashri Naik, Sina Tayebati, Dinithi Jayasuriya, Ranganath Krishnan, A. R. Trivedi
AAML
24 Feb 2025

Jailbreak Attacks and Defenses against Multimodal Generative Models: A Survey
Xuannan Liu, Xing Cui, Peipei Li, Zekun Li, Huaibo Huang, Shuhan Xia, Miaoxuan Zhang, Yueying Zou, Ran He
AAML
14 Nov 2024