ResearchTrend.AI

Unlearning Sensitive Information in Multimodal LLMs: Benchmark and Attack-Defense Evaluation

arXiv: 2505.01456 · 1 May 2025
Authors: Vaidehi Patil, Yi-Lin Sung, Peter Hase, Jie Peng, Tianlong Chen, Mohit Bansal
Topics: AAML, MU

Papers citing "Unlearning Sensitive Information in Multimodal LLMs: Benchmark and Attack-Defense Evaluation"

3 of 3 papers shown

  1. Safe RLHF-V: Safe Reinforcement Learning from Human Feedback in Multimodal Large Language Models (22 Mar 2025)
     Authors: Jiaming Ji, X. Chen, Rui Pan, Han Zhu, C. Zhang, ..., Juntao Dai, Chi-Min Chan, Sirui Han, Yike Guo, Y. Yang
     Topics: OffRL
  2. UPCORE: Utility-Preserving Coreset Selection for Balanced Unlearning (20 Feb 2025)
     Authors: Vaidehi Patil, Elias Stengel-Eskin, Mohit Bansal
     Topics: MU, CLL
  3. SAFREE: Training-Free and Adaptive Guard for Safe Text-to-Image And Video Generation (16 Oct 2024)
     Authors: Jaehong Yoon, Shoubin Yu, Vaidehi Patil, Huaxiu Yao, Mohit Bansal