When Harmful Content Gets Camouflaged: Unveiling Perception Failure of LVLMs with CamHarmTI

29 November 2025
Authors: Yanhui Li, Qi Zhou, Zhihong Xu, Huizhong Guo, Wenhai Wang, Dongxia Wang
Topic: VLM
Links: arXiv (abs) | PDF | HTML

Papers citing "When Harmful Content Gets Camouflaged: Unveiling Perception Failure of LVLMs with CamHarmTI"

No citing papers found.