ResearchTrend.AI · Cited By · arXiv 2005.10987
Investigating Vulnerability to Adversarial Examples on Multimodal Data Fusion in Deep Learning

22 May 2020 · Youngjoon Yu, Hong Joo Lee, Byeong Cheon Kim, Jung Uk Kim, Yong Man Ro · AAML

Papers citing "Investigating Vulnerability to Adversarial Examples on Multimodal Data Fusion in Deep Learning" (13 papers)
  • Revisiting Misalignment in Multispectral Pedestrian Detection: A Language-Driven Approach for Cross-modal Alignment Fusion · Taeheon Kim, Sangyun Chung, Youngjoon Yu, Y. Ro · CVBM · 27 Nov 2024
  • AI Safety in Practice: Enhancing Adversarial Robustness in Multimodal Image Captioning · Maisha Binte Rashid, Pablo Rivas · 30 Jul 2024
  • Quantifying and Enhancing Multi-modal Robustness with Modality Preference · Zequn Yang, Yake Wei, Ce Liang, Di Hu · AAML · 09 Feb 2024
  • What Makes for Robust Multi-Modal Models in the Face of Missing Modalities? · Siting Li, Chenzhuang Du, Yue Zhao, Yu Huang, Hang Zhao · 10 Oct 2023
  • Impact of architecture on robustness and interpretability of multispectral deep neural networks · Charles Godfrey, Elise Bishoff, Myles Mckay, E. Byler · 21 Sep 2023
  • Hardening RGB-D Object Recognition Systems against Adversarial Patch Attacks · Yang Zheng, Luca Demetrio, Antonio Emanuele Cinà, Xiaoyi Feng, Zhaoqiang Xia, Xiaoyue Jiang, Ambra Demontis, Battista Biggio, Fabio Roli · AAML · 13 Sep 2023
  • Quantifying the robustness of deep multispectral segmentation models against natural perturbations and data poisoning · Elise Bishoff, Charles Godfrey, Myles Mckay, E. Byler · AAML · 18 May 2023
  • Adversarial Vulnerability of Temporal Feature Networks for Object Detection · Svetlana Pavlitskaya, Nikolai Polley, Michael Weber, J. Marius Zöllner · AAML · 23 Aug 2022
  • Contrastive language and vision learning of general fashion concepts · P. Chia, Giuseppe Attanasio, Federico Bianchi, Silvia Terragni, A. Magalhães, Diogo Gonçalves, C. Greco, Jacopo Tagliabue · CLIP · 08 Apr 2022
  • Dual-Key Multimodal Backdoors for Visual Question Answering · Matthew Walmer, Karan Sikka, Indranil Sur, Abhinav Shrivastava, Susmit Jha · AAML · 14 Dec 2021
  • Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving · James Tu, Huichen Li, Xinchen Yan, Mengye Ren, Yun Chen, Ming Liang, E. Bitar, Ersin Yumer, R. Urtasun · AAML · 17 Jan 2021
  • Adversarial Robustness of Deep Sensor Fusion Models · Shaojie Wang, Tong Wu, Ayan Chakrabarti, Yevgeniy Vorobeychik · AAML · 23 Jun 2020
  • SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation · Vijay Badrinarayanan, Alex Kendall, R. Cipolla · SSeg · 02 Nov 2015