Is Deep Learning Safe for Robot Vision? Adversarial Examples against the iCub Humanoid

23 August 2017
Marco Melis, Ambra Demontis, Battista Biggio, Gavin Brown, Giorgio Fumera, Fabio Roli
AAML

Papers citing "Is Deep Learning Safe for Robot Vision? Adversarial Examples against the iCub Humanoid"

50 of 52 citing papers shown
Decoding Deception: Understanding Automatic Speech Recognition Vulnerabilities in Evasion and Poisoning Attacks
Aravindhan G, Yuvaraj Govindarajulu, Parin Shah
AAML · 26 Sep 2025

A Review of the Duality of Adversarial Learning in Network Intrusion: Attacks and Countermeasures
Shalini Saini, Anitha Chennamaneni, Babatunde Sawyerr
AAML · 18 Dec 2024

Rethinking the Intermediate Features in Adversarial Attacks: Misleading Robotic Models via Adversarial Distillation
Ke Zhao, Huayang Huang, Miao Li, Yu Wu
AAML · 21 Nov 2024

Adversarial Attacks and Defenses in Multivariate Time-Series Forecasting for Smart and Connected Infrastructures
Pooja Krishan, Rohan Mohapatra, Sanchari Das, Saptarshi Sengupta
AAML · 27 Aug 2024

A Hybrid Training-time and Run-time Defense Against Adversarial Attacks in Modulation Classification
Lu Zhang, S. Lambotharan, G. Zheng, G. Liao, Ambra Demontis, Fabio Roli
AAML · 09 Jul 2024

Data Poisoning Attacks in Intelligent Transportation Systems: A Survey
Feilong Wang, Xin Wang, X. Ban
AAML · 06 Jul 2024

Diffusion Policy Attacker: Crafting Adversarial Attacks for Diffusion-based Policies
Yipu Chen, Haotian Xue, Yongxin Chen
AAML · 29 May 2024

Manipulating hidden-Markov-model inferences by corrupting batch data
William N. Caballero, Jose Manuel Camacho, Tahir Ekin, Roi Naveiro
AAML · 19 Feb 2024

Privacy-preserving and Privacy-attacking Approaches for Speech and Audio -- A Survey
Yuchen Liu, Apu Kapadia, Donald Williamson
AAML · 26 Sep 2023

Hardening RGB-D Object Recognition Systems against Adversarial Patch Attacks
Yang Zheng, Christian Scano, Antonio Emanuele Cinà, Xiaoyi Feng, Zhaoqiang Xia, Xiaoyue Jiang, Ambra Demontis, Battista Biggio, Fabio Roli
Information Sciences (Inf. Sci.), 2023
AAML · 13 Sep 2023

Measuring Equality in Machine Learning Security Defenses: A Case Study in Speech Recognition
Luke E. Richards, Edward Raff, Cynthia Matuszek
AAML · 17 Feb 2023

If a Human Can See It, So Should Your System: Reliability Requirements for Machine Vision Components
Boyue Caroline Hu, Lina Marsso, Krzysztof Czarnecki, Rick Salay, Huakun Shen, Marsha Chechik
International Conference on Software Engineering (ICSE), 2022
08 Feb 2022

ZeBRA: Precisely Destroying Neural Networks with Zero-Data Based Repeated Bit Flip Attack
Dahoon Park, K. Kwon, Sunghoon Im, Jaeha Kung
British Machine Vision Conference (BMVC), 2021
AAML · 01 Nov 2021

Adversarial Attacks and Defenses for Social Network Text Processing Applications: Techniques, Challenges and Future Research Directions
I. Alsmadi, Kashif Ahmad, Mahmoud Nazzal, Firoj Alam, Ala I. Al-Fuqaha, Abdallah Khreishah, A. Algosaibi
AAML · 26 Oct 2021

Why Adversarial Reprogramming Works, When It Fails, and How to Tell the Difference
Yang Zheng, Xiaoyi Feng, Zhaoqiang Xia, Xiaoyue Jiang, Ambra Demontis, Maura Pintor, Battista Biggio, Fabio Roli
Information Sciences (Inf. Sci.), 2021
AAML · 26 Aug 2021

Deep Bayesian Image Set Classification: A Defence Approach against Adversarial Attacks
N. Mirnateghi, Syed Afaq Ali Shah, Bennamoun
BDL, AAML · 23 Aug 2021

Adversarial Example Detection for DNN Models: A Review and Experimental Comparison
Ahmed Aldahdooh, W. Hamidouche, Sid Ahmed Fezza, Olivier Déforges
Artificial Intelligence Review (AIR), 2021
AAML · 01 May 2021

Revisiting Model's Uncertainty and Confidences for Adversarial Example Detection
Ahmed Aldahdooh, W. Hamidouche, Olivier Déforges
AAML · 09 Mar 2021

Security and Privacy for Artificial Intelligence: Opportunities and Challenges
Ayodeji Oseni, Nour Moustafa, Helge Janicke, Peng Liu, Z. Tari, A. Vasilakos
AAML · 09 Feb 2021

Unadversarial Examples: Designing Objects for Robust Vision
Hadi Salman, Andrew Ilyas, Logan Engstrom, Sai H. Vemprala, Aleksander Madry, Ashish Kapoor
Neural Information Processing Systems (NeurIPS), 2020
WIGM · 22 Dec 2020

Self-Gradient Networks
Hossein Aboutalebi, M. Shafiee
AAML · 18 Nov 2020

FADER: Fast Adversarial Example Rejection
Francesco Crecchi, Marco Melis, Angelo Sotgiu, D. Bacciu, Battista Biggio
Neurocomputing, 2020
AAML · 18 Oct 2020

Double Targeted Universal Adversarial Perturbations
Philipp Benz, Chaoning Zhang, Tooba Imtiaz, In So Kweon
AAML · 07 Oct 2020

Where Does the Robustness Come from? A Study of the Transformation-based Ensemble Defence
Chang Liao, Yao Cheng, Chengfang Fang, Jie Shi
28 Sep 2020

Opportunities and Challenges in Deep Learning Adversarial Robustness: A Survey
S. Silva, Peyman Najafirad
AAML, OOD · 01 Jul 2020

X-ModalNet: A Semi-Supervised Deep Cross-Modal Network for Classification of Remote Sensing Data
Danfeng Hong, Xiangwei Zhu, Gui-Song Xia, J. Chanussot, X. Zhu
ISPRS Journal of Photogrammetry and Remote Sensing (ISPRS J. Photogramm. Remote Sens.), 2020
24 Jun 2020

Domain Knowledge Alleviates Adversarial Attacks in Multi-Label Classifiers
S. Melacci, Gabriele Ciravegna, Angelo Sotgiu, Ambra Demontis, Battista Biggio, Marco Gori, Fabio Roli
06 Jun 2020

Model-Based Robust Deep Learning: Generalizing to Natural, Out-of-Distribution Data
Avi Schwarzschild, Hamed Hassani, George J. Pappas
OOD · 20 May 2020

Do Gradient-based Explanations Tell Anything About Adversarial Robustness to Android Malware?
Marco Melis, Michele Scalas, Ambra Demontis, Davide Maiorca, Battista Biggio, Giorgio Giacinto, Fabio Roli
International Journal of Machine Learning and Cybernetics (IJMLC), 2020
AAML, FAtt · 04 May 2020

Search Space of Adversarial Perturbations against Image Filters
D. D. Thang, Toshihiro Matsui
International Journal of Advanced Computer Science and Applications (IJACSA), 2020
AAML · 05 Mar 2020

secml: A Python Library for Secure and Explainable Machine Learning
Maura Pintor, Christian Scano, Angelo Sotgiu, Marco Melis, Ambra Demontis, Battista Biggio
SoftwareX, 2019
AAML · 20 Dec 2019

Adversarial Learning of Deepfakes in Accounting
Marco Schreyer, Timur Sattarov, Bernd Reimer, Damian Borth
AAML · 09 Oct 2019

Testing and verification of neural-network-based safety-critical control software: A systematic literature review
Jin Zhang, Jingyue Li
Information and Software Technology (IST), 2019
05 Oct 2019

Deep Neural Rejection against Adversarial Examples
Angelo Sotgiu, Ambra Demontis, Marco Melis, Battista Biggio, Giorgio Fumera, Xiaoyi Feng, Fabio Roli
EURASIP Journal on Information Security (EURASIP J. Inf. Secur.), 2019
AAML · 01 Oct 2019

Open DNN Box by Power Side-Channel Attack
Yun Xiang, Zhuangzhi Chen, Zuohui Chen, Zebin Fang, Haiyang Hao, Jinyin Chen, Yi Liu, Zhefu Wu, Qi Xuan, Xiaoniu Yang
AAML · 21 Jul 2019

Robustness Guarantees for Deep Neural Networks on Videos
Min Wu, Marta Z. Kwiatkowska
Computer Vision and Pattern Recognition (CVPR), 2019
AAML · 28 Jun 2019

Optimization and Abstraction: A Synergistic Approach for Analyzing Neural Network Robustness
Greg Anderson, Shankara Pailoor, Işıl Dillig, Swarat Chaudhuri
AAML · 22 Apr 2019

Defending against Whitebox Adversarial Attacks via Randomized Discretization
Yuchen Zhang, Abigail Z. Jacobs
AAML · 25 Mar 2019

The Limitations of Model Uncertainty in Adversarial Settings
Kathrin Grosse, David Pfaff, M. Smith, Michael Backes
AAML · 06 Dec 2018

Security for Machine Learning-based Systems: Attacks and Challenges during Training and Inference
Faiq Khalid, Muhammad Abdullah Hanif, Semeen Rehman, Mohamed Bennai
AAML · 05 Nov 2018

Towards Adversarial Malware Detection: Lessons Learned from PDF-based Attacks
Davide Maiorca, Battista Biggio, Giorgio Giacinto
AAML · 02 Nov 2018

Low Frequency Adversarial Perturbation
Chuan Guo, Jared S. Frank, Kilian Q. Weinberger
Conference on Uncertainty in Artificial Intelligence (UAI), 2018
AAML · 24 Sep 2018

Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks
Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, Fabio Roli
SILM, AAML · 08 Sep 2018

A Game-Based Approximate Verification of Deep Neural Networks with Provable Guarantees
Min Wu, Matthew Wicker, Wenjie Ruan, Xiaowei Huang, Marta Kwiatkowska
AAML · 10 Jul 2018

Killing four birds with one Gaussian process: the relation between different test-time attacks
Kathrin Grosse, M. Smith, Michael Backes
AAML · 06 Jun 2018

Adversarial Attacks Against Medical Deep Learning Systems
S. G. Finlayson, Hyung Won Chung, I. Kohane, Andrew L. Beam
SILM, AAML, OOD, MedIm · 15 Apr 2018

Seq2Sick: Evaluating the Robustness of Sequence-to-Sequence Models with Adversarial Examples
Minhao Cheng, Jinfeng Yi, Pin-Yu Chen, Huan Zhang, Cho-Jui Hsieh
SILM, AAML · 03 Mar 2018

Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey
Naveed Akhtar, Lin Wang
AAML · 02 Jan 2018

Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
Battista Biggio, Fabio Roli
AAML · 08 Dec 2017

How Wrong Am I? - Studying Adversarial Examples and their Impact on Uncertainty in Gaussian Process Machine Learning Models
Kathrin Grosse, David Pfaff, M. Smith, Michael Backes
AAML · 17 Nov 2017

Page 1 of 2