ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Unraveling Attacks in Machine Learning-based IoT Ecosystems: A Survey and the Open Libraries Behind Them
arXiv:2401.11723

22 January 2024
Chao-Jung Liu, Boxi Chen, Wei Shao, Chris Zhang, Kelvin Wong, Yi Zhang

Papers citing "Unraveling Attacks in Machine Learning-based IoT Ecosystems: A Survey and the Open Libraries Behind Them"

17 papers shown
Effective Ambiguity Attack Against Passport-based DNN Intellectual Property Protection Schemes through Fully Connected Layer Substitution
Yiming Chen, Jinyu Tian, Xiangyu Chen, Jiantao Zhou
AAML · 24 · 10 · 0 · 21 Mar 2023

Client-specific Property Inference against Secure Aggregation in Federated Learning
Raouf Kerkouche, G. Ács, Mario Fritz
FedML · 47 · 9 · 0 · 07 Mar 2023

Data Poisoning Attacks Against Multimodal Encoders
Ziqing Yang, Xinlei He, Zheng Li, Michael Backes, Mathias Humbert, Pascal Berrang, Yang Zhang
AAML · 106 · 43 · 0 · 30 Sep 2022

Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks
Lukas Struppek, Dominik Hintersdorf, Antonio De Almeida Correia, Antonia Adler, Kristian Kersting
MIACV · 45 · 62 · 0 · 28 Jan 2022

Increasing the Cost of Model Extraction with Calibrated Proof of Work
Adam Dziedzic, Muhammad Ahmad Kaleem, Y. Lu, Nicolas Papernot
FedML, MIACV, AAML, MLAU · 55 · 28 · 0 · 23 Jan 2022

Privacy Vulnerability of Split Computing to Data-Free Model Inversion Attacks
Xin Dong, Hongxu Yin, J. Álvarez, Jan Kautz, Pavlo Molchanov, H. T. Kung
MIACV · 18 · 8 · 0 · 13 Jul 2021

Admix: Enhancing the Transferability of Adversarial Attacks
Xiaosen Wang, Xu He, Jingdong Wang, Kun He
AAML · 68 · 192 · 0 · 31 Jan 2021

Model Extraction and Defenses on Generative Adversarial Networks
Hailong Hu, Jun Pang
SILM, MIACV · 29 · 14 · 0 · 06 Jan 2021

Extracting Training Data from Large Language Models
Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, D. Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel
MLAU, SILM · 267 · 1,798 · 0 · 14 Dec 2020

Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realization
Bang Wu, Xiangwen Yang, Shirui Pan, Xingliang Yuan
MIACV, MLAU · 47 · 53 · 0 · 24 Oct 2020

Systematic Evaluation of Privacy Risks of Machine Learning Models
Liwei Song, Prateek Mittal
MIACV · 182 · 357 · 0 · 24 Mar 2020

Cryptanalytic Extraction of Neural Network Models
Nicholas Carlini, Matthew Jagielski, Ilya Mironov
FedML, MLAU, MIACV, AAML · 65 · 134 · 0 · 10 Mar 2020

Sign-OPT: A Query-Efficient Hard-label Adversarial Attack
Minhao Cheng, Simranjit Singh, Patrick H. Chen, Pin-Yu Chen, Sijia Liu, Cho-Jui Hsieh
AAML · 122 · 218 · 0 · 24 Sep 2019

Analyzing Federated Learning through an Adversarial Lens
A. Bhagoji, Supriyo Chakraborty, Prateek Mittal, S. Calo
FedML · 177 · 1,014 · 0 · 29 Nov 2018

Adversarial Machine Learning at Scale
Alexey Kurakin, Ian Goodfellow, Samy Bengio
AAML · 256 · 3,102 · 0 · 04 Nov 2016

Safety Verification of Deep Neural Networks
Xiaowei Huang, M. Kwiatkowska, Sen Wang, Min Wu
AAML · 178 · 929 · 0 · 21 Oct 2016

Adversarial examples in the physical world
Alexey Kurakin, Ian Goodfellow, Samy Bengio
SILM, AAML · 250 · 5,813 · 0 · 08 Jul 2016
