
FooBaR: Fault Fooling Backdoor Attack on Neural Network Training
IEEE Transactions on Dependable and Secure Computing (IEEE TDSC), 2021
23 September 2021
J. Breier, Xiaolu Hou, Martín Ochoa, Jesus Solano
Communities: SILM, AAML
arXiv: 2109.11249

Papers citing "FooBaR: Fault Fooling Backdoor Attack on Neural Network Training"

4 papers

DeepBaR: Fault Backdoor Attack on Deep Neural Network Layers
Camilo A. Martínez-Mejía, Jesus Solano, J. Breier, Dominik Bucko, Xiaolu Hou
AAML
30 Jul 2024
DeepNcode: Encoding-Based Protection against Bit-Flip Attacks on Neural Networks
Patrik Velcický, J. Breier, Mladen Kovacevic, Xiaolu Hou
AAML
22 May 2024
TIJO: Trigger Inversion with Joint Optimization for Defending Multimodal Backdoored Models
IEEE International Conference on Computer Vision (ICCV), 2023
Indranil Sur, Karan Sikka, Matthew Walmer, K. Koneripalli, Anirban Roy, Xiaoyu Lin, Ajay Divakaran, Susmit Jha
07 Aug 2023
An Incremental Gray-box Physical Adversarial Attack on Neural Network Training
Rabiah Al-qudah, Moayad Aloqaily, B. Ouni, Mohsen Guizani, T. Lestable
AAML
20 Feb 2023