arXiv:2109.06098

The mathematics of adversarial attacks in AI -- Why deep learning is unstable despite the existence of stable neural networks

13 September 2021
Alexander Bastounis, Anders C. Hansen, Verner Vlacic
Communities: AAML, OOD
Papers citing "The mathematics of adversarial attacks in AI -- Why deep learning is unstable despite the existence of stable neural networks" (20 of 20 papers shown)
- Generalizability of Neural Networks Minimizing Empirical Risk Based on Expressive Ability (06 Mar 2025)
  Lijia Yu, Yibo Miao, Yifan Zhu, Xiao-Shan Gao, Lijun Zhang
- Computability of Classification and Deep Learning: From Theoretical Limits to Practical Feasibility through Quantization (12 Aug 2024)
  Holger Boche, Vít Fojtík, Adalbert Fono, Gitta Kutyniok
- Weakly Supervised Learners for Correction of AI Errors with Provable Performance Guarantees (31 Jan 2024)
  I. Tyukin, T. Tyukina, Daniel van Helden, Zedong Zheng, Evgeny M. Mirkes, Oliver J. Sutton, Qinghua Zhou, Alexander N. Gorban, Penelope Allison
- Mathematical Algorithm Design for Deep Learning under Societal and Judicial Constraints: The Algorithmic Transparency Requirement [FaML] (18 Jan 2024)
  Holger Boche, Adalbert Fono, Gitta Kutyniok
- Do stable neural networks exist for classification problems? -- A new view on stability in AI (15 Jan 2024)
  Z. N. D. Liu, A. C. Hansen
- The Boundaries of Verifiable Accuracy, Robustness, and Generalisation in Deep Learning [OOD] (13 Sep 2023)
  Alexander Bastounis, Alexander N. Gorban, Anders C. Hansen, D. Higham, Danil Prokhorov, Oliver J. Sutton, I. Tyukin, Qinghua Zhou
- How adversarial attacks can disrupt seemingly stable accurate classifiers [AAML] (07 Sep 2023)
  Oliver J. Sutton, Qinghua Zhou, I. Tyukin, Alexander N. Gorban, Alexander Bastounis, D. Higham
- Can We Rely on AI? [AAML] (29 Aug 2023)
  D. Higham
- Adversarial Ink: Componentwise Backward Error Attacks on Deep Learning [AAML] (05 Jun 2023)
  Lucas Beerens, D. Higham
- Ambiguity in solving imaging inverse problems with deep learning based operators (31 May 2023)
  David Evangelista, E. Morotti, E. L. Piccolomini, J. Nagy
- To be or not to be stable, that is the question: understanding neural networks for inverse problems (24 Nov 2022)
  David Evangelista, J. Nagy, E. Morotti, E. L. Piccolomini
- Achieve Optimal Adversarial Accuracy for Adversarial Deep Learning using Stackelberg Game [AAML] (17 Jul 2022)
  Xiao-Shan Gao, Shuang Liu, Lijia Yu
- Adversarial Parameter Attack on Deep Neural Networks [AAML] (20 Mar 2022)
  Lijia Yu, Yihan Wang, Xiao-Shan Gao
- Limitations of Deep Learning for Inverse Problems on Digital Hardware (28 Feb 2022)
  Holger Boche, Adalbert Fono, Gitta Kutyniok
- A phase transition for finding needles in nonlinear haystacks with LASSO artificial neural networks (21 Jan 2022)
  Xiaoyu Ma, S. Sardy, N. Hengartner, Nikolai Bobenko, Yen Ting Lin
- Robust and Information-theoretically Safe Bias Classifier against Adversarial Attacks [AAML] (08 Nov 2021)
  Lijia Yu, Xiao-Shan Gao
- A Robust Classification-autoencoder to Defend Outliers and Adversaries [AAML] (30 Jun 2021)
  Lijia Yu, Xiao-Shan Gao
- The Feasibility and Inevitability of Stealth Attacks [AAML] (26 Jun 2021)
  I. Tyukin, D. Higham, Alexander Bastounis, Eliyas Woldegeorgis, Alexander N. Gorban
- Can stable and accurate neural networks be computed? -- On the barriers of deep learning and Smale's 18th problem (20 Jan 2021)
  Matthew J. Colbrook, Vegard Antun, A. Hansen
- The troublesome kernel -- On hallucinations, no free lunches and the accuracy-stability trade-off in inverse problems (05 Jan 2020)
  N. Gottschling, Vegard Antun, A. Hansen, Ben Adcock