Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images

Computer Vision and Pattern Recognition (CVPR), 2015
5 December 2014
Anh Totti Nguyen, Jason Yosinski, Jeff Clune
Topics: AAML
arXiv:1412.1897

Papers citing "Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images"

Showing 5 of 1,455 citing papers.

What Do Deep CNNs Learn About Objects?
Xingchao Peng, Baochen Sun, Karim Ali, Kate Saenko
International Conference on Learning Representations (ICLR), 2015
Topics: 3DPC
09 Apr 2015

Analysis of classifiers' robustness to adversarial perturbations
Alhussein Fawzi, Omar Fawzi, P. Frossard
Topics: AAML
09 Feb 2015

Why does Deep Learning work? - A perspective from Group Theory
Arnab Paul, Suresh Venkatasubramanian
20 Dec 2014

Explaining and Harnessing Adversarial Examples
Ian Goodfellow, Jonathon Shlens, Christian Szegedy
International Conference on Learning Representations (ICLR), 2015
Topics: AAML, GAN
20 Dec 2014

Self-taught Object Localization with Deep Networks
Loris Bazzani, Alessandro Bergamo, Dragomir Anguelov, Lorenzo Torresani
IEEE Workshop/Winter Conference on Applications of Computer Vision (WACV), 2014
Topics: SSL, ObjD
13 Sep 2014