
Quantifying the Preferential Direction of the Model Gradient in Adversarial Training With Projected Gradient Descent

Ricardo Bigolin Lanfredi, Joyce D. Schroeder, Tolga Tasdizen
arXiv:2009.04709 · 10 September 2020

Papers citing "Quantifying the Preferential Direction of the Model Gradient in Adversarial Training With Projected Gradient Descent"

1 / 1 papers shown
RobustBench: a standardized adversarial robustness benchmark
Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, M. Chiang, Prateek Mittal, Matthias Hein
19 Oct 2020