Step by Step Loss Goes Very Far: Multi-Step Quantization for Adversarial Text Attacks

10 February 2023
Piotr Gaiński, Klaudia Bałazy

Papers citing "Step by Step Loss Goes Very Far: Multi-Step Quantization for Adversarial Text Attacks" (5 papers shown)

  • Pointing out the Shortcomings of Relation Extraction Models with Semantically Motivated Adversarials
    Gennaro Nolano, Moritz Blum, Basil Ell, Philipp Cimiano
    29 Feb 2024
  • Privacy in Large Language Models: Attacks, Defenses and Future Directions
    Haoran Li, Yulin Chen, Jinglong Luo, Yan Kang, Xiaojin Zhang, Qi Hu, Chunkit Chan, Yangqiu Song
    PILM · 16 Oct 2023
  • Gradient-based Adversarial Attacks against Text Transformers
    Chuan Guo, Alexandre Sablayrolles, Hervé Jégou, Douwe Kiela
    SILM · 15 Apr 2021
  • Token-Modification Adversarial Attacks for Natural Language Processing: A Survey
    Tom Roth, Yansong Gao, A. Abuadbba, Surya Nepal, Wei Liu
    AAML · 01 Mar 2021
  • Recent Advances in Adversarial Training for Adversarial Robustness
    Tao Bai, Jinqi Luo, Jun Zhao, B. Wen, Qian Wang
    AAML · 02 Feb 2021