
A Classification-Guided Approach for Adversarial Attacks against Neural Machine Translation

Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2023
29 August 2023
Sahar Sadrizadeh
Ljiljana Dolamic
P. Frossard
Topics: AAML, SILM

Papers citing "A Classification-Guided Approach for Adversarial Attacks against Neural Machine Translation"

3 citing papers shown
Searching for Difficult-to-Translate Test Examples at Scale
Wenda Xu, Vilém Zouhar, Parker Riley, Mara Finkelstein, Markus Freitag, Daniel Deutsch
Topic: AAML
30 Sep 2025
Towards Inclusive Toxic Content Moderation: Addressing Vulnerabilities to Adversarial Attacks in Toxicity Classifiers Tackling LLM-generated Content
Shaz Furniturewala, Arkaitz Zubiaga
Topic: AAML
16 Sep 2025
LoFT: Local Proxy Fine-tuning For Improving Transferability Of Adversarial Attacks Against Large Language Model
Muhammad Ahmed Shah, Roshan S. Sharma, Hira Dhamyal, R. Olivier, Ankit Shah, ..., Massa Baali, Soham Deshmukh, Michael Kuhlmann, Bhiksha Raj, Rita Singh
Topic: AAML
02 Oct 2023