VertAttack: Taking advantage of Text Classifiers' horizontal vision

12 April 2024
Jonathan Rusert
AAML
arXiv: 2404.08538

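The paper's title points at a simple idea: text classifiers read text left to right, so words that a human can still read top to bottom in columns may slip past them. The short Python sketch below is only a rough, hypothetical illustration of that general column layout, written for this page; it is not the authors' VertAttack implementation, and the function name and example sentence are invented here.

from itertools import zip_longest

def verticalize(text: str) -> str:
    # Lay the words of `text` out as side-by-side columns of characters.
    # A human can still read each word top to bottom, but a model that
    # consumes the text row by row sees only character fragments.
    words = text.split()
    # zip_longest pads shorter words with spaces so every row is complete.
    rows = zip_longest(*words, fillvalue=" ")
    return "\n".join(" ".join(row) for row in rows)

if __name__ == "__main__":
    print(verticalize("this movie was terrible"))
    # First rows of the output:
    # t m w t
    # h o a e
    # i v s r
    # ...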
Papers citing "VertAttack: Taking advantage of Text Classifiers' horizontal vision"

3 of 3 citing papers shown.
Vulnerability of LLMs to Vertically Aligned Text Manipulations
Zhecheng Li, Y. Wang, Bryan Hooi, Yujun Cai, Zhen Xiong, Nanyun Peng, Kai-Wei Chang
26 Oct 2024
Phrase-level Textual Adversarial Attack with Label Preservation
Yibin Lei, Yu Cao, Dianqi Li, Tianyi Zhou, Meng Fang, Mykola Pechenizkiy
AAML
22 May 2022
Generating Natural Language Adversarial Examples
M. Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani B. Srivastava, Kai-Wei Chang
AAML
21 Apr 2018