ResearchTrend.AI
arXiv: 2506.11113
Breaking the Reviewer: Assessing the Vulnerability of Large Language Models in Automated Peer Review Under Textual Adversarial Attacks

8 June 2025
Tzu-Ling Lin
Wei Chen
Teng-Fang Hsiao
Hou-I Liu
Ya-Hsin Yeh
Yu Kai Chan
Wen-Sheng Lien
Po-Yen Kuo
Philip S. Yu
Hong-Han Shuai
    AAML
Papers citing "Breaking the Reviewer: Assessing the Vulnerability of Large Language Models in Automated Peer Review Under Textual Adversarial Attacks"

2 citing papers shown
LLM-REVal: Can We Trust LLM Reviewers Yet?
Rui Li
Jia-Chen Gu
Po-Nien Kung
H. Xia
Junfeng Liu
Xiangwen Kong
Zhifang Sui
Nanyun Peng
14 Oct 2025
The More You Automate, the Less You See: Hidden Pitfalls of AI Scientist Systems
Ziming Luo
Atoosa Kasirzadeh
Nihar B. Shah
10 Sep 2025