arXiv: 2506.11113
Breaking the Reviewer: Assessing the Vulnerability of Large Language Models in Automated Peer Review Under Textual Adversarial Attacks
8 June 2025
Tzu-Ling Lin, Wei Chen, Teng-Fang Hsiao, Hou-I Liu, Ya-Hsin Yeh, Yu Kai Chan, Wen-Sheng Lien, Po-Yen Kuo, Philip S. Yu, Hong-Han Shuai
Available as: arXiv (abs) · PDF · HTML · GitHub
Papers citing "Breaking the Reviewer: Assessing the Vulnerability of Large Language Models in Automated Peer Review Under Textual Adversarial Attacks" (2)
LLM-REVal: Can We Trust LLM Reviewers Yet?
Rui Li, Jia-Chen Gu, Po-Nien Kung, H. Xia, Junfeng Liu, Xiangwen Kong, Zhifang Sui, Nanyun Peng
14 Oct 2025
The More You Automate, the Less You See: Hidden Pitfalls of AI Scientist Systems
Ziming Luo, Atoosa Kasirzadeh, Nihar B. Shah
10 Sep 2025