The Vulnerability of LLM Rankers to Prompt Injection Attacks
18 February 2026
Yu Yin, Shuai Wang, Bevan Koopman, Guido Zuccon
SILM · AAML
ArXiv (abs) · PDF · HTML · GitHub (1★)

Papers citing "The Vulnerability of LLM Rankers to Prompt Injection Attacks"

No papers found

Page 1 of 0