Human-Imperceptible Retrieval Poisoning Attacks in LLM-Powered Applications

26 April 2024 · arXiv:2404.17196
Quan Zhang, Binqi Zeng, Chijin Zhou, Gwihwan Go, Heyuan Shi, Yu Jiang
Topics: SILM, AAML

Papers citing "Human-Imperceptible Retrieval Poisoning Attacks in LLM-Powered Applications"

2 / 2 papers shown
PR-Attack: Coordinated Prompt-RAG Attacks on Retrieval-Augmented Generation in Large Language Models via Bilevel Optimization
Yang Jiao, X. Wang, Kai Yang
Topics: AAML, SILM
10 Apr 2025
FlippedRAG: Black-Box Opinion Manipulation Adversarial Attacks to Retrieval-Augmented Generation Models
Zhuo Chen, Y. Gong, Miaokun Chen, Haotan Liu, Qikai Cheng, Fan Zhang, Wei-Tsung Lu, Xiaozhong Liu, J. Liu, XiaoFeng Wang
Topics: AAML
06 Jan 2025