PoisonedRAG: Knowledge Poisoning Attacks to Retrieval-Augmented Generation of Large Language Models (arXiv:2402.07867)

12 February 2024
Wei Zou, Runpeng Geng, Binghui Wang, Jinyuan Jia
SILM

Papers citing "PoisonedRAG: Knowledge Poisoning Attacks to Retrieval-Augmented Generation of Large Language Models"

18 papers shown
Hoist with His Own Petard: Inducing Guardrails to Facilitate Denial-of-Service Attacks on Retrieval-Augmented Generation of LLMs
Pan Suo, Yu-ming Shang, San-Chuan Guo, Xi Zhang
SILM, AAML · 30 Apr 2025

Prompt Injection Attack to Tool Selection in LLM Agents
Jiawen Shi, Zenghui Yuan, Guiyao Tie, Pan Zhou, Neil Zhenqiang Gong, Lichao Sun
LLMAG · 28 Apr 2025

Frontier AI's Impact on the Cybersecurity Landscape
Wenbo Guo, Yujin Potter, Tianneng Shi, Zhun Wang, Andy Zhang, Dawn Song
07 Apr 2025

A Practical Memory Injection Attack against LLM Agents
Shen Dong, Shaocheng Xu, Pengfei He, Y. Li, Jiliang Tang, Tianming Liu, Hui Liu, Zhen Xiang
LLMAG, AAML · 05 Mar 2025

Commercial LLM Agents Are Already Vulnerable to Simple Yet Dangerous Attacks
Ang Li, Yin Zhou, Vethavikashini Chithrra Raghuram, Tom Goldstein, Micah Goldblum
AAML · 12 Feb 2025

FlippedRAG: Black-Box Opinion Manipulation Adversarial Attacks to Retrieval-Augmented Generation Models
Zhuo Chen, Y. Gong, Miaokun Chen, Haotan Liu, Qikai Cheng, Fan Zhang, Wei-Tsung Lu, Xiaozhong Liu, J. Liu, XiaoFeng Wang
AAML · 06 Jan 2025

Analysing the Residual Stream of Language Models Under Knowledge Conflicts
Yu Zhao, Xiaotang Du, Giwon Hong, Aryo Pradipta Gema, Alessio Devoto, Hongru Wang, Xuanli He, Kam-Fai Wong, Pasquale Minervini
KELM · 21 Oct 2024

Ward: Provable RAG Dataset Inference via LLM Watermarks
Nikola Jovanović, Robin Staab, Maximilian Baader, Martin Vechev
04 Oct 2024

Agent Security Bench (ASB): Formalizing and Benchmarking Attacks and Defenses in LLM-based Agents
Hanrong Zhang, Jingyuan Huang, Kai Mei, Yifei Yao, Zhenting Wang, Chenlu Zhan, Hongwei Wang, Yongfeng Zhang
AAML, LLMAG, ELM · 03 Oct 2024

Machine Against the RAG: Jamming Retrieval-Augmented Generation with Blocker Documents
Avital Shafran, R. Schuster, Vitaly Shmatikov
09 Jun 2024

Dialectical Alignment: Resolving the Tension of 3H and Security Threats of LLMs
Shu Yang, Jiayuan Su, Han Jiang, Mengdi Li, Keyuan Cheng, Muhammad Asif Ali, Lijie Hu, Di Wang
30 Mar 2024

Here Comes The AI Worm: Unleashing Zero-click Worms that Target GenAI-Powered Applications
Stav Cohen, Ron Bitton, Ben Nassi
05 Mar 2024

An Interdisciplinary Outlook on Large Language Models for Scientific Research
James Boyko, Joseph Cohen, Nathan Fox, Maria Han Veiga, Jennifer I-Hsiu Li, ..., Andreas H. Rauch, Kenneth N. Reid, Soumi Tribedi, Anastasia Visheratina, Xin Xie
03 Nov 2023

Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection
Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, Hannaneh Hajishirzi
RALM · 17 Oct 2023

Poisoning Language Models During Instruction Tuning
Alexander Wan, Eric Wallace, Sheng Shen, Dan Klein
SILM · 01 May 2023

Poisoning the Unlabeled Dataset of Semi-Supervised Learning
Nicholas Carlini
AAML · 04 May 2021

BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models
Nandan Thakur, Nils Reimers, Andreas Rucklé, Abhishek Srivastava, Iryna Gurevych
VLM · 17 Apr 2021

Extracting Training Data from Large Language Models
Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, D. Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel
MLAU, SILM · 14 Dec 2020