Attacking Open-domain Question Answering by Injecting Misinformation
15 October 2021
Liangming Pan, Wenhu Chen, Min-Yen Kan, W. Wang
Topics: HILM, AAML

Papers citing "Attacking Open-domain Question Answering by Injecting Misinformation" (9 papers shown)
SegSub: Evaluating Robustness to Knowledge Conflicts and Hallucinations in Vision-Language Models
Peter Carragher, Nikitha Rao, Abhinand Jha, R Raghav, Kathleen M. Carley
Topics: VLM
19 Feb 2025
Mitigating GenAI-powered Evidence Pollution for Out-of-Context Multimodal Misinformation Detection
Zehong Yan, Peng Qi, W. Hsu, M. Lee
24 Jan 2025
Human-inspired Perspectives: A Survey on AI Long-term Memory
Zihong He, Weizhe Lin, Hao Zheng, Fan Zhang, Matt Jones, Laurence Aitchison, X. Xu, Miao Liu, Per Ola Kristensson, Junxiao Shen
01 Nov 2024
Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering
Yu Zhao, Alessio Devoto, Giwon Hong, Xiaotang Du, Aryo Pradipta Gema, Hongru Wang, Xuanli He, Kam-Fai Wong, Pasquale Minervini
Topics: KELM, LLMSV
21 Oct 2024
Open Domain Question Answering with Conflicting Contexts
Siyi Liu, Qiang Ning, Kishaloy Halder, Wei Xiao, Zheng Qi, ..., Yi Zhang, Neha Anna John, Bonan Min, Yassine Benajiba, Dan Roth
Topics: LLMAG
16 Oct 2024
On the Risk of Misinformation Pollution with Large Language Models
Yikang Pan, Liangming Pan, Wenhu Chen, Preslav Nakov, Min-Yen Kan, W. Wang
Topics: DeLMO
23 May 2023
Fact-Saboteurs: A Taxonomy of Evidence Manipulation Attacks against Fact-Verification Systems
Sahar Abdelnabi, Mario Fritz
Topics: AAML
07 Sep 2022
Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
Topics: OSLM, ALM
04 Mar 2022
Entity-Based Knowledge Conflicts in Question Answering
Shayne Longpre, Kartik Perisetla, Anthony Chen, Nikhil Ramesh, Chris DuBois, Sameer Singh
Topics: HILM
10 Sep 2021