Cited By

Can LLM-Generated Misinformation Be Detected?
Canyu Chen, Kai Shu
25 September 2023 · arXiv:2309.13788 · DeLMO

Papers citing "Can LLM-Generated Misinformation Be Detected?" (50 of 52 papers shown)
Robust Misinformation Detection by Visiting Potential Commonsense Conflict
Bing Wang, Ximing Li, C. Li, Bingrui Zhao, Bo Fu, Renchu Guan, Shengsheng Wang
30 Apr 2025
Detecting Manipulated Contents Using Knowledge-Grounded Inference
Mark Huasong Meng, Ruizhe Wang, Meng Xu, Chuan Yan, Guangdong Bai
29 Apr 2025
LLM-Generated Fake News Induces Truth Decay in News Ecosystem: A Case Study on Neural News Recommendation
Beizhe Hu, Qiang Sheng, Juan Cao, Yang Li, Danding Wang
28 Apr 2025
Unified Attacks to Large Language Model Watermarks: Spoofing and Scrubbing in Unauthorized Knowledge Distillation
Xin Yi, Shunfan Zheng, Linlin Wang, Xiaoling Wang, Liang He
24 Apr 2025 · AAML
FACT-AUDIT: An Adaptive Multi-Agent Framework for Dynamic Fact-Checking Evaluation of Large Language Models
Hongzhan Lin, Yang Deng, Yuxuan Gu, Wenxuan Zhang, Jing Ma, See-Kiong Ng, Tat-Seng Chua
25 Feb 2025 · LLMAG, KELM, HILM
MindAligner: Explicit Brain Functional Alignment for Cross-Subject Visual Decoding from Limited fMRI Data
Yuqin Dai, Zhouheng Yao, Chunfeng Song, Qihao Zheng, Weijian Mai, Kunyu Peng, Shuai Lu, Wanli Ouyang, Jian Yang, Jiamin Wu
07 Feb 2025
Fake News Detection After LLM Laundering: Measurement and Explanation
Rupak Kumar Das, Jonathan Dodge
29 Jan 2025
Mitigating GenAI-powered Evidence Pollution for Out-of-Context Multimodal Misinformation Detection
Zehong Yan, Peng Qi, W. Hsu, M. Lee
24 Jan 2025
Surveying Attitudinal Alignment Between Large Language Models Vs. Humans Towards 17 Sustainable Development Goals
Qingyang Wu, Ying Xu, Tingsong Xiao, Yunze Xiao, Yitong Li, ..., Yichi Zhang, Shanghai Zhong, Yuwei Zhang, Wei Lu, Yifan Yang
17 Jan 2025
DEFAME: Dynamic Evidence-based FAct-checking with Multimodal Experts
Tobias Braun, Mark Rothermel, Marcus Rohrbach, Anna Rohrbach
13 Dec 2024
Online Detecting LLM-Generated Texts via Sequential Hypothesis Testing by Betting
Can Chen, Jun-Kun Wang
29 Oct 2024 · DeLMO
Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering
Yu Zhao, Alessio Devoto, Giwon Hong, Xiaotang Du, Aryo Pradipta Gema, Hongru Wang, Xuanli He, Kam-Fai Wong, Pasquale Minervini
21 Oct 2024 · KELM, LLMSV
Analysing the Residual Stream of Language Models Under Knowledge Conflicts
Yu Zhao, Xiaotang Du, Giwon Hong, Aryo Pradipta Gema, Alessio Devoto, Hongru Wang, Xuanli He, Kam-Fai Wong, Pasquale Minervini
21 Oct 2024 · KELM
Unveiling Large Language Models Generated Texts: A Multi-Level Fine-Grained Detection Framework
Zhen Tao, Zhiyu Li, Runyu Chen, Dinghao Xi, Wei Xu
18 Oct 2024 · DeLMO
Recent Advances in Attack and Defense Approaches of Large Language Models
Jing Cui, Yishi Xu, Zhewei Huang, Shuchang Zhou, Jianbin Jiao, Junge Zhang
05 Sep 2024 · PILM, AAML
Detecting AI-Generated Text: Factors Influencing Detectability with Current Methods
Kathleen C. Fraser, Hillary Dawkins, S. Kiritchenko
21 Jun 2024 · DeLMO
MMFakeBench: A Mixed-Source Multimodal Misinformation Detection Benchmark for LVLMs
Xuannan Liu, Zekun Li, Peipei Li, Shuhan Xia, Xing Cui, Linzhi Huang, Huaibo Huang, Weihong Deng, Zhaofeng He
13 Jun 2024
Unlearning Climate Misinformation in Large Language Models
Michael Fore, Simranjit Singh, Chaehong Lee, Amritanshu Pandey, Antonios Anastasopoulos, Dimitrios Stamoulis
29 May 2024 · MU
User-Friendly Customized Generation with Multi-Modal Prompts
Linhao Zhong, Yan Hong, Wentao Chen, Binglin Zhou, Yiyi Zhang, Jianfu Zhang, Liqing Zhang
26 May 2024 · DiffM
Navigating LLM Ethics: Advancements, Challenges, and Future Directions
Junfeng Jiao, S. Afroogh, Yiming Xu, Connor Phillips
14 May 2024 · AILaw
"I'm categorizing LLM as a productivity tool": Examining ethics of LLM use in HCI research practices
Shivani Kapania, Ruiyi Wang, Toby Jia-Jun Li, Tianshi Li, Hong Shen
28 Mar 2024
Hallucination Detection in Foundation Models for Decision-Making: A Flexible Definition and Review of the State of the Art
Neeloy Chakraborty, Melkior Ornik, Katherine Driggs-Campbell
25 Mar 2024 · LRM
EasyJailbreak: A Unified Framework for Jailbreaking Large Language Models
Weikang Zhou, Xiao Wang, Limao Xiong, Han Xia, Yingshuang Gu, ..., Lijun Li, Jing Shao, Tao Gui, Qi Zhang, Xuanjing Huang
18 Mar 2024
Retrieval-Augmented Generation for AI-Generated Content: A Survey
Penghao Zhao, Hailin Zhang, Qinhan Yu, Zhengren Wang, Yunteng Geng, Fangcheng Fu, Ling Yang, Wentao Zhang, Jie Jiang, Bin Cui
29 Feb 2024 · 3DV
TELEClass: Taxonomy Enrichment and LLM-Enhanced Hierarchical Text Classification with Minimal Supervision
Yunyi Zhang, Ruozhen Yang, Xueqiang Xu, Rui Li, Jinfeng Xiao, Jiaming Shen, Jiawei Han
29 Feb 2024
Large Language Models are Vulnerable to Bait-and-Switch Attacks for Generating Harmful Content
Federico Bianchi, James Y. Zou
21 Feb 2024
Detecting Multimedia Generated by Large AI Models: A Survey
Li Lin, Neeraj Gupta, Yue Zhang, Hainan Ren, Chun-Hao Liu, Feng Ding, Xin Eric Wang, X. Li, Luisa Verdoliva, Shu Hu
22 Jan 2024
LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI's ChatGPT Plugins
Umar Iqbal, Tadayoshi Kohno, Franziska Roesner
19 Sep 2023 · ELM, SILM
GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts
Jiahao Yu, Xingwei Lin, Zheng Yu, Xinyu Xing
19 Sep 2023 · SILM
Can Large Language Models Understand Real-World Complex Instructions?
Qi He, Jie Zeng, Wenhao Huang, Lina Chen, Jin Xiao, ..., Shisong Chen, Yikai Zhang, Zhouhong Gu, Jiaqing Liang, Yanghua Xiao
17 Sep 2023 · ALM, LRM, ELM
On the Risk of Misinformation Pollution with Large Language Models
Yikang Pan, Liangming Pan, Wenhu Chen, Preslav Nakov, Min-Yen Kan, W. Wang
23 May 2023 · DeLMO
How Language Model Hallucinations Can Snowball
Muru Zhang, Ofir Press, William Merrill, Alisa Liu, Noah A. Smith
22 May 2023 · HILM, LRM
Watermarking Text Generated by Black-Box Language Models
Xi Yang, Kejiang Chen, Weiming Zhang, Chang-rui Liu, Yuang Qi, Jie Zhang, Han Fang, Neng H. Yu
14 May 2023 · WaLM
Robust Multi-bit Natural Language Watermarking through Invariant Features
Kiyoon Yoo, Wonhyuk Ahn, Jiho Jang, Nojun Kwak
03 May 2023 · WaLM
Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond
Jingfeng Yang, Hongye Jin, Ruixiang Tang, Xiaotian Han, Qizhang Feng, Haoming Jiang, Bing Yin, Xia Hu
26 Apr 2023 · LM&MA
AI, write an essay for me: A large-scale comparison of human-written versus ChatGPT-generated essays
Steffen Herbold, Annette Hautli-Janisz, Ute Heuer, Zlata Kikteva, Alexander Trautsch
24 Apr 2023 · DeLMO
CHEAT: A Large-scale Dataset for Detecting ChatGPT-writtEn AbsTracts
Peipeng Yu, Jiahan Chen, Xuan Feng, Zhihua Xia
24 Apr 2023
Sparks of Artificial General Intelligence: Early experiments with GPT-4
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, J. Gehrke, Eric Horvitz, ..., Scott M. Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, Yi Zhang
22 Mar 2023 · ELM, AI4MH, AI4CE, ALM
SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models
Potsawee Manakul, Adian Liusie, Mark J. F. Gales
15 Mar 2023 · HILM, LRM
Language Generation Models Can Cause Harm: So What Can We Do About It? An Actionable Survey
Sachin Kumar, Vidhisha Balachandran, Lucille Njoo, Antonios Anastasopoulos, Yulia Tsvetkov
14 Oct 2022 · ELM
Adversarial Contrastive Learning for Evidence-aware Fake News Detection with Graph Neural Networks
Jun Wu, Weizhi Xu, Qiang Liu, Shu Wu, Liang Wang
11 Oct 2022 · GNN
GLM-130B: An Open Bilingual Pre-trained Model
Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, ..., Jidong Zhai, Wenguang Chen, Peng-Zhen Zhang, Yuxiao Dong, Jie Tang
05 Oct 2022 · BDL, LRM
On the Impossible Safety of Large AI Models
El-Mahdi El-Mhamdi, Sadegh Farhadkhani, R. Guerraoui, Nirupam Gupta, L. Hoang, Rafael Pinot, Sébastien Rouault, John Stephan
30 Sep 2022
Improving alignment of dialogue agents via targeted human judgements
Amelia Glaese, Nat McAleese, Maja Trębacz, John Aslanides, Vlad Firoiu, ..., John F. J. Mellor, Demis Hassabis, Koray Kavukcuoglu, Lisa Anne Hendricks, G. Irving
28 Sep 2022 · ALM, AAML
Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned
Deep Ganguli, Liane Lovitt, John Kernion, Amanda Askell, Yuntao Bai, ..., Nicholas Joseph, Sam McCandlish, C. Olah, Jared Kaplan, Jack Clark
23 Aug 2022
Large Language Models are Zero-Shot Reasoners
Takeshi Kojima, S. Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa
24 May 2022 · ReLM, LRM
Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
04 Mar 2022 · OSLM, ALM
Multitask Prompted Training Enables Zero-Shot Task Generalization
Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, ..., T. Bers, Stella Biderman, Leo Gao, Thomas Wolf, Alexander M. Rush
15 Oct 2021 · LRM
Artificial Text Detection via Examining the Topology of Attention Maps
Laida Kushnareva, D. Cherniavskii, Vladislav Mikhailov, Ekaterina Artemova, S. Barannikov, A. Bernstein, Irina Piontkovskaya, D. Piontkovski, Evgeny Burnaev
10 Sep 2021
A Survey on Stance Detection for Mis- and Disinformation Identification
Momchil Hardalov, Arnav Arora, Preslav Nakov, Isabelle Augenstein
27 Feb 2021