ResearchTrend.AI
LM vs LM: Detecting Factual Errors via Cross Examination

Roi Cohen, May Hamri, Mor Geva, Amir Globerson. 22 May 2023. [HILM]
Papers citing "LM vs LM: Detecting Factual Errors via Cross Examination"

50 / 100 papers shown
  • Large Language Models for Data Annotation: A Survey. Zhen Tan, Dawei Li, Song Wang, Alimohammad Beigi, Bohan Jiang, Amrita Bhattacharjee, Mansooreh Karami, Jundong Li, Lu Cheng, Huan Liu. 21 Feb 2024. [SyDa]
  • TrustScore: Reference-Free Evaluation of LLM Response Trustworthiness. Danna Zheng, Danyang Liu, Mirella Lapata, Jeff Z. Pan. 19 Feb 2024. [HILM]
  • MARS: Meaning-Aware Response Scoring for Uncertainty Estimation in Generative LLMs. Yavuz Faruk Bakman, D. Yaldiz, Baturalp Buyukates, Chenyang Tao, Dimitrios Dimitriadis, A. Avestimehr. 19 Feb 2024.
  • Logical Closed Loop: Uncovering Object Hallucinations in Large Vision-Language Models. Jun Wu, Qiang Liu, Ding Wang, Jinghao Zhang, Shu Wu, Liang Wang, Tien-Ping Tan. 18 Feb 2024. [LRM]
  • Retrieve Only When It Needs: Adaptive Retrieval Augmentation for Hallucination Mitigation in Large Language Models. Hanxing Ding, Liang Pang, Zihao Wei, Huawei Shen, Xueqi Cheng. 16 Feb 2024. [HILM, RALM]
  • On the Self-Verification Limitations of Large Language Models on Reasoning and Planning Tasks. Kaya Stechly, Karthik Valmeekam, Subbarao Kambhampati. 12 Feb 2024. [ReLM, LRM]
  • INSIDE: LLMs' Internal States Retain the Power of Hallucination Detection. Chao Chen, Kai-Chun Liu, Ze Chen, Yi Gu, Yue-bo Wu, Mingyuan Tao, Zhihang Fu, Jieping Ye. 06 Feb 2024. [HILM]
  • Building Guardrails for Large Language Models. Yizhen Dong, Ronghui Mu, Gao Jin, Yi Qi, Jinwei Hu, Xingyu Zhao, Jie Meng, Wenjie Ruan, Xiaowei Huang. 02 Feb 2024. [OffRL]
  • LLM-based NLG Evaluation: Current Status and Challenges. Mingqi Gao, Xinyu Hu, Jie Ruan, Xiao Pu, Xiaojun Wan. 02 Feb 2024. [ELM, LM&MA]
  • Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity. Claudio Novelli, F. Casolari, Philipp Hacker, Giorgio Spedicato, Luciano Floridi. 14 Jan 2024. [AILaw, SILM]
  • Risk Taxonomy, Mitigation, and Assessment Benchmarks of Large Language Model Systems. Tianyu Cui, Yanling Wang, Chuanpu Fu, Yong Xiao, Sijia Li, ..., Junwu Xiong, Xinyu Kong, Zujie Wen, Ke Xu, Qi Li. 11 Jan 2024.
  • Self-Contrast: Better Reflection Through Inconsistent Solving Perspectives. Wenqi Zhang, Yongliang Shen, Linjuan Wu, Qiuying Peng, Jun Wang, Y. Zhuang, Weiming Lu. 04 Jan 2024. [LRM, LLMAG]
  • LLM Harmony: Multi-Agent Communication for Problem Solving. Sumedh Rasal. 02 Jan 2024. [LLMAG]
  • Experiential Co-Learning of Software-Developing Agents. Cheng Qian, Yufan Dang, Jiahao Li, Wei Liu, Zihao Xie, ..., Cheng Yang, Xin Cong, Xiaoyin Che, Zhiyuan Liu, Maosong Sun. 28 Dec 2023. [LLMAG]
  • Beyond Isolation: Multi-Agent Synergy for Improving Knowledge Graph Construction. Hongbin Ye, Honghao Gui, Aijia Zhang, Tong Liu, Wei Hua, Weiqiang Jia. 05 Dec 2023. [LLMAG]
  • ChatGPT's One-year Anniversary: Are Open-Source Large Language Models Catching up? Hailin Chen, Fangkai Jiao, Xingxuan Li, Chengwei Qin, Mathieu Ravaut, Ruochen Zhao, Caiming Xiong, Shafiq R. Joty. 28 Nov 2023. [ELM, CLL, AI4MH, LRM, ALM]
  • R-Tuning: Instructing Large Language Models to Say `I Don't Know'. Hanning Zhang, Shizhe Diao, Yong Lin, Yi Ren Fung, Qing Lian, Xingyao Wang, Yangyi Chen, Heng Ji, Tong Zhang. 16 Nov 2023. [UQLM]
  • Digital Socrates: Evaluating LLMs through Explanation Critiques. Yuling Gu, Oyvind Tafjord, Peter Clark. 16 Nov 2023. [ELM, LRM]
  • Effective Large Language Model Adaptation for Improved Grounding and Citation Generation. Xi Ye, Ruoxi Sun, Sercan Ö. Arik, Tomas Pfister. 16 Nov 2023. [HILM]
  • Ever: Mitigating Hallucination in Large Language Models through Real-Time Verification and Rectification. Haoqiang Kang, Juntong Ni, Huaxiu Yao. 15 Nov 2023. [HILM, LRM]
  • A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions. Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, ..., Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, Ting Liu. 09 Nov 2023. [LRM, HILM]
  • Knowing What LLMs DO NOT Know: A Simple Yet Effective Self-Detection Method. Yukun Zhao, Lingyong Yan, Weiwei Sun, Guoliang Xing, Chong Meng, Shuaiqiang Wang, Zhicong Cheng, Zhaochun Ren, Dawei Yin. 27 Oct 2023.
  • Can Large Language Models Really Improve by Self-critiquing Their Own Plans? Karthik Valmeekam, Matthew Marquez, Subbarao Kambhampati. 12 Oct 2023. [LRM]
  • Survey on Factuality in Large Language Models: Knowledge, Retrieval and Domain-Specificity. Cunxiang Wang, Xiaoze Liu, Yuanhao Yue, Xiangru Tang, Tianhang Zhang, ..., Linyi Yang, Jindong Wang, Xing Xie, Zheng-Wei Zhang, Yue Zhang. 11 Oct 2023. [HILM, KELM]
  • A New Benchmark and Reverse Validation Method for Passage-level Hallucination Detection. Shiping Yang, Renliang Sun, Xiao-Yi Wan. 10 Oct 2023. [HILM]
  • Factuality Challenges in the Era of Large Language Models. Isabelle Augenstein, Timothy Baldwin, Meeyoung Cha, Tanmoy Chakraborty, Giovanni Luca Ciampaglia, ..., Rubén Míguez, Preslav Nakov, Dietram A. Scheufele, Shivam Sharma, Giovanni Zagni. 08 Oct 2023. [HILM]
  • Benchmarking Cognitive Biases in Large Language Models as Evaluators. Ryan Koo, Minhwa Lee, Vipul Raheja, Jong Inn Park, Zae Myung Kim, Dongyeop Kang. 29 Sep 2023. [ALM]
  • Attention Satisfies: A Constraint-Satisfaction Lens on Factual Errors of Language Models. Mert Yuksekgonul, Varun Chandrasekaran, Erik Jones, Suriya Gunasekar, Ranjita Naik, Hamid Palangi, Ece Kamar, Besmira Nushi. 26 Sep 2023. [HILM]
  • How to Catch an AI Liar: Lie Detection in Black-Box LLMs by Asking Unrelated Questions. Lorenzo Pacchiardi, A. J. Chan, Sören Mindermann, Ilan Moscovitz, Alexa Y. Pan, Y. Gal, Owain Evans, J. Brauner. 26 Sep 2023. [LLMAG, HILM]
  • Chain-of-Verification Reduces Hallucination in Large Language Models. S. Dhuliawala, M. Komeili, Jing Xu, Roberta Raileanu, Xian Li, Asli Celikyilmaz, Jason Weston. 20 Sep 2023. [LRM, HILM]
  • SCREWS: A Modular Framework for Reasoning with Revisions. K. Shridhar, Harsh Jhamtani, Hao Fang, Benjamin Van Durme, Jason Eisner, Patrick Xia. 20 Sep 2023. [KELM, LRM]
  • Cognitive Mirage: A Review of Hallucinations in Large Language Models. Hongbin Ye, Tong Liu, Aijia Zhang, Wei Hua, Weiqiang Jia. 13 Sep 2023. [HILM]
  • Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models. Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, ..., Longyue Wang, A. Luu, Wei Bi, Freda Shi, Shuming Shi. 03 Sep 2023. [RALM, LRM, HILM]
  • Automatically Correcting Large Language Models: Surveying the landscape of diverse self-correction strategies. Liangming Pan, Michael Stephen Saxon, Wenda Xu, Deepak Nathani, Xinyi Wang, William Yang Wang. 06 Aug 2023. [KELM, LRM]
  • ChatDev: Communicative Agents for Software Development. Cheng Qian, Wei Liu, Hongzhang Liu, Nuo Chen, Yufan Dang, ..., Xin Cong, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun. 16 Jul 2023. [LLMAG]
  • Evaluating Superhuman Models with Consistency Checks. Lukas Fluri, Daniel Paleka, Florian Tramèr. 16 Jun 2023. [ELM]
  • Self-contradictory Hallucinations of Large Language Models: Evaluation, Detection and Mitigation. Niels Mündler, Jingxuan He, Slobodan Jenko, Martin Vechev. 25 May 2023. [HILM]
  • Crawling the Internal Knowledge-Base of Language Models. Roi Cohen, Mor Geva, Jonathan Berant, Amir Globerson. 30 Jan 2023.
  • Training Language Models with Memory Augmentation. Zexuan Zhong, Tao Lei, Danqi Chen. 25 May 2022. [RALM]
  • Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations. Jaehun Jung, Lianhui Qin, Sean Welleck, Faeze Brahman, Chandra Bhagavatula, Ronan Le Bras, Yejin Choi. 24 May 2022. [ReLM, LRM]
  • Self-Consistency Improves Chain of Thought Reasoning in Language Models. Xuezhi Wang, Jason W. Wei, Dale Schuurmans, Quoc Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou. 21 Mar 2022. [ReLM, BDL, LRM, AI4CE]
  • Training language models to follow instructions with human feedback. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe. 04 Mar 2022. [OSLM, ALM]
  • Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. Jason W. Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, F. Xia, Ed H. Chi, Quoc Le, Denny Zhou. 28 Jan 2022. [LM&Ro, LRM, AI4CE, ReLM]
  • Unsolved Problems in ML Safety. Dan Hendrycks, Nicholas Carlini, John Schulman, Jacob Steinhardt. 28 Sep 2021.
  • Evaluating Attribution in Dialogue Systems: The BEGIN Benchmark. Nouha Dziri, Hannah Rashkin, Tal Linzen, David Reitter. 30 Apr 2021. [ALM]
  • The Power of Scale for Parameter-Efficient Prompt Tuning. Brian Lester, Rami Al-Rfou, Noah Constant. 18 Apr 2021. [VPVLM]
  • Measuring and Improving Consistency in Pretrained Language Models. Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard H. Hovy, Hinrich Schütze, Yoav Goldberg. 01 Feb 2021. [HILM]
  • Calibration of Pre-trained Transformers. Shrey Desai, Greg Durrett. 17 Mar 2020. [UQLM]
  • Language Models as Knowledge Bases? Fabio Petroni, Tim Rocktaschel, Patrick Lewis, A. Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel. 03 Sep 2019. [KELM, AI4MH]
  • AI safety via debate. G. Irving, Paul Christiano, Dario Amodei. 02 May 2018.