Improving Factuality in Large Language Models via Decoding-Time Hallucinatory and Truthful Comparators
arXiv:2408.12325

28 January 2025
Dingkang Yang, Dongling Xiao, Jinjie Wei, Mingcheng Li, Zhaoyu Chen, Ke Li, L. Zhang
    HILM
ArXiv · PDF · HTML

Papers citing "Improving Factuality in Large Language Models via Decoding-Time Hallucinatory and Truthful Comparators"

4 / 4 papers shown

The Illusionist's Prompt: Exposing the Factual Vulnerabilities of Large Language Models with Linguistic Nuances
Yining Wang, Y. Wang, Xi Li, Mi Zhang, Geng Hong, Min Yang
AAML, HILM
01 Apr 2025

MedAide: Towards an Omni Medical Aide via Specialized LLM-based Multi-Agent Collaboration
Jinjie Wei, Dingkang Yang, Yanshu Li, Qingyao Xu, Zhaoyu Chen, M. Li, Yue Jiang, Xiaolu Hou, Lihua Zhang
16 Oct 2024

Chain-of-Thought Reasoning Without Prompting
Xuezhi Wang, Denny Zhou
ReLM, LRM
15 Feb 2024

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM, ALM
04 Mar 2022