ResearchTrend.AI

arXiv:2407.08488
Lynx: An Open Source Hallucination Evaluation Model

11 July 2024
Selvan Sunitha Ravi, B. Mielczarek, Anand Kannappan, Douwe Kiela, Rebecca Qian
Tags: VLM · RALM · HILM

Papers citing "Lynx: An Open Source Hallucination Evaluation Model"

5 / 5 papers shown
Can LLMs Be Trusted for Evaluating RAG Systems? A Survey of Methods and Datasets
Lorenz Brehme, Thomas Ströhle, Ruth Breu
40 · 0 · 0
28 Apr 2025
JudgeLM: Fine-tuned Large Language Models are Scalable Judges
Lianghui Zhu, Xinggang Wang, Xinlong Wang
Tags: ELM · ALM
50 · 48 · 0
26 Oct 2023
Hallucinated but Factual! Inspecting the Factuality of Hallucinations in Abstractive Summarization
Mengyao Cao, Yue Dong, Jackie C.K. Cheung
Tags: HILM
156 · 116 · 0
30 Aug 2021
PubMedQA: A Dataset for Biomedical Research Question Answering
Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William W. Cohen, Xinghua Lu
180 · 554 · 0
13 Sep 2019
Language Models as Knowledge Bases?
Fabio Petroni, Tim Rocktäschel, Patrick Lewis, A. Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel
Tags: KELM · AI4MH
371 · 2,216 · 0
03 Sep 2019