ResearchTrend.AI

SLM Meets LLM: Balancing Latency, Interpretability and Consistency in Hallucination Detection

22 August 2024
Mengya Hu, Rui Xu, Deren Lei, Yaxi Li, Mingyu Wang, Emily Ching, Eslam Kamal, Alex Deng

Papers citing "SLM Meets LLM: Balancing Latency, Interpretability and Consistency in Hallucination Detection"

3 / 3 papers shown
SelectLLM: Query-Aware Efficient Selection Algorithm for Large Language Models
Kaushal Kumar Maurya, KV Aditya Srivatsa, Ekaterina Kochmar
16 Aug 2024
Hallucinated but Factual! Inspecting the Factuality of Hallucinations in Abstractive Summarization
Mengyao Cao, Yue Dong, Jackie C.K. Cheung
HILM
30 Aug 2021
Understanding Factuality in Abstractive Summarization with FRANK: A Benchmark for Factuality Metrics
Artidoro Pagnoni, Vidhisha Balachandran, Yulia Tsvetkov
HILM
27 Apr 2021