Evaluate What You Can't Evaluate: Unassessable Quality for Generated Response

24 May 2023
Yongkang Liu, Shi Feng, Daling Wang, Yifei Zhang, Hinrich Schütze
Topics: ALM, ELM

Papers citing "Evaluate What You Can't Evaluate: Unassessable Quality for Generated Response"

2 of 2 citing papers shown.

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
Topics: OSLM, ALM
04 Mar 2022

Evaluating Attribution in Dialogue Systems: The BEGIN Benchmark
Nouha Dziri, Hannah Rashkin, Tal Linzen, David Reitter
Topics: ALM
30 Apr 2021