What's Wrong? Refining Meeting Summaries with LLM Feedback

16 July 2024
Frederic Kirstein, Terry Ruas, Bela Gipp

Papers citing "What's Wrong? Refining Meeting Summaries with LLM Feedback"

3 / 3 papers shown

N-Critics: Self-Refinement of Large Language Models with Ensemble of Critics
Sajad Mousavi, Ricardo Luna Gutierrez, Desik Rengarajan, Vineet Gundecha, Ashwin Ramesh Babu, Avisek Naug, Antonio Guillen-Perez, S. Sarkar
LRM, HILM, KELM · 28 Oct 2023

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM, ALM · 04 Mar 2022

A Context-Enhanced De-identification System
Kahyun Lee, M. Kayaalp, Sam Henry, Özlem Uzuner
17 Feb 2021