ResearchTrend.AI
Better Alignment with Instruction Back-and-Forth Translation

8 August 2024
Thao Nguyen, Jeffrey Li, Sewoong Oh, Ludwig Schmidt, Jason Weston, Luke Zettlemoyer, Xian Li

Papers citing "Better Alignment with Instruction Back-and-Forth Translation" (2 of 2 papers shown)
1. Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning
   Hao Zhao, Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion
   07 Feb 2024

2. Training language models to follow instructions with human feedback
   Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
   04 Mar 2022