ResearchTrend.AI

arXiv:2405.00578
The Real, the Better: Aligning Large Language Models with Online Human Behaviors
1 May 2024
Guanying Jiang, Lingyong Yan, Haibo Shi, Dawei Yin

Papers citing "The Real, the Better: Aligning Large Language Models with Online Human Behaviors"

2 papers shown
Understanding Layer Significance in LLM Alignment
Guangyuan Shi, Zexin Lu, Xiaoyu Dong, Wenlong Zhang, Xuanyu Zhang, Yujie Feng, Xiao-Ming Wu
23 Oct 2024
Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
04 Mar 2022