Researching Alignment Research: Unsupervised Analysis

6 June 2022
Jan H. Kirchner, Logan Smith, Jacques Thibodeau, Kyle McDonell, Laria Reynolds

Papers citing "Researching Alignment Research: Unsupervised Analysis"

2 papers shown

Alignment with human representations supports robust few-shot learning
Ilia Sucholutsky, Thomas L. Griffiths
27 Jan 2023

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
04 Mar 2022