Prioritized training on points that are learnable, worth learning, and not yet learned (workshop version)

6 July 2021
Sören Mindermann, Muhammed Razzak, Winnie Xu, Andreas Kirsch, Mrinank Sharma, Adrien Morisot, Aidan N. Gomez, Sebastian Farquhar, J. Brauner, Y. Gal

Papers citing "Prioritized training on points that are learnable, worth learning, and not yet learned (workshop version)"

4 of 4 citing papers shown:
Learning functional sections in medical conversations: iterative pseudo-labeling and human-in-the-loop approach
Mengqian Wang, Ilya Valmianski, X. Amatriain, Anitha Kannan
06 Oct 2022 · 2 citations

Test Distribution-Aware Active Learning: A Principled Approach Against Distribution Shift and Outliers
Andreas Kirsch, Tom Rainforth, Y. Gal
OOD · TTA
22 Jun 2021 · 22 citations

Scaling Laws for Neural Language Models
Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei
23 Jan 2020 · 4,453 citations

Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
Y. Gal, Zoubin Ghahramani
UQCV · BDL
06 Jun 2015 · 9,109 citations