ResearchTrend.AI


arXiv:2206.07137
Prioritized Training on Points that are Learnable, Worth Learning, and Not Yet Learnt

14 June 2022
Sören Mindermann, J. Brauner, Muhammed Razzak, Mrinank Sharma, Andreas Kirsch, Winnie Xu, Benedikt Höltgen, Aidan N. Gomez, Adrien Morisot, Sebastian Farquhar, Y. Gal

Papers citing "Prioritized Training on Points that are Learnable, Worth Learning, and Not Yet Learnt"

12 / 112 papers shown
The Impact of Data Corruption on Named Entity Recognition for Low-resourced Languages
Manuel A. Fokam, Michael Beukman
19 · 0 · 0 · 09 Aug 2022

Unifying Approaches in Active Learning and Active Sampling via Fisher Information and Information-Theoretic Quantities
Andreas Kirsch, Y. Gal
FedML
22 · 21 · 0 · 01 Aug 2022

Marginal and Joint Cross-Entropies & Predictives for Online Bayesian Inference, Active Learning, and Active Sampling
Andreas Kirsch, Jannik Kossen, Y. Gal
UQCV, BDL
34 · 3 · 0 · 18 May 2022

Representative Subset Selection for Efficient Fine-Tuning in Self-Supervised Speech Recognition
Abdul Hameed Azeemi, I. Qazi, Agha Ali Raza
13 · 0 · 0 · 18 Mar 2022

Improve Deep Image Inpainting by Emphasizing the Complexity of Missing Regions
Yufeng Wang, Dan Li, Cong Xu, Min Yang
14 · 0 · 0 · 13 Feb 2022

Prioritized training on points that are learnable, worth learning, and not yet learned (workshop version)
Sören Mindermann, Muhammed Razzak, Winnie Xu, Andreas Kirsch, Mrinank Sharma, Adrien Morisot, Aidan N. Gomez, Sebastian Farquhar, J. Brauner, Y. Gal
14 · 6 · 0 · 06 Jul 2021

Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism
M. Shoeybi, M. Patwary, Raul Puri, P. LeGresley, Jared Casper, Bryan Catanzaro
MoE
243 · 1,815 · 0 · 17 Sep 2019

A Survey on Bias and Fairness in Machine Learning
Ninareh Mehrabi, Fred Morstatter, N. Saxena, Kristina Lerman, Aram Galstyan
SyDa, FaML
294 · 4,187 · 0 · 23 Aug 2019

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
ELM
294 · 6,943 · 0 · 20 Apr 2018

Large scale distributed neural network training through online distillation
Rohan Anil, Gabriel Pereyra, Alexandre Passos, Róbert Ormándi, George E. Dahl, Geoffrey E. Hinton
FedML
267 · 404 · 0 · 09 Apr 2018

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
ODL
273 · 2,878 · 0 · 15 Sep 2016

Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
Y. Gal, Zoubin Ghahramani
UQCV, BDL
247 · 9,109 · 0 · 06 Jun 2015