Biased Importance Sampling for Deep Neural Network Training
Angelos Katharopoulos, François Fleuret
arXiv:1706.00043, 31 May 2017

Papers citing "Biased Importance Sampling for Deep Neural Network Training" (17 / 17 papers shown)
 1. Randomized Pairwise Learning with Adaptive Sampling: A PAC-Bayes Analysis
    Sijia Zhou, Yunwen Lei, Ata Kabán. 03 Apr 2025.
 2. Multiple Importance Sampling for Stochastic Gradient Estimation
    Corentin Salaün, Xingchang Huang, Iliyan Georgiev, Niloy J. Mitra, Gurprit Singh. 22 Jul 2024.
 3. A Negative Result on Gradient Matching for Selective Backprop
    Lukas Balles, Cédric Archambeau, Giovanni Zappella. 08 Dec 2023.
 4. REDUCR: Robust Data Downsampling Using Class Priority Reweighting
    William Bankes, George Hughes, Ilija Bogunovic, Zi Wang. 01 Dec 2023.
 5. Computing Approximate $\ell_p$ Sensitivities
    Swati Padmanabhan, David P. Woodruff, Qiuyi Zhang. 07 Nov 2023.
 6. On Efficient Training of Large-Scale Deep Learning Models: A Literature Review
    Li Shen, Yan Sun, Zhiyuan Yu, Liang Ding, Xinmei Tian, Dacheng Tao. 07 Apr 2023. [VLM]
 7. Rank-based Decomposable Losses in Machine Learning: A Survey
    Shu Hu, Xin Wang, Siwei Lyu. 18 Jul 2022.
 8. Prioritized Training on Points that are Learnable, Worth Learning, and Not Yet Learnt
    Sören Mindermann, J. Brauner, Muhammed Razzak, Mrinank Sharma, Andreas Kirsch, ..., Benedikt Höltgen, Aidan Gomez, Adrien Morisot, Sebastian Farquhar, Y. Gal. 14 Jun 2022.
 9. Diminishing Empirical Risk Minimization for Unsupervised Anomaly Detection
    Shaoshen Wang, Yanbin Liu, Ling Chen, Chengqi Zhang. 29 May 2022.
10. DELTA: Diverse Client Sampling for Fasting Federated Learning
    Lung-Chuang Wang, Yongxin Guo, Tao R. Lin, Xiaoying Tang. 27 May 2022. [FedML]
11. Observations on K-image Expansion of Image-Mixing Augmentation for Classification
    Joonhyun Jeong, Sungmin Cha, Jongwon Choi, Sangdoo Yun, Taesup Moon, Y. Yoo. 08 Oct 2021. [VLM]
12. Efficient training of physics-informed neural networks via importance sampling
    M. A. Nabian, R. J. Gladstone, Hadi Meidani. 26 Apr 2021. [DiffM, PINN]
13. Adaptive Task Sampling for Meta-Learning
    Chenghao Liu, Zhihao Wang, Doyen Sahoo, Yuan Fang, Kun Zhang, Guosheng Lin. 17 Jul 2020.
14. Adaptive Sampling Distributed Stochastic Variance Reduced Gradient for Heterogeneous Distributed Datasets
    Ilqar Ramazanli, Han Nguyen, Hai Pham, Sashank J. Reddi, Barnabás Póczós. 20 Feb 2020.
15. Rethinking deep active learning: Using unlabeled data at model training
    Oriane Siméoni, Mateusz Budnik, Yannis Avrithis, G. Gravier. 19 Nov 2019. [HAI]
16. Distribution Density, Tails, and Outliers in Machine Learning: Metrics and Applications
    Nicholas Carlini, Ulfar Erlingsson, Nicolas Papernot. 29 Oct 2019. [OOD, OODD]
17. Submodular Batch Selection for Training Deep Neural Networks
    K. J. Joseph, R. VamshiTeja, Krishnakant Singh, V. Balasubramanian. 20 Jun 2019.